Wow, that is a great side project, and a great README to boot. I've been meaning to work through Nand to Tetris after playing around some with Ben Eater's 6502 Computer (https://eater.net/)
A few nand2tetris fanatics have actually done this! And by a few, I mean quite a lot of people. Here's one such nand2tetris hardware project: https://gitlab.com/x653/nand2tetris-fpga/
I kind of want something midway between the FPGA version and the all-transistor version, something that just uses 7400-series chips (the standard 7400 packs four NAND gates into 14 pins; presumably there's a larger package with six gates instead of four). Heck, I think even something that goes ahead and uses the full panoply of basic logic chips available could be kind of cool to see.
It's been a few years since I studied it (I even built the clock module, registers, and the ALU), but from what I remember the biggest point of departure from what you want is that the control logic (from instruction decoding to deciding which sub-units to activate for each instruction) is done with an EEPROM instead of individual logic gates, as described here: https://eater.net/8bit/control
Doing a design for this (specifically, design a microcoded, pipelined RISC processor, from the bottom up, with nothing but NAND gates) was the main problem on the Computer Hardware quals exam at UC Berkeley in the early 1990s. We didn't have to physically build it, though, just produce the detailed design on paper.
This is amazing work. I wanted to build something similar (virtual) while I was taking the Nand2Tetris course. I'm so impressed that you actually did it. You must have a really good understanding of how computers work now.
And I was just thinking about the same thing this morning, using SVG to model the basic components. And lo and behold, somebody has done an order-of-magnitude more amazing job than what I was imagining!
I have just embarked on this journey myself, recently. It is motivating to see others' work. Especially when they're not quite the same as what I'm doing.
How do we know that it will converge to a single, constant period of oscillation? Could you have a few different-sized square waves continue to cycle through the circuit?
(I've never built or simulated that, I'm just trying to imagine what could happen!)
Okay, so I was wrong. You can get multiple harmonics if you use a long enough chain of inverters. To simplify the paper mentioned in a sibling comment: in a long chain, at any given point in time, a variable number of inverter pairs drop in and out of the circuit, changing the total propagation delay and giving rise to multiple harmonics. You'll have to read the paper for details.
First look up the Barkhausen criterion, then read the following. For ring oscillators, gain will be greater than unity only for those waves whose period matches the loop's gate delay. Only one such wave exists, since the gate delay is a fixed number.
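To make that concrete, here's a toy discrete-time simulation I sketched in Python (the unit-delay model and all names are my own assumptions, nothing from the thread). Every inverter in an odd-length ring gets the same one-step gate delay, and from any non-constant starting state the waveform at every gate repeats with exactly one period, 2N gate delays (the model is too crude to show narrow pulses dying out, which is what real gates' limited bandwidth does):

    # Toy model of a ring oscillator: N inverters in a ring, each with
    # an identical one-step propagation delay.
    N = 5                        # odd number of inverters
    state = [0] * N
    state[0] = 1                 # any non-constant starting state will do

    trace = []
    for t in range(40 * N):
        # Each output becomes the inverse of the previous gate's old output.
        state = [1 - state[i - 1] for i in range(N)]
        trace.append(state[0])

    # Measure the period at gate 0: a single mode of 2*N time steps.
    tail = trace[-4 * N:]
    period = next(p for p in range(1, 2 * N + 1)
                  if all(tail[i] == tail[i + p] for i in range(len(tail) - p)))
    print("observed period:", period, "expected:", 2 * N)

With identical delays there is only one sustainable mode; per the paper discussed above, you apparently need a long chain whose effective delay varies over time before multiple harmonics can coexist.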
Awesome work! Bookmarked for in-depth perusal later. As a fan of NAND-to-Tetris who never made it all the way through, I look forward to poking around in your project.
The supercomputers (all?) used wire wrap rather than PCBs. I heard a story once about potential customers coming in for a demo of a supercomputer; Cray realized during the demo that there was a bug in the hardware, and while the customers were at lunch, he rewired the machine to fix it.
Right. Seymour Cray said that the two big problems in supercomputing were "the thickness of the mat" (of wires on the backplane) and getting rid of the heat.
I wonder if any of his machines ever caught fire, causing a localized, out-of-season aurora borealis. On the other hand, Cray machines are never mentioned in the "Halt and Catch Fire" machine-code instruction humor.
That's because in NMOS logic (maybe there's a symmetric reason in TTL, but I don't know for sure) you can implement a NOR with two parallel transistors between a pullup and ground, producing a zero output if either input is high. The symmetric NAND circuit requires two transistors in series, and therefore switches more slowly.
Conversely, PMOS logic has the exact opposite problem: NAND gates are parallel, while NOR gates are series.
Meanwhile, CMOS logic inherits both of these problems (in return for better performance in general), but due to semiconductor chemistry, PMOS transistors are weaker than NMOS ones (which was why NMOS was more widely used than PMOS in the first place), so NAND gates have two strong transistors in series and two weak transistors in parallel, which gives a better worst-case switching speed than a NOR gate (where the weak transistors are in series, and so switch noticeably slower). This is where "NANDs are used for everything" comes from.
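For anyone who wants the series/parallel asymmetry spelled out, here's a switch-level sketch (my own toy model, treating each transistor as an ideal switch; the function names are made up):

    # CMOS NAND: NMOS pull-down in series, PMOS pull-up in parallel.
    # CMOS NOR:  NMOS pull-down in parallel, PMOS pull-up in series.
    def cmos_nand(a, b):
        pull_down = a and b               # two strong NMOS in series
        pull_up = (not a) or (not b)      # two weak PMOS in parallel
        assert pull_down != pull_up       # exactly one network conducts
        return 0 if pull_down else 1

    def cmos_nor(a, b):
        pull_down = a or b                # two strong NMOS in parallel
        pull_up = (not a) and (not b)     # two weak PMOS in series
        assert pull_down != pull_up
        return 0 if pull_down else 1

    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), cmos_nand(a, b), cmos_nor(a, b))

The slow path in each gate is its series pair: strong NMOS for the NAND, weak PMOS for the NOR, which is exactly the worst-case argument above.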
I thought it was the other way around: nand2tetris used NAND because it was already popular? At least I remember hearing in university that NANDs are used for everything. I can't remember why they're used for everything, though (and why not NOR, for example).
It’s also neat how “ground” has been deepening. It used to mean building a mainframe from source. Then building a compiler. Now building up from logic gates.
How much deeper can you get? Building a mainframe out of Gödel numbers?
One curious idea my friends have entertained is to go one level deeper and emulate, on the web too, the very transistors that make up the NAND gates. It would certainly spell disaster for performance, but it's without a doubt interesting.
If not, I think a NAND gate is made of just two transistors (plus a pull-up), so if you mean emulating how transistors should behave, then I don't think it will affect performance by more than ~50%.
In https://nandgame.com/ (mentioned elsewhere, a game version of NAND to Tetris) you start by making a NAND gate out of relays. The relays are electromechanical components, but you can choose to think of a transistor (within certain "regimes") as being directly electrically equivalent to one. (This simplification isn't appropriate for all engineering tradeoff purposes, although I don't know the details of how it fails or how we can benefit from knowing more about transistors' actual behavior.)
The electromechanical relay is a very simple device to understand, if you're willing to just believe that electromagnets produce magnetism (without understanding why the universe works according to Ampère's law relating electric current and magnetism). It's a coil of wire in which an electric current produces magnetism that physically pulls a switch open or closed.
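Under that idealization, treating the relay (or a transistor in its switching regime) as a controlled switch, the nandgame-style NAND comes out tiny. A sketch, with names of my own invention:

    # Two switches in series connect the output to ground; a pull-up
    # supplies a 1 whenever that path is broken.
    def switch(energized):
        """Idealized relay: conducts iff its coil is energized."""
        return energized == 1

    def nand(a, b):
        path_to_ground = switch(a) and switch(b)   # series switches
        return 0 if path_to_ground else 1          # pull-up otherwise

    assert [nand(a, b) for a in (0, 1) for b in (0, 1)] == [1, 1, 1, 0]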
Cool project. It reminds me of a theoretical issue. As the project page says, this system is clearly Turing-equivalent. Since it runs software, it even implements a _universal_ Turing machine. But the design uses only (synchronous) sequential logic [1], and Wikipedia seems to suggest that automata theory considers sequential logic equivalent only to finite state machines, not Turing machines. Isn't that clearly a major bug in automata theory?
My guess is that automata theory considers it critically important that a "Turing machine" has an infinite tape, while intuitively what seems relevant is that it has something like a tape at all, some sort of random-access memory, even if it is finite. I think such a memory system can't be implemented with classical finite state machines, at least not with comparable time complexity for reads and writes, but can be realized with sequential logic.
Real-world computers are equivalent to linear bounded automata, not true Turing machines, because they have finite memory. This technicality is mostly ignored because a computer with a large finite memory is a decent enough approximation to a Turing machine for practical purposes. But, for example, the halting problem is decidable for linear bounded automata — because there are only finitely many states, every computation must either halt or eventually revisit an earlier state and get stuck in a loop — so in theory it’s an important distinction.
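The "halt or revisit a state" argument is short enough to write down. A toy sketch of the decision procedure (my own code, nothing formal): run the machine, remember every configuration, and a repeat proves an infinite loop.

    def halts(step, initial, halt_states):
        """Decide halting for a machine with finitely many configurations."""
        seen = set()
        state = initial
        while state not in halt_states:
            if state in seen:
                return False         # revisited a configuration: loops forever
            seen.add(state)
            state = step(state)
        return True                  # reached a halting state

    # An 8-state toy machine: this one halts (5 -> 0)...
    print(halts(lambda s: (s * 3 + 1) % 8, initial=5, halt_states={0}))
    # ...and this one cycles 5 -> 7 -> 5 forever.
    print(halts(lambda s: (s * 3) % 8, initial=5, halt_states={0}))

For a real computer, "state" means the entire memory contents, so the `seen` set is astronomically large; that's why the technicality gets ignored in practice.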
It seems you didn't really read my comment, though? I was arguing that the relevant difference between Turing machines and FSMs is the memory system, not the infinite tape. It's interesting that the Wikipedia article on LBAs doesn't tell us whether they are considered equivalent to FSMs. It seems that by standard automata theory, they must be. Which is intuitively not correct, since they are much more similar to Turing machines.
I did read your comment but I don’t really understand what you mean by “the memory system”. A linear bounded automaton is by definition a finite state machine for a given input (i.e. fixed tape size) because the number of possible configurations is finite. A Turing machine’s infinite tape is what stops it being a finite state machine.
Well, I said I meant the tape of a Turing machine (irrespective of its size), or the RAM of a physical computer, and that I suspected such memory differs from other state in that read/write operations have a specifically lower time complexity.
> A Turing machine’s infinite tape is what stops it being a finite state machine.
Well, that's if you accept the usual automata theory definition, which was what I was arguing against. Part of having an "infinite tape" memory is not just that it is infinite, but also that it is a tape. A pushdown automaton also has infinite "memory" (though a stack, not a tape memory with random access), but it is still not equivalent to a Turing machine. Nor is it equivalent to a finite state machine.
Basically, what I suspect is that the type of "memory" system that some automaton allows determines its computational power, not whether the memory or the maximum number of its states is infinite or not.
But you can Google "nand2tetris fpga" for more.
EDIT: there's more information here: https://www.megaprocessor.com/
61 TTL chips are mentioned in this 8-minute overview: https://youtu.be/3zGTsi4AYLw
It displayed on a VGA monitor.
No.
This is a Cray-1 backplane. [1]
[1] https://www.flickr.com/photos/geekmuseum/2926520634/
I fear for the kind of competition my kids will have just to make it to college.
The course walks you through it all, and there is an accompanying book that you do not need to buy to finish the course.
Anyone who wants to do this and can focus on it for enough time can complete it and extend it into whatever shape they like, like this person.
It really is a good course.
Do you know any resources that document the transistor-to-logic-gate translation?
[1] https://en.wikipedia.org/wiki/Sequential_logic
Yet we know this doesn't happen in practice.