26 comments

  • rpmw 9 days ago
    Wow, that is a great side-project, and a great README to boot. I've been meaning to work through Nand to Tetris after playing around some with Ben Eater's 6502 Computer (https://eater.net/)
    • laweijfmvo 9 days ago
      Would it be at all feasible to build a physical NAND-to-tetris computer? Or is it purely a virtual exercise?
      • ArchAndStarch 9 days ago
        A few nand2tetris fanatics have actually done this! And by a few, I mean quite a lot of people. Here's one such hardware project of nand2tetris: https://gitlab.com/x653/nand2tetris-fpga/

        But you can Google "nand2tetris fpga" for more.

      • moefh 9 days ago
        There's one that goes a step beyond that, built out of 40,000 discrete transistors: https://www.youtube.com/watch?v=z71h9XZbAWY

        EDIT: there's more information here: https://www.megaprocessor.com/

        • dhosek 9 days ago
          I kind of want something midway between the FPGA version and the all-transistor version, something that just uses 7400 series chips (or, presumably there’s a 26-pin equivalent with 6 gates instead of three). Heck, I think even something that goes ahead and uses the full panoply of basic logic chips available could be kind of cool to see.
          • moefh 9 days ago
            I think Ben Eater's 8-bit computer is closer to what you want: https://eater.net/8bit/

            It's been a few years since I studied it (I even built the clock module, registers and the ALU), but from what I remember the biggest departure from what you want is that the control logic (from instruction decoding to deciding which sub-units to activate for each instruction) is done with an EEPROM instead of individual logic gates, as described here: https://eater.net/8bit/control
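            That EEPROM trick can be pictured as a plain lookup table keyed by (opcode, micro-step), where each control line is one bit of the stored word. The sketch below uses signal names loosely modeled on Ben Eater's mnemonics, but the ROM contents and the LDA micro-steps are made up for illustration, not his actual microcode:

```python
# Hypothetical sketch of EEPROM-based control logic: instead of wiring
# individual gates, each control line is one bit of a ROM word indexed
# by (opcode, micro-step). Signal names are invented for illustration:
# MAR-in, RAM-out, instruction-in, counter-enable, A-register-in.
MI, RO, II, CE, AI = (1 << i for i in range(5))

CONTROL_ROM = {
    ("LDA", 0): MI | CE,  # program counter -> memory address register
    ("LDA", 1): RO | II,  # RAM -> instruction register (fetch)
    ("LDA", 2): MI,       # operand address -> memory address register
    ("LDA", 3): RO | AI,  # RAM -> A register (execute)
}

def control_word(opcode, step):
    # unlisted combinations mean "no control lines asserted"
    return CONTROL_ROM.get((opcode, step), 0)

word = control_word("LDA", 3)
print(bool(word & RO), bool(word & AI))  # True True
```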

          • cellularmitosis 8 days ago
            Slu4 has a great series of videos about exactly what you are looking for, with his Minimal 64 computer https://youtu.be/FJsnKu20ch8

            61 TTL chips mentioned in this 8 minute overview https://youtu.be/3zGTsi4AYLw

      • dailykoder 9 days ago
        Probably doable, but takes a lot of dedication. Especially debugging such physical endeavors is crazy
      • djmips 8 days ago
        A friend and I made a working one in FPGA for a game jam. He did the hardware and I wrote a game in the Jack language.

        It displayed on a VGA monitor.

  • kristopolous 9 days ago
    I could make a few college classes out of this. Well done material.
    • djmips 8 days ago
      Isn't this just Nand2Tetris? I've done that course and at a glance this looks the same including the Jack language.
  • farhanhubble 9 days ago
    Great work! You have seen levels of abstraction that most programmers won't see in their entire careers.
  • 2OEH8eoCRo0 9 days ago
    Fantastic work. NAND to Tetris helped me land my first job out of college.
    • SilasX 9 days ago
      How did it help?
      • 2OEH8eoCRo0 9 days ago
        Resume padding and conversation starter during interviews. It also filled in some gaps in knowledge.
  • not2b 9 days ago
    Doing a design for this (specifically, design a microcoded, pipelined RISC processor, from the bottom up, with nothing but NAND gates) was the main problem on the Computer Hardware quals exam at UC Berkeley in the early 1990s. We didn't have to physically build it, though, just produce the detailed design on paper.
  • ryeguy_24 9 days ago
    This is amazing work. I wanted to build something similar (virtual) while I was taking the Nand2Tetris course. I'm so impressed that you actually did it. You must have a really good understanding of how computers work now.
    • marai2 9 days ago
      And I was just thinking about the same thing this morning, using SVG to model the basic components. And lo and behold, somebody has done a magnitude more amazing job than what I was imagining!
      • c0wb0yc0d3r 8 days ago
        You still can! :)

        I have just embarked on this journey myself, recently. It is motivating to see others' work. Especially when they're not quite the same as what I'm doing.

  • bhasi 7 days ago
    This is excellent work! I recently started Nand2Tetris myself, and hope to finish part 1 (the HW part of the course) sometime in the next few months. I blogged about my progress here: https://gurudas.dev/blog/2024/04/13/nand-to-tetris-2024-proj...
  • IncreasePosts 9 days ago
    Liar. You used NAND gates and a clock.
    • userbinator 9 days ago
      ...a clock which can be made from a ring oscillator, consisting of an odd number of NAND gates wired as NOT gates.
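      A toy discrete-time sketch of that ring, assuming one identical unit of delay per gate (real ring oscillators are analog devices, so this only shows the idealized travelling-edge behavior, not the physics):

```python
# Toy ring oscillator: an odd number of inverters (NAND gates with both
# inputs tied together) in a loop, stepped with one unit delay per gate.
def simulate_ring(n_inverters=3, steps=12):
    outs = [0] * n_inverters
    outs[0] = 1  # seed one edge so the model settles into the travelling-edge mode
    history = []
    for _ in range(steps):
        # synchronous update: every gate sees its predecessor's OLD output,
        # modeling one identical propagation delay per gate
        outs = [1 - outs[i - 1] for i in range(n_inverters)]
        history.append(outs[-1])  # sample the last inverter's output
    return history

print(simulate_ring())  # [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0]: period = 2 * 3 gate delays
```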
      • ArchAndStarch 9 days ago
        Oh wow, I didn't actually know that. Thanks for the interesting trivia
      • schoen 9 days ago
        How do we know that that will converge to a single constant period of oscillation? Could you have a few different-sized square waves continue to cycle through the circuit?

        (I've never built or simulated that, I'm just trying to imagine what could happen!)

        • rational_indian 9 days ago
          >Could you have a few different-sized square waves continue to cycle through the circuit?

          No.

          • schoen 8 days ago
            Could you help improve my intuition about that, or give me a reference where I could learn more?
            • rational_indian 7 days ago
              Okay, so I was wrong. You can get multiple harmonics if you use a long enough chain of inverters. I will simplify the paper mentioned in a sibling comment. In a long chain at any given point of time certain (variable number of) pairs of inverters drop in and out of the circuit changing the total propagation delay, giving rise to multiple harmonics. You'll have to read the paper for details.
            • rational_indian 8 days ago
              First look up the Barkhausen criterion, then read the following. For ring oscillators, the gain is greater than unity only for those waves whose period matches the gate delay. Only one such wave exists, since the gate delay is a fixed number.
  • mikestew 9 days ago
    Awesome work! Bookmarked for in-depth perusal later. As a fan of NAND-to-Tetris, but never made it all the way through, I look forward to poking around in your project.
  • gmiller123456 9 days ago
    Curious, how many NAND gates are there in total?
    • ArchAndStarch 9 days ago
      I've inspected my code closely. Every clock cycle, the NAND gate is used 3,234 times :)
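      (One simple way to get a number like that is to funnel every derived gate through a single instrumented nand() and tally the calls per cycle. A hypothetical sketch, not the project's actual code:)

```python
# Count NAND evaluations by routing every derived gate through one
# instrumented nand() function and tallying its calls.
calls = 0

def nand(a, b):
    global calls
    calls += 1
    return 0 if (a and b) else 1

# derived gates, all bottoming out in nand()
def inv(a):      return nand(a, a)
def and_(a, b):  return inv(nand(a, b))
def or_(a, b):   return nand(inv(a), inv(b))

or_(1, 0)
print(calls)  # 3 nand() evaluations for a single OR gate
```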
  • apienx 9 days ago
    Thank you. First principles FTW!
  • Animats 9 days ago
    Seymour Cray would have loved this. Some of his computers were all NAND gates.
    • dhosek 9 days ago
      The supercomputers (all?) used wirewrap rather than PCBs. I heard a story once about someone coming in for a demo of a supercomputer and Cray realized there was a bug in the hardware during the demo and while the potential customers were at lunch, he rewired the machine to fix the bug.
      • Animats 9 days ago
        Right. Seymour Cray said that the two big problems in supercomputing were "the thickness of the mat" (of wires on the backplane) and getting rid of the heat.

        This is a Cray-I backplane.[1]

        [1] https://www.flickr.com/photos/geekmuseum/2926520634/

        • syockit 8 days ago
          I wonder if any of his machines ever caught fire, causing a localized out-of-season aurora borealis. On the other hand, Cray machines are never mentioned in the "Halt and Catch Fire" machine code instruction humor.
  • BooleanMaestro 6 days ago
    The readme is so detailed. Good job.
  • greenavocado 9 days ago
    Can anybody recommend challenges similar to this one?
    • wayoverthecloud 9 days ago
      Try emulating the Message Passing Interface. It could be a lot more challenging, though.
  • greenavocado 9 days ago
    Incredible achievement! Good job.
  • SomeoneFromCA 7 days ago
    On one hand, a cool project; on the other, e-waste, unless it's made of recycled ICs.
  • theanonymousone 8 days ago
    Nice job. Now we should program it in subleq2[0] :D

    [0] https://en.wikipedia.org/wiki/One-instruction_set_computer

  • fire_ball 9 days ago
    this is fantastic! great work...
  • userbinator 9 days ago
    NAND is popular probably because of nand2tetris, but it's worth mentioning that NOR is also a universal gate; and many early computers like the https://en.wikipedia.org/wiki/Apollo_Guidance_Computer#Logic... were entirely made of NOR gates.
    • ajross 9 days ago
      That's because in NMOS logic (maybe there's a symmetric reason in TTL, but I don't know for sure) you can implement a NOR with two parallel transistors between a pullup and ground, producing a zero output if either input is high. The symmetric NAND circuit requires two transistors in series, and therefore switches more slowly.
      • a1369209993 8 days ago
        Conversely, PMOS logic has the exact opposite problem: NAND gates are parallel, while NOR gates are series.

        Meanwhile, CMOS logic inherits both of these problems (in return for better performance in general), but due to semiconductor chemistry, PMOS transistors are weaker than NMOS ones (which was why NMOS was more widely used than PMOS in the first place), so NAND gates have two strong transistors in series and two weak transistors in parallel, which gives a better worst-case switching speed than a NOR gate (where the weak transistors are in series, and so switch noticeably slower). This is where "NANDs are used for everything" comes from.
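        The series/parallel point can be sketched as switch networks: a CMOS gate's output is high when its pull-up network conducts and low when its pull-down network conducts. A toy model that ignores transistor strength and timing, just to show the topology:

```python
# CMOS gates as pull-up / pull-down switch networks. A series network
# conducts only when ALL its transistors are on; a parallel network
# conducts when ANY of them is on.
def series(*gates):   return all(gates)
def parallel(*gates): return any(gates)

def cmos_nand(a, b):
    pull_up = parallel(not a, not b)  # two PMOS in parallel (PMOS conducts on low gate)
    pull_down = series(a, b)          # two NMOS in series (NMOS conducts on high gate)
    assert pull_up != pull_down       # exactly one network drives the output
    return 1 if pull_up else 0

def cmos_nor(a, b):
    pull_up = series(not a, not b)    # two (weak) PMOS in series -- the slow path
    pull_down = parallel(a, b)        # two NMOS in parallel
    assert pull_up != pull_down
    return 1 if pull_up else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, cmos_nand(a, b), cmos_nor(a, b))
```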

    • andai 9 days ago
      I thought it's the other way around, nand2tetris used NAND because it was already popular? At least I remember hearing in university that NANDs are used for everything? Can't remember why they're used for everything though (and why not NOR, for example).
  • bramhaag 9 days ago
    Turing Complete[0] is a fun game similar to this where you create your own computer from NAND gates, including your own assembly language.

    [0] https://store.steampowered.com/app/1444480/Turing_Complete/

  • elevatedastalt 9 days ago
    Wow, seriously impressive. And the fact that this is the work of basically a high-schooler.

    I fear for the kind of competition my kids will have just to make it to college.

    • naikrovek 9 days ago
      This is a natural extension/expansion of the “NAND to Tetris” course on coursera, and is free if you don’t want to be graded.

      The course walks you through it all, and there is an accompanying book that you do not need to buy to finish the course.

      Anyone who wants to do this and can focus on it for enough time can complete it and extend it into whatever shape they like, like this person.

      It really is a good course.

      • ArchAndStarch 9 days ago
        I primarily used the physical book to learn about the nand2tetris platform. I highly recommend it, it's an enthralling read
      • brailsafe 9 days ago
        Absolutely true, I'm working my way through it now; it's challenging and time consuming, totally worthwhile imo.
  • pyuser583 9 days ago
    Ground up projects like this are fascinating!

    It’s also neat how “ground” has been deepening. It used to mean building mainframe from source. Then building a compiler. Now building from logic gates.

    How much deeper can you get? Building a mainframe out of Gödel numbers?

    • ArchAndStarch 9 days ago
      One curious idea my friends have entertained is to go one level even deeper and emulate the very transistors that make up the NAND gates on the web, too. It would certainly spell disaster for performance, but it's without a doubt interesting.
      • vitiral 9 days ago
        Like... the physics?

        If not, I think a NAND gate is made of just two transistors, so if you mean emulating how transistors should behave then I don't think it will affect performance more than ~50%

      • pyuser583 9 days ago
        That would be fascinating!

        Do you know any resources that document the transistor to logic gate translation?

        • schoen 9 days ago
          In https://nandgame.com/ (mentioned elsewhere, a game version of NAND to Tetris) you start by making a NAND gate out of relays. The relays are electromechanical components, but you can choose to think of a transistor (within certain "regimes") as being directly electrically equivalent to one. (This simplification isn't appropriate for all engineering tradeoff purposes, although I don't know the details of how it fails or how we can benefit from knowing more about transistors' actual behavior.)

          The electromechanical relay is a very simple device to understand, if you're willing to just believe that electromagnets produce magnetism (without understanding why the universe works according to Gauss's laws on the relationship between electric current and magnetism). It's a coil of wire where an electric current produces magnetism that physically pulls a switch open or closed.

  • m3kw9 9 days ago
    [flagged]
    • emoII 9 days ago
      I think that’s a massive achievement
      • m3kw9 9 days ago
        I think so too
  • cubefox 9 days ago
    Cool project. It reminds me of a theoretical issue. As the project page says, this system is clearly Turing equivalent. Since it runs software, it even implements a _universal_ Turing machine. But the design uses only (synchronous) sequential logic [1], and Wikipedia seems to suggest that automata theory considers sequential logic only equivalent to finite state machines. Not Turing machines. Isn't that clearly a major bug in automata theory?

    My guess is that automata theory considers it critically important that a "Turing machine" has an infinite tape, while intuitively it instead seems relevant that it has something like a tape at all, some sort of random access memory, even if it is finite. I think such a memory system can't be implemented with classical finite state machines, at least not with comparable time complexity for reads and writes, but it can be realized with sequential logic.

    [1] https://en.wikipedia.org/wiki/Sequential_logic

    • tomstuart 9 days ago
      Real-world computers are equivalent to linear bounded automata, not true Turing machines, because they have finite memory. This technicality is mostly ignored because a computer with a large finite memory is a decent enough approximation to a Turing machine for practical purposes. But, for example, the halting problem is decidable for linear bounded automata — because there are only finitely many states, every computation must either halt or eventually revisit an earlier state and get stuck in a loop — so in theory it’s an important distinction.
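      The finite-state halting argument can be run directly: execute the machine, remember every configuration you've seen, and declare a loop on the first repeat. The 8-bit "machines" below are made up purely for illustration:

```python
# Halting is decidable for a machine with finitely many configurations:
# it must either halt or revisit a configuration and loop forever.
def analyze(step, start):
    """step(state) -> next state, or None when the machine halts."""
    seen = set()
    state = start
    while state is not None:
        if state in seen:
            return "loops"  # revisited a configuration: stuck forever
        seen.add(state)
        state = step(state)
    return "halts"

# Two toy machines over 8-bit states:
def add3(s):    # halt on 0, else add 3 mod 256 -- eventually reaches 0
    return None if s == 0 else (s + 3) % 256

def double(s):  # halt on 255, else double mod 256 -- collapses to 0 and cycles
    return None if s == 255 else (s * 2) % 256

print(analyze(add3, 5), analyze(double, 3))  # halts loops
```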
      • DEADMINCE 9 days ago
        > because there are only finitely many states, every computation must either halt or eventually revisit an earlier state and get stuck in a loop

        Yet we know this doesn't happen in practice.

        • krallja 8 days ago
          You just didn’t wait long enough.
      • cubefox 8 days ago
        It seems you didn't really read my comment, though? I was arguing that the relevant difference between Turing machines and FSMs is the memory system, not the infinite tape. It's interesting that the Wikipedia article on LBAs doesn't say whether they are considered equivalent to FSMs. It seems that by standard automata theory, they must be. Which is intuitively not correct, since they are much more similar to Turing machines.
        • tomstuart 8 days ago
          I did read your comment but I don’t really understand what you mean by “the memory system”. A linear bounded automaton is by definition a finite state machine for a given input (i.e. fixed tape size) because the number of possible configurations is finite. A Turing machine’s infinite tape is what stops it being a finite state machine.
          • cubefox 7 days ago
            Well, I said I meant the tape of a Turing machine (irrespective of its size), or the RAM of a physical computer, and that I suspected that such memory is different from other states in that read/write operations have specific lower time complexity.

            > A Turing machine’s infinite tape is what stops it being a finite state machine.

            Well, that's if you accept the usual automata theory definition, which was what I was arguing against. Part of having an "infinite tape" memory is not just that it is infinite, but also that it is a tape. A pushdown automaton also has infinite "memory" (though a stack, not a tape memory with random access), but it is still not equivalent to a Turing machine. Nor is it equivalent to a finite state machine.

            Basically, what I suspect is that the type of "memory" system that some automaton allows determines its computational power, not whether the memory or the maximum number of its states is infinite or not.
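            One concrete illustration of that: a pushdown automaton's unbounded stack recognizes balanced parentheses, which no finite state machine can, since any fixed number of states is exceeded by deep enough nesting. A minimal sketch (with a single bracket type, the stack degenerates to a counter):

```python
# Balanced-parentheses recognizer: the kind of language a pushdown
# automaton accepts but a finite state machine cannot, because the
# nesting depth it must track is unbounded.
def balanced(s):
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1      # push
        elif ch == ")":
            depth -= 1      # pop
            if depth < 0:   # closing with nothing open
                return False
    return depth == 0       # everything opened was closed

print(balanced("(()())"), balanced("(()"))  # True False
```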