This was an incredibly enjoyable read. A lesson to take away is that many of the ideas of Lisp can be taken advantage of without pulling in the entirety of an existing stack.
Writing a Lisp parser is easy. Walking Lisp code is easy. Serializing Lisp code is easy. Adding a new primitive is easy. Adding very basic syntax transforming macros is easy. All of these are virtually trivial if your host language is a Lisp, as was the case with Co2.
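To give a sense of the scale of the task, a minimal s-expression reader fits in a dozen lines. This is an illustrative Python sketch, not Co2's actual reader (Co2 simply reuses Racket's):

```python
def tokenize(src):
    # Pad parentheses with spaces, then split on whitespace.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Read one expression from the front of the token list.
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    try:
        return int(token)
    except ValueError:
        return token  # anything non-numeric is a symbol

print(parse(tokenize("(set! x (+ 1 2))")))
# ['set!', 'x', ['+', 1, 2]]
```

The output is plain nested lists, which is exactly why walking and serializing the code afterwards is just as easy.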
What they didn’t do is what many people might think are table stakes with Lisp: writing a garbage collector, writing a runtime, supporting lambdas, and so on. Those are unreasonable asks for 2K RAM on a 6502. I wouldn’t say they wrote a bona fide Lisp, but they made use of many of Lisp’s ideas successfully to write a game that is surprisingly readable while not being too abstract over the assembly.
Since Lisp began in the 1950s, it has always needed to stay tied to the low level. Even today, with SBCL or CCL, you can write your own assembly code. One thing relevant to the article is Baker’s COMFY 6502 language for writing assembly code. A few implementations can be found on GitHub.
I understand that Forth is powerful and elegant, and one of the last languages I'd want to take on in a fight when wielded by a master, so let me pretend you were asking about a stripped-down Lisp compared to, say, Pascal, instead:
* The simple syntax of stripped-down Lisp is very amenable to application-specific or domain-specific macros. This turns out to be a convenient way to do things that often the language or compiler alone can't do as well, if you used only functions, data, and conventions.
* That the syntax is so simple, and that code can be first-class data displayed by the programming environment the same way it reads in source, makes it especially nice for things like intermediate representations that are refined incrementally. For example, you can show a series of translation steps that go from the syntax parse, to resolutions, to phases of optimization, to a high-level assembler, to very low-level target code (still represented with parentheses and "opcodes"), from which you emit bytes. This, too, can be convenient.
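As a toy illustration of that last point, here is a hypothetical lowering pass in Python that walks an arithmetic s-expression (represented as nested lists) and emits a lower-level IR that is still "parentheses and opcodes". None of these opcode names come from the article; this is just the flavor of the idea:

```python
def lower(expr):
    """Lower a tiny arithmetic s-expression into a list of
    pseudo-opcodes for a stack machine. The opcodes are themselves
    plain lists, so every stage stays printable and inspectable."""
    if isinstance(expr, int):
        return [["push", expr]]
    op, left, right = expr
    code = lower(left) + lower(right)
    code.append(["add"] if op == "+" else ["mul"])
    return code

# (* (+ 1 2) 3) lowered, still readable as s-expressions:
print(lower(["*", ["+", 1, 2], 3]))
# [['push', 1], ['push', 2], ['add'], ['push', 3], ['mul']]
```

Each stage of a pipeline like this can be dumped and read directly, which is the convenience being described.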
In this particular case, they're using Racket (an implementation of a dialect of Scheme) to implement a compiler for a Lisp dialect they invented. Using Racket gives them a nice general-purpose language for implementing their compiler, and it happens to already have a lot of tools for parsing and manipulating their own stripped-down Lisp.
IIUC, Naughty Dog used Racket for a similar purpose: to implement their own Lisp as a narrative DSL for some AAA titles.
Forth implementations on the 6502 use a stack and require more RAM than Co2, which uses a compiled stack.
I feel like Co2 would compile to faster code, but of course I haven't benchmarked this. The reason I feel this is so is that, the way I understand compiled Forth, the 'words' are still threaded, so you don't escape the interpreter overhead.
Co2 takes away the chore of parsing; in Forth, you are the parser. That's simpler, but more error-prone and (subjectively) harder to read.
Forth uses postfix notation, which a lot of people find challenging.
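For readers unfamiliar with postfix: Forth's `2 3 + 4 *` is Lisp's `(* (+ 2 3) 4)`. A minimal postfix evaluator (an illustrative Python sketch, not real Forth) shows how the data stack drives evaluation:

```python
def eval_rpn(tokens):
    # Evaluate a postfix (Forth-style) expression with a data stack.
    stack = []
    for tok in tokens:
        if tok == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif tok == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(int(tok))
    return stack.pop()

print(eval_rpn("2 3 + 4 *".split()))  # 20
```

The evaluator needs no parser at all, which is exactly the trade-off being described: the programmer does the parsing in their head.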
> Forth implementations on the 6502 use a stack and require more RAM than Co2, which uses a compiled stack.
I don't know what you mean when you say this. Could you elaborate? Any function call is going to require putting the arguments somewhere.
> The reason I feel this is so is that, the way I understand compiled Forth, the 'words' are still threaded, so you don't escape the interpreter overhead.
Indirect/direct threading at runtime isn't required; it's up to the compiler. There are plenty of Forth cross-compilers (e.g. MPE's) that compile native code (and call it "subroutine threaded"). They don't have an explicit (inner) interpreter... they just use standard CPU opcodes to call/return.
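To make the distinction concrete, here is a toy sketch (purely illustrative Python, not how any real Forth is built) of an indirect-threaded inner interpreter next to subroutine threading, where the compiler just emits direct calls:

```python
# Indirect threading: a compiled word is a list of references to
# other words, and an inner-interpreter loop dispatches through it.
def interp(word, stack):
    for w in word:
        if callable(w):
            w(stack)           # primitive: runs directly
        else:
            interp(w, stack)   # nested word: dispatch again

DUP = lambda s: s.append(s[-1])
ADD = lambda s: s.append(s.pop() + s.pop())
DOUBLE = [DUP, ADD]            # : DOUBLE DUP + ;

s = [21]
interp(DOUBLE, s)
print(s)  # [42]

# Subroutine threading: the compiler instead emits a direct call
# per word, so no dispatch loop survives to runtime; on a 6502
# these would just be JSR/RTS instructions.
def double(stack):
    DUP(stack)
    ADD(stack)

s = [21]
double(s)
print(s)  # [42]
```

Both compute the same thing; the difference is whether a dispatch loop sits between the words at runtime.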
> I don't know what you mean when you say this. Could you elaborate? Any function call is going to require putting the arguments somewhere.
There's a footnote in the article that might be helpful:
> this is thanks to a "compiled stack", a concept that's used in embedded programming, though I had a hard time finding much literature about it. In short, build the entire call graph of your project, sort from leaf nodes to roots, assign to each node memory equal to its needs + the max(children)
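A rough sketch of that allocation scheme (illustrative Python; it assumes an acyclic call graph, i.e. no recursion, and computes offsets top-down rather than the footnote's bottom-up totals, which yields the same layout):

```python
def assign_offsets(calls, frame_sizes, root):
    """Statically place each function's locals in RAM.
    `calls` maps a function to the functions it calls (must be
    acyclic: no recursion). Sibling callees get the SAME offset,
    because they are never live at the same time."""
    offsets = {}

    def place(fn, offset):
        # A function may be reached through several call chains;
        # keep the deepest (largest) offset so that the frames of
        # all simultaneously live callers never overlap it.
        offsets[fn] = max(offset, offsets.get(fn, 0))
        for callee in calls.get(fn, []):
            place(callee, offsets[fn] + frame_sizes[fn])

    place(root, 0)
    return offsets

# Hypothetical call graph: main calls update and draw; both call
# util. update and draw end up sharing the same RAM.
calls = {"main": ["update", "draw"], "update": ["util"], "draw": ["util"]}
sizes = {"main": 4, "update": 2, "draw": 3, "util": 1}
offs = assign_offsets(calls, sizes, "main")
print(offs["update"] == offs["draw"])  # True: siblings share storage
```

No pushes or pops of locals remain at runtime; every "frame" is just a fixed address range.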
They did parse it, albeit indirectly, by Racket’s reader. Co2 is a language, not a bunch of function calls, so it’s not quite the same as building a library in your favorite language. The article even gives examples of new syntax they produced.
Parsing Lisp in Lisp is so easy because it’s free.
I'd say this depends on the complexity of the I/O system.
This was the original claim, which you supported:
'Parsing Lisp in Lisp is so easy because it’s free.'
The example you were pointing to explicitly calls a parsing engine of Racket via 'read-syntax'. That's actually more complicated than the usual s-expression reader, which only reads s-expressions and has no further idea about Scheme syntax.
Check the usual Scheme report / Racket documentation for the definition of Scheme syntax, syntax objects, and their extension mechanisms (macros, ...). I'd say the whole thing is non-trivial. There is a grammar of Scheme, but it is not fixed, because there are extension mechanisms, which make parsing challenging.
It's 'free' because it's a provided language facility - but not free in terms of complexity of the concepts to understand.
And no, the syntax of s-expressions (-> data) is not the syntax of Lisp. It's just the syntax of s-expressions. Search the Scheme report for 'syntax'...
I loved Lisp upon my first exposure in the late 80s in university. Then I "had" to professionally abandon Lisp leanings because I entered the game industry which required, at the time, a commitment to 8-bit assembly code. No problem. Lisp remained a hobby. Fast forward! Unexpected intersect! I love this so much and thank you for the great writeup!
There's a podcast for present day NES developers called The Assembly Line. I'm sure they'd enjoy this story and talking to you.
This is a complete and absolute hack and I love it. Reading the README in their GitHub repo, you can see the "impure" pragmatic decisions that were made, like for loops and not supporting proper recursion. I wish there were more projects like this one, porting Lisp runtimes to more and more hardware.
On the other hand, the Racket Lisp behind the scenes _generates_ the assembly code based on some of the Scheme primitives rather than porting the whole Racket runtime there, which, in spite of not being the same as running a Lisp on the NES hardware, is still impressive.
Purity isn't in contrast with pragmatism; what Haskellers refer to as purity is referential transparency.
You can still do this kind of work in Haskell as well. It is a great imperative language.
Though as someone who has written assembler compilers in Common Lisp... it's felt relatively easy to do in CL. I've only heard people talk about writing control software for drones from Haskell. I have no idea how one would do that in practice though.
While I will not try to debate your experience, I will provide some extra context for my previous assertion:
You cannot program in CL without using the CLOS. Most CL books go into great detail explaining it and the MOP, they are also full of looping constructs, even using GOTO. The only exceptions are Graham's books.
In open source, it is frowned upon to use recursion as TCO is not part of the standard. Some functions are even inlined for better performance.
I'm struggling with the same problem: I'm a Lisp programmer who's writing a commercial game, in my case a Unity engine game, which requires C#.
I've opted for a less elegant, but technically simple strategy: I'm writing all the build tools/content tools in Clojurescript, and then writing only the core game engine in raw C#.
This allows me to deploy a commercial game in "native" C# without any performance penalty, but with as few lines of C# as humanly possible.
That sounds like a great approach. I often wondered how to get Scheme code running in Unity.
Can you fire up a REPL while the game is running? That would be huge.
Many years ago I experimented with the idea of creating games for iOS in Chicken Scheme. Because I'm exceedingly lazy and did not want to bother with cross-compilation – unless it became a serious project – I just told Chicken's compiler to stop at the C code generation step. The Makefile would then copy the generated (and quite unreadable) C code into the Xcode project, and then compile the whole thing together with any Objective-C code I had.
With very little setup code, you could embed arbitrarily large Chicken programs. And given Chicken's excellent C interop, the Objective-C code could easily call Scheme functions (the reverse is not as trivial, so I just wrote wrappers for the handful of Objective-C functions I needed – which weren't many in a game).
The only piece missing at the time was that Chicken did not have OpenGL-ES bindings. I solved that by copying code from Gambit Scheme, and using a couple of very trivial macros to make it compatible.
That worked beautifully. I could even start a remote REPL and instantly change running code over the network, no matter if it was running in a real device or the simulator. And I mean instantly: the next rendered frame would already have the changes.
Then I hit my roadblock: I had successfully solved the technical problem, so I lost interest in pursuing the game, which was ostensibly the reason why I had embarked on this detour to begin with. Oh well.
Have you explored Arcadia at all? It uses the Clojure CLR port to hook into Unity with some sweet repl goodness.
There seems to be some active development and I hope it works out great.
One of the advantages of lisp is REPL-driven development. I imagine you can't just edit a fn, eval it and then see the changes immediately. What is the workflow like when creating a NES game using co2?
I dunno about Co2, but the last time I heard about a project like this -- Naughty Dog's GOAL -- they absolutely could compile GOAL on the fly on the development PC and send it immediately to run on the PS2 dev kit inside an already running game. REPL-driven development directly on a game console. It was awesome.
This is impressive. I appreciate you documenting the development of it and giving an overview of what's going on inside. I wish there were more blog posts like this. Have you reached out to the Racket community about the game and co2? Also, would you say this Lisp is geared more toward people who already know 6502 assembly, and not toward people who just want to write a NES game in a Lisp?
Having only looked through the article, and being a 6502/NES programmer myself: you can't escape having to know 6502 and the NES hardware to write a NES game. This Lisp is cool, and it will certainly smooth the process; it's a good fit for an adventure game and for the other parts of NES game development where you aren't counting cycles (like UI).
For me, the challenge with the NES itself was getting a good tutorial; then, when making a game, it was getting your art from your mind into the tiles and sprites in your project, i.e. your art 'pipeline'. Once that's established it becomes easier.
If you like this, you should also check out the work around Retro City Rampage. RCR is a GTA1 clone for multiple platforms from a few years back, but the developer also made a real NES ROM of it. Here's a great talk about that process: https://www.youtube.com/watch?v=Hvx4xXhZMrU
(I'm not related to the project, but it's one of the few games I've 100% completed because it was just so good)
I've only heard this term used with PIC development tools.
It gives every local variable a fixed place in RAM, which obviously removes support for recursion. A naive implementation would explode memory usage.
However, you can analyze your entire program, and any variables (including across function boundaries) that are never live at the same time can share storage. Now you never push or pop variables to your stack, and your dynamic stack usage is only the maximum call depth (no frame pointer is needed because there are no on-stack variables, so only the return pointer goes on the stack).
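The sharing step can be sketched as a greedy interval-partitioning pass (illustrative Python; the variable names and "positions" are hypothetical whole-program liveness intervals, not output from any real tool):

```python
def share_slots(live_ranges):
    """Assign each variable a fixed RAM slot, reusing a slot
    whenever the previous occupant's live range has already ended.
    `live_ranges` maps a variable name to its (start, end)
    liveness interval."""
    slots = []        # per slot: end of the last interval placed in it
    assignment = {}
    for var, (start, end) in sorted(live_ranges.items(), key=lambda kv: kv[1]):
        for i, last_end in enumerate(slots):
            if start >= last_end:   # previous occupant is dead
                slots[i] = end
                assignment[var] = i
                break
        else:
            slots.append(end)       # no free slot: allocate a new one
            assignment[var] = len(slots) - 1
    return assignment

# Three variables fit in two slots: a and c never overlap.
print(share_slots({"a": (0, 4), "b": (2, 6), "c": (5, 9)}))
# {'a': 0, 'b': 1, 'c': 0}
```

Every slot here is a fixed address, so the generated 6502 code can use absolute or zero-page addressing with no stack traffic for locals.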