Yep, I've exchanged a few emails with Andy before, the oilshell author. I started the parser in early 2016, so we were both getting started around the same time.
If you look at my README's caveats section, you'll see a very similar story - only accepting programs that can be parsed statically in a simple way.
I'm not sure I understand what you're saying. If there was a Go->Bash transpiler (or a JS->Bash one), yes, we would theoretically end up with a Bash parser and interpreter in Bash.
I'm going to go on assuming that you're not being sarcastic :)
As I mentioned in another comment, once Go's wasm support is shipped and stable, I hope to make the JS package smaller and better. Until then, this is the best I can do without manually rewriting the Go code in JS.
Is there a more appropriate word for source-to-source transformation? I understand that compilers aren't that different, but generating machine code is quite different.
It really isn't though. PCC just walked the AST, writing out text to a file with hardly any optimizations, just like most of what people call 'transpilers'. But you wouldn't call a C compiler that targets asm a transpiler, right? Pretty much everything that gets attributed to some intrinsic difference between source-to-source and source-to-machine compilers is just a function of the immature tooling on the web.
I mean, it should exist according to Wikipedia's rules, since so many people use the term. Wikipedia intentionally doesn't try to independently verify any research; it just serves as a collation of ideas.
What I would do is ask: 'what part of it is any different from any other compiler?'
Except that there's a true formal (in fact legal) distinction between a car and a truck (they have different emissions standards). The whole distinction between 'transpilers' and other compilers, on the other hand, is 'what is the user going to do with the output?'
It shouldn't. There are several issues with the article, and it should be deleted. None of the cited references actually establish 'transpiler' or 'source-to-source compiler' as credible or noteworthy terms.
There's the (albeit not 100% correct) meme that C is portable PDP-11 asm. What is correct in my mind is that PCC needs far fewer, far less complicated transformations to get to PDP-11 (or M68k) asm than Babel does to get from ES-next to ES5.
But for some reason Babel is a transpiler because it's all high level and that's magically different. And no one in their right mind would attempt to call the C compiler of the 1980s a transpiler.
The only difference between the two in my mind is that the output from a transpiler is likely going to have a ton of bloat, require additional transforming, and be a much larger amount of code than the sum of the inputs, whereas something like the Closure Compiler actually optimizes and eliminates dead code. They are the same thing from an ideological standpoint, though.
I mean, early PCC didn't have data flow analysis, didn't eliminate dead code, and was known for head-scratching behavior like spilling registers to the stack that didn't need to be spilled. Was the C compiler of the 1980s a transpiler?
Or those experts who insist on correcting you when you say you're programming something in Perl/PHP/Python. "Um, no, you mean scripting, because the code is run by an interpreter." One of the fastest ways to lose all my respect.
The assembly that comes out of GCC is human-readable too (the actual assembling is done by a separate program, from a different project than GCC). Is GCC a transpiler?
Not sure if trolling. I think most programmers have a sense of what constitutes "roughly the same level of abstraction". Certainly there are gray areas, but such is life.
I've mainly been asking: how does the compiler intrinsically change depending on the abstraction level of its source and target?
In my mind your argument is equivalent to separating novels based on whether they were written with a pen or a pencil. Yes, there's a difference between a pen and a pencil, but that has no effective bearing on the novel as written, and it doesn't make sense to categorize novels based on it.
Back around 1985, I got involved in a discussion on BIX with Bjarne Stroustrup about whether Cfront was a compiler or not.
Cfront was Bjarne's original C++ compiler. It translated C++ code into C code which would then be passed through a C compiler.
I didn't think Cfront should be called a compiler, because it didn't compile down to machine code, nor even to assembly, only to another relatively high-level language.
Bjarne was quite insistent that Cfront really was a compiler, and that the fact that it compiled to another source language was immaterial. It did essentially the same things as any compiler; it just had a different back-end code generator. And that code generator could later be swapped out for one that generates machine code.
Of course you can call it whatever you want, but Bjarne called it a compiler.
Ah, it sounds like we are on the same page after all. I mistook what you said for something I've occasionally seen other people say, namely that a transpiler is not a compiler. My apologies for the misunderstanding.
Many Pascal compilers didn't have ASTs - the language was designed so they weren't really needed. An AST is not a prerequisite for a compiler in my mind.
AFAIK Wirth's own compilers never used an AST. Also, early (and perhaps late) versions of Turbo Pascal emit code as fast as they can parse it. Here is a document describing how Turbo Pascal 3 worked (by someone who reverse engineered it):
Isn't the AST there, but just hidden? It looks like the AST is the compiler's stack, it's just building the tree while doing a DFS and then cleaning up after itself as it goes.
I think calling the call stack an AST is stretching things a bit :-P. Turbo Pascal (like most of Wirth's own compilers) is a simple recursive-descent compiler that emits code almost as soon as it scans a token, and it keeps very little information around.
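A concrete sketch of what that looks like (a hypothetical toy, not Turbo Pascal's actual scheme): a recursive-descent compiler for simple arithmetic that emits stack-machine instructions the moment each construct is recognized. No tree is ever materialized; the only "structure" is the call stack mirroring the grammar.

```javascript
// Minimal recursive-descent "compiler" for expressions like "1+2*3".
// It emits stack-machine instructions while parsing; no AST is ever built.
function compile(src) {
  let pos = 0;
  const out = [];
  const peek = () => src[pos];

  function factor() {            // factor := NUMBER | '(' expr ')'
    if (peek() === '(') {
      pos++;                     // consume '('
      expr();
      pos++;                     // consume ')'
    } else {
      let num = '';
      while (/[0-9]/.test(peek() || '')) num += src[pos++];
      out.push(`PUSH ${num}`);   // emit as soon as the number is recognized
    }
  }
  function term() {              // term := factor (('*'|'/') factor)*
    factor();
    while (peek() === '*' || peek() === '/') {
      const op = src[pos++];
      factor();
      out.push(op === '*' ? 'MUL' : 'DIV');
    }
  }
  function expr() {              // expr := term (('+'|'-') term)*
    term();
    while (peek() === '+' || peek() === '-') {
      const op = src[pos++];
      term();
      out.push(op === '+' ? 'ADD' : 'SUB');
    }
  }
  expr();
  return out;
}
```

Looked at "across time", the sequence of nested calls does trace out the parse tree, which is the point being debated above; but at no moment does a tree data structure exist.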
The other comment has the examples. I don’t think an assembler is a compiler, and I think transpiler is a fine term - we’ve been using it since the late 60s so it’s nothing new.
So I asked the other guy, but I'm really curious to hear your response too.
Isn't the AST there, but just hidden? It looks like the AST is the compiler's stack, it's just building the tree while doing a DFS and then cleaning up after itself as it goes. Like the tree would be extremely visible if you looked at it across time.
And I'm curious what makes you say that an assembler doesn't count but source to source compilers do.
And finally, yeah, I've seen references going back to the sixties, but they're almost always marketing literature. I strongly hope that in half a century, CS theory isn't based on any of the marketing literature from the companies I've worked at. : )
I can see what you're arguing, but I think this is so abstract as to not really be meaningful. At no point is there a tree data structure. I suppose there is one if you extend it into the fourth dimension :)
> And I'm curious what makes you say that an assembler doesn't count but source to source compilers do.
I think for me it's because an assembler doesn't have any choice about how to translate the program. A compiler has more freedom for how to translate the program.
I'd then say a transpiler is a form of compiler, but an assembler is not.
> And finally, yeah I've seen references going back to the sixties, but they're almost always marketing literature.
See for example The Communication of Algorithms by A F Parker-Rhodes, 1964. This is a peer-reviewed academic paper, not marketing material.
Anyone who tells you that "transpiler" is some kind of neologism invented by JavaScript developers who didn't know any better is the ignorant one themselves.
Most definitions of "compiler" I've seen are broader than that.
Anyway, a transpiler is a type of compiler that compiles to approximately the same level of abstraction. Why is it a problem to have a word to distinguish them from other types of compilers?
Some general thoughts, which may or may not be practical depending on how the transpiling works (I have no idea about Go):
- Instead of calling .Print(), have the class extend EventEmitter (node builtin, https://github.com/primus/eventemitter3 is a good browser shim for the same API), with an event that fires once for each line that terminates with a newline.
- Hard mode of the above: Add streams functionality (node builtin, https://github.com/nodejs/readable-stream is an official browser shim for the same API), with output streams that operate on bytes instead of lines.
- Combine parser and printer into a single ES6 class (e.g. using `new Parser()` syntax, rather than the indirect object generation).
- Optionally, in addition to the above, have .parse() return a Promise for the output (not the parsed program), with parsed programs stored on the Parser itself (and examinable via an additional class method).
- Change all method names to be in camelCase instead of PascalCase. Only classes (as instantiated using `new`, not just methods that generate class objects) get PascalCase.
Thanks for the suggestions! I'll need to read up on these js/node features.
One reason that the API is a bit clunky is that it tries to mimic the Go API closely, so that the documentation and examples are reusable.
However, your points on the string returns are very valid. I replaced all of Go's readers and writers (byte streams) with strings, simply because I didn't know of a better way.
It definitely sounds like the features you suggested would be better. If the transpiler (gopherjs) supports them, I'll definitely give them a try.
To help, the short version on "why" for EventEmitter, streams, and Promises is that they're three different ways of doing things asynchronously, with different use cases and details.
EventEmitter - You expect to intermittently get individually completed values that you do whatever with and then discard. These values may be grouped into useful sets by the EventEmitter (each separate event type).
Streams - You expect to get raw partial data that you want to do your own buffering and handling with. You want to be able to feed this raw data to a different target (e.g. file writing) without handling the fine details yourself.
Promise - You expect to get a single completed value when an action is completely finished, or (for a Promise without a return value) you just want to have something contingent on an action finishing.
An example of each:
Streams: A TelnetConnection class that handles connecting to a server, then supplies a stream with the bytes from the connection (with each chunk being whatever is convenient for buffering purposes), and ends the stream when the connection drops.
EventEmitter: A TelnetHandler class that uses TelnetConnection and fires an event for each complete line (ending with \n).
Promise: A TelnetHttp class that has a method get(hostname) that uses TelnetHandler and returns a Promise for the full text return value of doing a simple HTTP GET call.
Thanks for the package! When it comes to JavaScript, the code is the documentation: you look up methods etc. in the source code. But as this is 30k lines of generated code, it's not much fun to read in order to figure out how something works. So in this case documentation, tutorials, and examples become very important! For example: how do you find all functions, or all variables, or all variables available (in scope) at a given row/col position?
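The "find all functions" case is the kind of copy-and-paste-ready sample being asked for. A sketch of the usual recipe (the node shape below, with `type` and `name` fields and a `FuncDecl` kind, is invented for illustration; the package's real AST types will differ):

```javascript
// Recursively walk a parsed tree and collect function names.
// The node shape is hypothetical: objects with a `type` tag, where
// function declarations look like { type: 'FuncDecl', name: '...' }.
function collectFunctions(node, found = []) {
  if (node == null || typeof node !== 'object') return found;
  if (node.type === 'FuncDecl') found.push(node.name);
  for (const key of Object.keys(node)) {
    const child = node[key];
    if (Array.isArray(child)) {
      child.forEach(c => collectFunctions(c, found));
    } else if (typeof child === 'object') {
      collectFunctions(child, found);
    }
  }
  return found;
}
```

The same single-pass walk, with a different predicate, answers the "all variables" and "in scope at row/col" questions too.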
I know there is documentation for Go, but that's not super helpful for a JavaScript dev. We need samples ready to copy & paste. Think Stack Overflow (https://stackoverflow.com/), which is often the first result that comes up when you Google for something JavaScript related.
That's a good point. I am adding more and more examples to the Go documentation these days.
Ideally I'd just point the JS people at the Go docs and examples, but the translation is not exactly one-to-one. This is why the README file published with the JS package has a complete example. I'll try to add a few more.
Idea: maybe the examples and documentation can be transpiled too!? Then you could have a tab for each language per code snippet: Go, JavaScript, other. Some transpilations might end up weird, but you could fix those manually.
This is hacky, but it beats having to write a shell parser from scratch again :)
Once wasm support is shipped with Go, I hope to be able to accomplish the same without a transpiler. I'd also hope that the final package would be smaller and more performant.
Love this. Ironic that it's BSD-licensed, though, considering it parses and runs what is probably the world's most popular piece of GPL'ed software (second perhaps only to Linux).
Bonus points if the code is written by a Markov chain.
I imagine it would be quite the monster, though.
I still hate this term.
Why do you think it is helpful to distinguish between the atoms that make up a human and the atoms of whatever goods are in the back of a truck? /s
> translates between programming languages that operate at approximately the same level of abstraction
That's fairly straightforward, though a bit subjective (which doesn't preclude a word from having meaning).
Babel is a JavaScript compiler.
But as you all say, this is just a spectrum. Perhaps future generations will call JavaScript a low-level language :)
Anyways, how is a transpiler different from a bytecode compiler, except that the bytecode is human readable?
The bytecode is human readable.
I'm not going to waste any more time defining things for you. Suffice to say, Go and JavaScript are roughly the same level of abstraction.
* Which I admittedly didn't make clear in the parent comments, but did mention elsewhere in this thread.
They're both just subsets of the broader category of compilers.
https://www.pcengines.ch/tp3.htm
Super cool though. Thanks!
Thanks again for the help!
https://github.com/golang/go/issues/18892
I hope I don't get an angry email from RMS at some point :)