I'm going to go on assuming that you're not being sarcastic :)
As I mentioned in another comment, once Go's wasm support is shipped and stable, I hope to make the JS package smaller and better. Until then, this is the best I can do without manually rewriting the Go code in JS.
It really isn't, though. PCC just walked the AST, writing out text to a file with hardly any optimizations, just like most of what people call 'transpilers'. But you wouldn't call a C-to-asm compiler a transpiler, right? Pretty much everything that gets attributed to some intrinsic difference between source-to-source and source-to-machine compilers is just a function of the immature tooling on the web.
I mean, it should exist according to wiki's rules since there are so many people using the term. Wiki intentionally doesn't try to independently verify any research, but just serves as a collation of ideas.
What I would do is ask: what part of it is any different from any other compiler?
Except that there's a true formal (in fact legal) distinction between a car and a truck (they have different emissions standards). The whole distinction between a 'transpiler' and other compilers, on the other hand, is 'what is the user going to do with the output'.
There's the (albeit not 100% correct) meme that C is portable PDP-11 asm. What is correct in my mind is that PCC needs far fewer, far less complicated transformations to go to PDP-11 (or M68k) asm than Babel does to go from ES-next to ES5.
But for some reason Babel is a transpiler because it's all high level and that's magically different. And no one in their right mind would attempt to call the C compiler of the 1980s a transpiler.
The only difference between the two in my mind is that the output from a transpiler is likely going to have a ton of bloat, require additional transforming, and be a much larger amount of code than the sum of the inputs. Whereas something like the Closure Compiler actually optimizes and eliminates dead code. They are the same thing from an ideological standpoint, though.
I mean, early PCC didn't have data flow analysis, didn't eliminate dead code, and was known for head-scratching stuff like spilling registers to the stack that didn't need to be spilled. Was the C compiler of the 1980s a transpiler?
Or those experts who insist on correcting you when you say you're programming something in Perl/PHP/Python. "Uhm no, you mean scripting, because the code is run on an interpreter." One of the fastest ways to lose all my respect.
I've mainly been asking: how does the compiler intrinsically change depending on the abstraction level of its source and target?
In my mind your argument is equivalent to separating novels based on whether they were written with a pen or a pencil. Yes, there's a difference between a pen and a pencil, but it has no effective bearing on the novel as written, and it doesn't make sense to categorize novels based on it.
Back around 1985, I got involved in a discussion on BIX with Bjarne Stroustrup about whether Cfront was a compiler or not.
Cfront was Bjarne's original C++ compiler. It translated C++ code into C code which would then be passed through a C compiler.
I didn't think Cfront should be called a compiler, because it didn't compile down to machine code, or even to assembly, only to another relatively high-level language.
Bjarne was quite insistent that Cfront really was a compiler, and the fact that it compiled to another source language was immaterial. It did essentially the same things as any compiler, it only had a different back end code generator. And the code generator could later be swapped out for one that generates machine code.
Of course you can call it whatever you want, but Bjarne called it a compiler.
Ah, it sounds like we are on the same page after all. I mistook what you said for something I've occasionally seen other people say, that a transpiler is not a compiler. My apologies for the misunderstanding.
AFAIK Wirth's own compilers never used an AST. Also, early (and perhaps late) versions of Turbo Pascal emitted code as fast as they could parse it. Here is a document describing how Turbo Pascal 3 worked (by someone who reverse engineered it):
I think calling the call stack an AST is stretching things a bit :-P. Turbo Pascal (and most of Wirth's own compilers) is a simple recursive descent compiler that emits code almost as soon as it scans a token and it keeps very little information around.
So I asked the other guy, but I'm really curious to hear your response too.
Isn't the AST there, just hidden? It looks like the AST is the compiler's call stack: it's building the tree while doing a DFS, then cleaning up after itself as it goes. The tree would be extremely visible if you looked at it across time.
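To make the "tree across time" picture concrete, here's a toy single-pass compiler in the Turbo Pascal style: a recursive descent parser that emits stack-machine code the moment it recognizes a construct, never allocating a tree node. (The grammar and the instruction names are made up for illustration.)

```javascript
// Hypothetical grammar: expr := term (('+'|'-') term)*, term := number.
// No AST is ever built -- each call frame of expr()/term() plays the
// role of a tree node, visible only "across time" as the parse runs.
function compile(tokens) {
  let pos = 0;
  const out = [];
  function term() {
    // A leaf: emit immediately, keep nothing around.
    out.push(`PUSH ${tokens[pos++]}`);
  }
  function expr() {
    term();
    while (tokens[pos] === '+' || tokens[pos] === '-') {
      const op = tokens[pos++];
      term();
      out.push(op === '+' ? 'ADD' : 'SUB'); // emit as soon as scanned
    }
  }
  expr();
  return out;
}

console.log(compile(['1', '+', '2', '-', '3']));
// → ['PUSH 1', 'PUSH 2', 'ADD', 'PUSH 3', 'SUB']
```

Freeze the program mid-parse and the nested call frames form exactly the expression tree; let it run and the "tree" is gone by the time the output exists.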
And I'm curious what makes you say that an assembler doesn't count but source to source compilers do.
And finally, yeah, I've seen references going back to the sixties, but they're almost always marketing literature. I strongly hope that in a half century, CS theory isn't based on any of the marketing literature from the companies I've worked at. : )
I can see what you're arguing, but I think this is so abstract as to not really be meaningful. At no point is there a tree data structure. I suppose there is one if you extend it into the fourth dimension :)
> And I'm curious what makes you say that an assembler doesn't count but source to source compilers do.
I think for me it's because an assembler doesn't have any choice about how to translate the program. A compiler has more freedom for how to translate the program.
I'd then say a transpiler is a form of compiler, but an assembler is not.
> And finally, yeah I've seen references going back to the sixties, but they're almost always marketing literature.
See for example The Communication of Algorithms by A F Parker-Rhodes, 1964. This is a peer-reviewed academic paper, not marketing material.
Some general thoughts, which may or may not be practical depending on how the transpiling works (I have no idea about Go):
- Instead of calling .Print(), have the class extend EventEmitter (node builtin, https://github.com/primus/eventemitter3 is a good browser shim for the same API), with an event that fires once for each line that terminates with a newline.
- Hard mode of the above: Add streams functionality (node builtin, https://github.com/nodejs/readable-stream is an official browser shim for the same API), with output streams that operate on bytes instead of lines.
- Combine parser and printer into a single ES6 class (e.g. using `new Parser()` syntax, rather than the indirect object generation).
- In optional addition to the above, have .parse() return a Promise for the output (not the parsed program), with parsed programs stored on the Parser itself (and examinable via an additional class method).
- Change all method names to be in camelCase instead of PascalCase. Only classes (as instantiated using `new`, not just methods that generate class objects) get PascalCase.
To help, the short version on "why" for EventEmitter, streams, and Promises is that they're three different ways of doing things asynchronously, with different use cases and details.
EventEmitter - You expect to intermittently get individually completed values that you do whatever with and then discard. These values may be grouped into useful sets by the EventEmitter (each separate event type).
Streams - You expect to get raw partial data that you want to do your own buffering and handling with. You want to be able to feed this raw data to a different target (e.g. file writing) without handling the fine details yourself.
Promise - You expect to get a single completed value when an action is completely finished, or (for a Promise without a return value) you just want to have something contingent on an action finishing.
An example of each:
Streams: A TelnetConnection class that handles connecting to a server, then supplies a stream with the bytes from the connection (with each chunk being whatever is convenient for buffering purposes), and ends the stream when the connection drops.
EventEmitter: A TelnetHandler class that uses TelnetConnection and fires an event for each complete line (ending with \n).
Promise: A TelnetHttp class that has a method get(hostname) that uses TelnetHandler and returns a Promise for the full text return value of doing a simple HTTP GET call.
That's a good point. I am adding more and more examples to the Go documentation these days.
Ideally I'd just point the JS people at the Go docs and examples, but the translation is not exactly one-to-one. This is why the README file published with the JS package has a complete example. I'll try to add a few more.