Is there anything that Fabrice can't do? I mean, FFmpeg is almost a PhD thesis in and of itself, and he still manages to find time to make TinyCC, QEMU, and now this. To say I'm jealous of his skills would be an understatement.
Does anyone have in mind other people who are in the same league as this man? Mike Pall's LuaJIT sort of puts him at this level, but that is only one thing. Fabrice creates things that everyone is using in one way or another, and if it weren't for the HEVC patent mess, we would have his BPG replacing JPEG.
I know Fabrice a little. He's definitely real, smart and humble.
Visited him at his workplace about ten years ago when he worked at Netgem.
We also have a common friend we visit with spouses and kids, where we discover and discuss new gadgets (physics-based toys, drones, mechanical puzzles, etc.). One day we played with a kind of padlock whose locking procedure involved moving a sort of four-direction joystick. The trick was: you could do any number of moves to lock it. It seemed to allow storing an arbitrarily long sequence of numbers in a finite mechanical system. Fabrice arrived, heard us explain, thought, and said "the mechanics probably implement some sort of hash algorithm". That was the answer.
I've met him as well, a few times, but do not know him personally like you.
He is definitely very humble and a very good listener. When he told us about this side project he was working on about a year ago, he made it seem like it wasn't a big deal: just a small JS engine that would never compete with V8. After a few questions, it was clear that the goal was to implement the latest ECMAScript spec, with all the goodies. It will never be in the V8 league, but it's in a league of its own.
Dan Bernstein. Researched Curve25519 and provided reference implementations of X25519 and Ed25519. Invented ChaCha20, designed so that it never branches on user secrets, to minimize side-channel leakage. Along with Poly1305, these form the foundation of almost all "modern crypto", from Signal to TLS 1.3. He wrote qmail as a superior MTA to the incumbent Sendmail. He's even beaten the US government in court.
And unless there's another software engineer named Daniel J. Bernstein out there, he also authored RFC 1143 documenting the Q Method of TELNET Option Negotiation, which prevents negotiation loops and nails down (in the shape of a state machine) exactly what behavior is good and proper for a Telnet peer.
I referenced this document a lot when writing my own Telnet stack.
That probably depends more on the listener than on the story!
I played MUDs for several years, at the same time that I was learning the ins and outs of programming. I developed some plugins for a popular third-party client called MUSHclient. The game I played also had a somewhat proprietary first-party client, and they used a spare Telnet option (one left unassigned by IANA) to pass data from the server to the client to drive some extra graphical widgets on their client. I got involved in developing plugins that made use of that data, which led me to learning how to negotiate that option with the server and get the data I wanted.
I eventually started developing my own MUD client, which is where the Telnet stack came in. Now, writing a Telnet stack is just something I do when I learn a new language. It's just large enough of a project to exercise some architectural and API-level concerns.
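For the curious, the Q Method mentioned above boils down to a tiny per-option state machine. Here is a minimal sketch in C, tracking only the remote ("him") side of a single option; the state names follow RFC 1143, but the function and action names are my own invention:

```c
/* Minimal sketch of RFC 1143's Q Method for one Telnet option,
 * remote ("him") side only. A real stack would also track the
 * local ("us") side and a queue bit for mid-negotiation changes. */

typedef enum { Q_NO, Q_WANTYES, Q_YES, Q_WANTNO } qstate;

/* What to transmit in response; a real stack writes IAC DO/DONT <opt>. */
typedef enum { SEND_NOTHING, SEND_DO, SEND_DONT } action;

/* Remote sent WILL <opt>: accept only if we want the option enabled. */
action recv_will(qstate *him, int we_want_it) {
    switch (*him) {
    case Q_NO:
        if (we_want_it) { *him = Q_YES; return SEND_DO; }
        return SEND_DONT;
    case Q_WANTYES: *him = Q_YES; return SEND_NOTHING; /* answers our DO */
    case Q_WANTNO:  *him = Q_NO;  return SEND_NOTHING; /* error per RFC */
    case Q_YES:     return SEND_NOTHING;               /* ignore: no loop */
    }
    return SEND_NOTHING;
}

/* Remote sent WONT <opt>. */
action recv_wont(qstate *him) {
    switch (*him) {
    case Q_NO:      return SEND_NOTHING;               /* ignore: no loop */
    case Q_WANTYES: *him = Q_NO; return SEND_NOTHING;  /* he refused us */
    case Q_WANTNO:  *him = Q_NO; return SEND_NOTHING;  /* answers our DONT */
    case Q_YES:     *him = Q_NO; return SEND_DONT;     /* acknowledge off */
    }
    return SEND_NOTHING;
}
```

The loop prevention is visible in the Q_YES and Q_NO cases: an already-agreed or already-refused option is never re-acknowledged, which is exactly what kills the infinite WILL/DO ping-pong between naive implementations.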
Richard Stallman is mostly known for his activist/politics stuff at this point, but he was also a heck of a developer. He wrote Emacs, GCC, GDB and glibc. I'm not sure if he was a total lone wolf, but my understanding is that in the early days he was overwhelmingly the main person responsible for those projects.
It's hard to get a sense of how smart RMS is, because he seems to be anti-showy about it. But I've seen him talk in different venues, and calibrate how much he says to the venue. I've also seen him ramp up how much detail he goes into, and what kinds of arguments he can make, when someone with a background in some area is challenging him on some point. I can't tell how smart he is, but I suspect that most people talking with him underestimate him.
Possibly in his defense, I suspect it made more sense at the time. He'd just tell other hackers "this is free software - free as in...", and they'd say "OK, that's an interesting idea that resonates with what we know and see", or "OK, that's some hippie stuff, but we've been benefiting from that kind of sharing in computing, so you might have a point". They'd hear that in-person, or on the Internet, or in the text files when they went to compile and install the software. Now most people never get the introduction, or it's drowned out in all the massive noise that everyone is exposed to on the Web.
Though maybe saying "free" still works for him, because, when he's giving a talk, he can say "this is free software - free as in..." and people are there, paying attention.
Also, it's in the brand name.
A wild speculation possibility is that he's still thinking decades ahead, and maybe we go back to trying to say substantive things, and paying attention when others say substantive things, and then saying "free software" makes more sense again. (Some other instances of thinking ahead are the reason behind a subreddit name: "https://old.reddit.com/r/StallmanWasRight/")
In any case, I think it's unfortunate if people dismiss RMS's speech without listening, because of quirks and things we don't immediately understand.
I don't agree at all. From day one he has been using intellectual wankery.
Regarding lists like r/StallmanWasRight: I work with plenty of smart people who couldn't care less about free software, some of them not even tech savvy, and they can spot all the problems with a lot of the services we have today.
I think it is more an exercise of throwing shit at a wall and seeing what sticks. Which, by the way, is a perfectly valid method of seeing how your message gets across, but it doesn't make you a genius.
That combined with awful manners, hygiene and the fact that he has some disgusting opinions about child abuse. I can't stand the man. The fact that this guy has any importance past "Well he was the founder of the GNU project and he wrote a text editor" seems ludicrous to me.
I think the difficulty for me in responding here is that your comment contains both assertions about intellectual merit (which could be interesting to explore), and more broad aversion to the person (which might be relevant to intellectual merit, and/or to the meta of dialogue about that, but is a manners minefield).
This thread is pretty tangential to the post to start with, so there will be better occasions to discuss these things. And maybe it helps to separate them out differently.
Guy L. Steele: no proper language design can go without him. I think he is the only person who enjoys writing language reference manuals and is able to explain every language detail, mathematically and linguistically, in a way everyone can understand. He was behind C, Java, Scheme, Common Lisp, ECMAScript, Fortran...
Perhaps you mean behind standards and reference manuals for, rather than behind the languages? Behind Scheme and Common Lisp, okay. But he contributed to standardisation and documentation of the others after their creation, rather than being one of their designers.
Linus has demonstrated incredible long-term effectiveness as a software developer, both creating and then shepherding two of the most important pieces of software of the last 50 years. But he seems qualitatively different from Bellard.
I'm trying to put into words what the difference feels like. Git and Linux demonstrate, for Linus, great intelligence but not genius the way Bellard's works do. And on Linus's side, Git and Linux demonstrate leadership, pragmatism, and a tremendous understanding of how to actually drive a large project forward over time, which Bellard's works don't.
Torvalds is like Brahms, who published relatively few works, but polished, refined, and winnowed them until they were of very high quality. He wrote his duds, but he knew enough not to publish them. Bellard is like Bach: an unbroken series of gems, mostly small to medium-sized, each one an immaculate work of craftsmanship wedded to incandescent genius. Nothing in the entire hoard is inferior work; there's nothing you can point to and say, "Bach screwed up here."
TJ has created a large number of popular software packages, but I don't consider the depth of that software close to the complexity of the kind of software Fabrice is capable of: basically anything he puts his mind to.
bellard.org has the most impressive portfolio of software created by a single developer I've ever seen. Even more astonishing is how varied his accomplishments are, covering some of the hardest programs across different computer-science fields.
He's by far the best programmer in my book. I'm not sure who I'd put at #2; there are a number of contenders, but everyone else I can think of is an expert in one field, and I don't know of anyone else who has compiled such a broad list of complex software covering that many different fields.
Yeah, it's the breadth that gets me. There are several very impressive expert programmers in their area, but Fabrice seems to be able to step into any subfield he finds interesting, work on it for a couple years, then leave something behind that others are willing to spend years of their lives maintaining and extending. I could see QuickJS quietly running on a Mars rover 10 years from now. Not because JS. Because Fabrice. Always bet on Bellard.
No doubt a brilliant academic and mathematical mind with invaluable contributions to the field of computer science and algorithms, and author of the iconic TAOCP, but in terms of software works he's single-handedly produced? He's most famous for the TeX typesetting system and for inventing literate programming, but his other software has had a lot less impact.
But that's mostly by choice, as he's predominantly a professor who spends most of his time teaching, so he's obviously going to have created a smaller body of digital works than a full-time developer.
That's probably true; FFmpeg almost single-handedly created an entire cottage industry in video production. The amount of elaborate image processing it allows you to do probably does exceed most PhD papers you're likely to read.
One thing to note... he doesn't bother himself with pretty websites, community building of any kind, or splash pages. It's just plain HTML. Not even GitHub, as far as I could tell. He just works on the tech and puts it out there, with benchmarks as proof of his claims.
His humility (and general lack of "marketing wank" on his site) is certainly another aspect I admire. There's far too much "look at my absolutely amazing $trivial_app with best-in-class X and high-quality Y and gorgeous Z and ..." out there, that just seeing the equivalent of "This is a JS engine I wrote." and a simple enumeration of features without all that embellishment feels very refreshing.
Yes that's true in general, but ... I hate to be the party pooper here, but ....
What makes this JS implementation deserve the name "Quick"?
I noticed it doesn't appear to do JIT compilation or any of the fancy optimisations V8 does. So whilst it may start up quickly, and may interpret quickly, it won't actually execute JS quickly relative to V8, unless I missed something huge in the linked web page.
You certainly can embed V8, but it's a more involved affair. It's not the use case that's prioritized.
It's not at all my field of expertise, but my guess is that problems with V8 are that it's an order of magnitude larger, that it's written in C++ rather than C89, that it's more likely to make large changes to the way it works, and that it uses more memory.
All of those are good decisions to make for a component of Chrome that helps run webpages. But sometimes you'll only want to make it possible for users of your less than massive technical application to script its behavior using a language they might already be familiar with, and then those properties are undesirable.
"Quick" calls to mind something that's not just fast, but nimble.
Wow... I didn't pay attention to the URL and when I open this site I thought 'Who does an insane project like writing a new JS/ES interpreter all by himself (in C). Almost as insane as the guy who wrote that x86 emulator for the browser...'.
Just to learn that it is indeed the same guy (plus Charlie Gordon) :D
Wow. The core is a single 1.5MB file that's very readable, it supports nearly all of the latest standard, and Bellard even added his own extensions on top of that. It has compile-time options for either a NaN-boxing or traditional tagged union object representation, so he didn't just go for a single minimal implementation (unlike e.g. OTCC) but even had the time and energy to explore a bit. I like the fact that it's not C99 but appears to be basic C89, meaning very high portability.
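For anyone unfamiliar with the NaN-boxing option mentioned above: IEEE 754 doubles have an enormous space of NaN bit patterns, and non-float values can be tucked into that space so that every value fits in a single 64-bit word. A toy illustration (the tag constant is invented for this sketch; QuickJS's actual encoding differs):

```c
/* Toy NaN-boxing sketch: pack either a double or a 32-bit integer
 * into one 64-bit word, using quiet-NaN payload bits as the tag.
 * Illustrative only; real engines handle more types and guard
 * against genuine NaN doubles colliding with tagged values. */
#include <stdint.h>
#include <string.h>

/* A quiet NaN with one payload bit set, marking "boxed int". */
#define TAG_INT UINT64_C(0x7FF8000100000000)

typedef uint64_t boxed;

static boxed box_double(double d) {
    boxed b;
    memcpy(&b, &d, sizeof b);  /* doubles are stored as-is */
    return b;
}

static boxed box_int(int32_t i) {
    return TAG_INT | (uint32_t)i;  /* int lives in the low 32 bits */
}

static int is_int(boxed b) {
    return (b & UINT64_C(0xFFFFFFFF00000000)) == TAG_INT;
}

static int32_t unbox_int(boxed b) { return (int32_t)(uint32_t)b; }

static double unbox_double(boxed b) {
    double d;
    memcpy(&d, &b, sizeof d);
    return d;
}
```

The payoff is that doubles need no unboxing at all, while integers and pointers cost only a mask-and-compare to identify; the tagged-union alternative spends a separate word on the tag instead, trading memory for simpler invariants.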
Despite my general distaste for JS largely due to websites tending to abuse it more than anything, this project is still immensely impressive and very inspiring, and one wonders whether there is still "space at the bottom" for even smaller but functionality competitive implementations.
>> Despite my general distaste for JS largely due to websites tending to abuse it more than anything
"If JS were so great, we wouldn't have increasing TypeScript adoption or a million compile-to-JS languages."
That's quite the leap. Plenty of people like writing plain JS. Your only argument for why "it sucks" is basically "because some other people made tools that they thought were better." What's one language that doesn't fit that?
> I know you're not meant to ask why the downvotes, but I do wonder in this case
JS is the new PHP. It's really popular (i.e. lots of n00bs) and not at all elegant, so the programmer elite regard it with derision.
In another 6-8 years when WASM targeting alternatives become more mature we will see a new explosion of browser scripting.
> not at all elegant, so the programmer elite regard it with derision
> JS is the new PHP. It's really popular (i.e. lots of n00bs) and not at all elegant, so the programmer elite regard it with derision.
I would love to find a language I consider more elegant than modern JS/TS. Haven't seen anything yet though. I also question your claim that it's the "elite" who regard JS with derision. Fabrice is presumably fine with it considering he spent valuable time writing an engine for it.
I don't think JS/TS is that bad, but it's certainly pretty far from elegant. TS adds decent type support, which helps tamp down on dynamic complexity, but Rust, Go, Kotlin, C#, Swift, OCaml, even Java are all much more elegant languages. I realize that elegance is largely subjective, so I won't belabor the point, but it's a pretty widely shared subjective assessment.
The amount of JS I come across on sites that is purely superfluous and serves to irritate, manipulate, or simply waste resources far outnumbers the actually useful, efficient, and purposeful use of it in webapps. I'm talking about things like loading several MB of it with half a dozen frameworks and libraries, just to render a few KB of text. Don't even get me started on popups/popunders/popins/clickjacking/scrolljacking/historyjacking and all that other crap that JS gets used for far more often.
Observe that Bellard's own site, despite having a few JS-based projects and demos, and now himself being the author of a JS engine, has not succumbed to the "JS everything!!!1" fad.
How can they? They don't know what it is. They are complaining about it all the time, you just don't hear it framed that way: every time they have to fill in an abomination of a form, every time they hard-close their browser because it's stuck, every time they just wait for a page to load, every time 'something' happens that they did not ask for but that happened anyway.
Anecdotally, but I am sure this resonates with people who sometimes order from somewhere other than Amazon: I tried to order some Impossible meat from a site here, and when it was time to pay, there was a JS undefined error and it emptied my shopping cart. This happens a billion times a day all over the place.
Everyone is focused on 'process' (CI, deployment, many irrelevant unit tests; a lot of busywork, basically), 'beautiful code' (style, linting, things a beautifier can do for you automatically), and ego (GitHub stars), but robustness and longevity are just not really a focus for many.
You just summed up the entire web development ecosystem in general. There's also the trendchasing and continual churn of breaking things that used to work just fine, replacing them with even more inefficient and complex solutions. In the area of the software industry that I work in, doing things that way would quickly make customers disappear.
The "inelegance" or otherwise "lack of purity" of JS doesn't really bother me; a lot of languages have parts like that, and I've written some JS myself.
This man is a wizard. You can also thank him for ffmpeg and qemu. A company I worked for once tried to hire him as a consultant because he had implemented an LTE BTS in software. Is there anything he hasn't done?
EDIT: tombert beat me to it by a couple minutes.
yes! he has been one of my favorite programmers over all these years. so happy that he continues to dispel the 10x programmer myth, at least as far as i'm concerned. we should all take note. it's less about productivity and more about focus and hard work.
this project seems to have started in 2017. from a quick glance at the code, he used c (his favorite language), which, by the way, people are still busy convincing us not to use. many of us would have been discouraged, distracted, or simply fazed by the pace of our field.
There are at least two years of work in there as Fabrice Bellard's side project. This would mean roughly 10-15 person-years of full-time work for a small team of experienced engineers, which is about what it took for JerryScript:
The problem with putting a $ figure on it is that people who create free software create immense amounts of value but don't capture even a tiny fraction of it for themselves. OTOH, to monetize their genius they need to help corporates do things that are not necessarily great for society. Also, a corporate situation may hamper their genius, although at this level he probably gets to call the shots.
Also, a lot of useless consultants would earn that much, as would people doing shady shit, as well as many talented people, so the $ comparison is kind of insulting.
Worth noting: the demo is a WASM-compiled instance of this engine. I'm not sure, but I think this might be the first example of a fully featured, potentially production-ready, JS VM sandbox running in the browser. (We're looking into safe ways to enable third party scripting of our own application, and such a sandbox would be a very nice tool to have in hand.)
I have a small, CPU-intensive benchmark which shows the performance of QuickJS to be comparable to other interpreters written in C. It's on par with MicroPython and recent versions of Ruby, and a little faster than CPython and Lua.
However, it's still 2-3x slower than the optimized, CPU-specific interpreters used in LuaJIT and V8 (with their JITs disabled), and 20-100x slower than the LuaJIT, V8 and PyPy JIT compilers.
If the scoring scheme works the same as the results he shows on the site (higher is better), then V8 is still far faster than QuickJS. Which wouldn't be all that surprising since the V8 folks have spent hundreds of thousands of man-hours on optimization and JIT magic, which is something that would be hard to duplicate, to say the least, in a ~1.5MB all-interpreted engine.
It shows only 2-3% of V8's performance, but that's enough for running simple embedded scripts. And the thing is, it meets the JS standards. At the start you can embed it into your software easily and provide scripting functionality; when your software grows enough and has heavy scripts, you can migrate to V8, since they're compatible.
We usually forget the purpose of scripting languages.
Their main purpose is to be glue between native calls:
take the output of one native function and pass it as input to another. So instead of writing a ray-tracer in JS, you should write it as a native function (not even in WASM).
Either a) use some compilable language (or V8 with JIT), or b) use something small but provide easy ways to extend scripts with custom native functions.
I've chosen b), and so the engine that does HTML/CSS and scripting is six times more compact than the V8 binaries alone. For an embeddable engine, that is clearly better.
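The "glue between native calls" idea can be shown engine-agnostically: the host registers native functions in a name-to-function table, and the scripting layer merely routes values between them. A sketch with made-up names (in a real QuickJS embedding, JS_NewCFunction plays the registration role):

```c
/* Engine-agnostic sketch of the "script as glue" pattern: the host
 * exposes native functions through a registry, and a script call like
 * square(3) reduces to a lookup plus a value hand-off. */
#include <stddef.h>
#include <string.h>

typedef double (*native_fn)(double);

/* Two hypothetical native functions the host wants scriptable. */
static double native_twice(double x)  { return x * 2.0; }
static double native_square(double x) { return x * x; }

static const struct { const char *name; native_fn fn; } registry[] = {
    { "twice",  native_twice },
    { "square", native_square },
};

/* What the scripting layer does at a call site: route by name. */
static double call_native(const char *name, double arg) {
    size_t i;
    for (i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].name, name) == 0)
            return registry[i].fn(arg);
    return 0.0;  /* unknown function; a real engine would throw */
}
```

The design point is that the heavy lifting stays in native code, so the interpreter's raw speed matters much less than its size and its ease of binding native functions, which is exactly the trade-off option b) makes.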
For browsers, where JS runs with no real way to execute native code... they MUST have V8 and the like.
That also leads to the Electron problems: browser engines are simply not suitable for use as embeddable UI engines, by design.
From the documentation, compilation of JS to C looks very interesting. If variable types are well known/defined (like Uint8Array, ...), this effectively presents a way to derive performant binaries from JS.
For those who don't know, the author is one of the most prolific programmers/creators of our time. Please check his bio.
The identity of its author and its very recent release date give this library significantly more credibility than any other similar "embedded JS" library. It may well be the best one available to date.
> But for most people, they are comfortable with this agreement. They don’t see it as a breach of freedom or liberty, but as a simple and just contract.
Having people sign away something that they are not fully aware of the implications of is not the same as people not caring.
If you presented software as "and at any time, this software you just paid $300 for, can be remotely disabled and you won't get your money back and you have no choice about it", people might be less willing to accept the status quo.
To use (yet another!) car analogy, if I buy a car and the manufacturer goes out of business, I can still get support from independent mechanics. Heck if there are enough customers, a new company may spring to life just to make replacement parts for the car! (This in fact happens super often, non-oem parts are everywhere.) Even after bankruptcy of the original manufacturer, an entire third party supply chain can exist and people can keep driving their cars.
Compare this to closed source software, where the software can literally just stop working one day, with no recourse.
People are slowly coming to realize this, especially as more and more software has a requirement to ping a server somewhere. IoT devices are helping bring this issue to public awareness.
GPL says "no matter what, YOU have the right to what is running on your computers, you can open it up and do what you want, or pay someone else to do the same."
How is it unethical that I should have control over the software running on my PC, in my house? The same PC my webcam and microphone are connected to? The same PC I use for online banking and email?
Nothing else in the world works like closed source software. Hardware manufacturers try, and there is a legal fight to try and make physical goods like software (sealed shut, unable to open, no third party parts), but traditionally anything I buy I've been able to disassemble at will.
Why should software be any different?
(Of course I say this as someone who has spent over a decade writing a closed source software... and I use Windows as my primary OS.)
No. You think that GPL is unethical. It does not make it unethical.
After thorough reading, your blog post did not convince me and here is why:
> Proponents argue that the GPL has a long-term goal of making more software free for all, by limiting short-term freedoms of some, for the sake of encouraging the propagation of the GPL via software, like a virus. That’s why it’s called a viral license.
This is a faulty comparison. You choose to use code under the GPL and to make your code GPL as a result. You don't choose to catch a virus.
> they are using a system they fundamentally disagree with, in order to promote and propagate a new system, and they are doing this from within the new system, which they would replace the old system with from without if they could.
What is your suggestion? They live in our world by its rules. How do you want them to achieve their goal?
Also, what is wrong with doing this?
> So the authors of software have rights, a fact which proponents of the GPL agree with. But they argue that the users of software have rights which should, in an ideal system, trump those of software authors.
> This is a clear contradiction. In an ideal legal system, either the software authors should have the absolute right of licensing their software as they see fit, or the software users should have the absolute right of accessing and modifying the software regardless of the software author’s wishes and intent.
Sorry, I don't understand the contradiction here.
> Experience shows that most people see nothing wrong with the current system: let authors distribute their work with whatever licenses they want, and let the market decide whether an author’s requirements are reasonable enough to cooperate with.
Great. If "experience shows", you must have a reference, right?
Anyway, this is an argumentum ad populum: "a fallacious argument that concludes that a proposition must be true because many or most people believe it" [which does not make it right].
Also, I don't want to "let the market decide". I want good behavior to be enforced, not left to some haphazard process that might end up producing good behavior. The "market" is not necessarily right. There is no guarantee here. But I agree this is a personal opinion.
Anyway: choosing to use GPL is indeed a way to participate in this free market, if such a thing exists, and "let it decide".
> Practical philosophical systems agree with this. When someone creates a work, they have the absolute right to do what they want with it, as long as they themselves do not break the law, and as long as the existence of the thing itself doesn’t violate the law either.
Oh, there we seem to agree. So authors of GPL software do not want to see their code used in proprietary software.
> Thus, the heart of the GPL goes against common sense, experience, and the just liberty of creators over their creations.
It goes against your common sense, not mine.
Talk about common sense now.
> But that actually brings up an interesting point: the GPL actually limits people’s freedom.
This is the point of the GPL. To prevent people from reusing the code without forwarding freedom to users.
If you make a code that is proprietary, you are the one who is "actually" limiting freedom of other people.
One person's freedom ends where another person's freedom begins. Freedom is nothing like an absolute grail.
If you are pissed off because you can't reuse some interesting GPL code in your proprietary software or your permissively licensed software, well, you are free to not use it. Nobody forces you to do it. You are also free to (re)consider setting your program under GPL. It is your choice, and one of the goals of the GPL too.
Thing is: there is no absolute truth. Some people think that the right way to write free software is to use permissive licenses. Some, to use copyleft. Some people think they should be free to produce proprietary software. Some people think that proprietary software is unethical and have given many solid arguments for that.
Common sense on this question is apparently non existent (yet?).
I can't wait to mess around with this, it looks super cool. I love the minimalist approach; if it's truly spec-compliant I'll be using this to compile down a bunch of CLI scripts I've written that currently use Node.
I tend to stick with the ECMAScript core whenever I can and avoid using packages from NPM, especially ones with binary components. A lot of the time that slows me down a bit because I'm rewriting parts of libraries, but here everything should just work with a little bit of translation for the OS interaction layer which is very exciting.
I have a domain-specific Windows application that uses Google's V8 engine for hosting user-written scripts. It hasn't been upgraded for several years, and when I recently took a look at updating the V8 version it's linked with, I was dismayed at how bloated and complex V8 has become. Seems like it can't be compiled down to a single DLL anymore, at least not without turning the compilation into a miniature research project in itself.
So there's definitely a need for more lightweight contenders in this space.
Does the application basically pass data to a user script (the user knows JS, so it is useful for them?), and then the JS returns the data after processing it?
Amusingly I asked the question and remembered that I actually worked on (supported) a hardware product that did this to some extent. It was a disaster as the scripting language would eat memory / cpu and crash the box ;)
From the description it's probably a Bacon cycle collector. The basic idea is that it checks to see whether reference counts for all objects in a subgraph of the heap are fully accounted for by other objects in that subgraph. If so, then it's a cycle, and you can delete one of the edges to destroy the cycle. Otherwise, one of the references must be coming from "outside" (typically, the stack) and so the objects cannot be safely destroyed. It's a neat algorithm because you don't have to write a stack scanner, which is one of the most annoying parts of a tracing GC to write.
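The trial-deletion idea at the heart of Bacon-style collectors can be sketched in a few lines: within a suspected subgraph, subtract every internal reference from each member's refcount; any count that survives must come from outside. A toy version (it assumes all children of the candidate objects are themselves candidates, which a real collector establishes by traversal rather than assumption):

```c
/* Toy version of trial deletion as used by Bacon-style cycle
 * collectors. If every reference into a subgraph is accounted for
 * by the subgraph itself, it is an isolated cycle and collectible. */

#define MAX_CHILDREN 8

typedef struct obj {
    int refcount;                        /* ordinary reference count */
    struct obj *children[MAX_CHILDREN];  /* outgoing references */
    int nchildren;
    int external;                        /* scratch during the scan */
} obj;

/* Returns 1 if objs[0..n-1] form garbage (no outside references),
 * 0 if any member is still reachable from outside the subgraph. */
static int subgraph_is_garbage(obj **objs, int n) {
    int i, j;
    for (i = 0; i < n; i++)
        objs[i]->external = objs[i]->refcount;
    /* "Mark gray": trial-delete every internal reference. */
    for (i = 0; i < n; i++)
        for (j = 0; j < objs[i]->nchildren; j++)
            objs[i]->children[j]->external--;
    /* Any leftover count means a reference from outside (e.g. stack). */
    for (i = 0; i < n; i++)
        if (objs[i]->external > 0)
            return 0;
    return 1;
}
```

A real collector also has to find the candidate subgraphs itself (by scanning from objects whose count was decremented but stayed nonzero) and restore the counts of survivors; the sketch only shows the counting trick that makes stack scanning unnecessary.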
This is a very broad question. Garbage collection is a large topic. The general consensus is that reference counting methods give typically lower throughput than heap scanning methods but (I think) lower total memory usage and more predictable performance.
They do bookkeeping on every free, whereas a heap-scanning GC will typically do bookkeeping in small incremental bits on alloc, and mainly on the GC cycle (when space runs low).
Would also be interesting to see how this GC runs in threaded or multi-instance environment. For example, Nim (the language) can produce libs by having them link to one dynamic lib containing the RT. Other compilers, like Crystal, don't support this outright.
Some editors, like Emacs, make it very easy to work with one huge file, because you can have multiple independent views ("windows" in Emacs parlance) into the same file. It is - or maybe has been - quite common to work like this in LISP communities.
It's C, so it's harder to tell than in other languages, since it doesn't have separate namespaces for everything. You can just write a bunch of functions and global variables in separate files and then act like they're all in the same file, and there's really no difference.
I can't pass the tests on armv8l, and the site seems to have no bug-report links. Does anyone know how to let Bellard know about this?
make -j8 test
Error: assertion failed: got |2e+1|, expected |3e+1|
at assert (tests/test_builtin.js:17)
at test_number (tests/test_builtin.js:307)
at <eval> (tests/test_builtin.js:589)
> What's to like about it? First thing I wanted to do is look at the sources, which is now a multiple-step process - especially since I'm on windows and usually work on a Mac. Not to mention mobile.
Such hardship. High quality software is provided for free and people are complaining about the distribution format? Code was developed just fine before github. If viewing files in a tarball is an impediment then this library is not for you.
Actually, why are we (mostly) ignoring Charlie Gordon? Every second comment here seems ready with praise for Bellard (who no doubt deserves it), but Gordon seems like the right-hand man, a significant part of the dynamic duo here and very much an unsung hero, barely noted.
> Seems like a pseudonym; Charlie Gordon is the name of the protagonist in Flowers for Algernon.
Just had a vision of a hacker blog where the author starts out writing in the most godawful VB6 spaghetti, gets some sort of brain operation and subsequent blog postings are like Linux booting in RISC-V implemented in Conway's Game of Life. Then suddenly a mouse dies and the quality gradually reverts back to how cool the <BLINK> tag is in HTML.
He's got over 40 projects in total there, and each one of those is deep --- they would either take a very long time or be impossible for the average programmer. In contrast, many other programmers I've heard claim to have done over a dozen different projects turn out to really be "I glued several libraries together" repeated many times; very much the opposite of Bellard.
In fact, I suspect one of the reasons he is so productive is because he shuns all social media.
I just tried building this on my Mac and it looked initially like it all was building fine, but it eventually failed when building qjs32.
Thinking this probably didn't matter much I went ahead and ran `./qjs examples/hello.js` which worked as advertised – cool! Tried `./qjsbn examples/pi.js 5` and it worked as well – very cool! Then I tried `./qjs examples/hello_module.js` and got this:
SyntaxError: unsupported keyword: import
I don't know what I did wrong – anyone else try it yet?
I'm asking this here because I don't really know where else to do so: I'm trying to compile a binary from a js source that uses the standard modules (they are loaded by default if you run the interpreter) so the following works:
I think mainly as a C replacement, for added productivity. Compared to Node.js, QuickJS seems to interoperate directly with C. Node.js's standard library is fully async, while QuickJS uses the C standard library. There's also GnomeJS, which can be used to build apps that look and feel like native apps; maybe you can do that with QuickJS too!? Being able to obfuscate the code by compiling it is also considered a feature, useful when you want to distribute an application without giving away the source code.
People who choose this instead of V8 typically need to run it on a microcontroller, or on an OS without executable memory pages (and therefore where JITs are impossible).
Does Fabrice Bellard know any language other than C? It looks like all his work is in C. Just trying to see if there is a correlation between 100x programmers and the number of languages they know, because being extremely proficient in a language is important to being highly productive.
I think you'd need to translate Node-specific APIs to make this work, e.g. the I/O features and the require function. It's probably more work than just rewriting the app to use QuickJS APIs instead, but I don't think it's impossible to build some sort of compatibility or translation layer. I couldn't say whether it'd be more performant, but it would have a killer feature over Node: compilation to a single binary without dependencies.
fs/net and some other important pieces of Node.js are written as C++ addons for V8, so hooking them would be very difficult, I'd guess. Microsoft has been working on this for their ChakraCore engine, though. There was a conflict, if I recall correctly: Microsoft suggested adding another abstraction layer to make hooking in other JS engines easier, but the Node.js team refused. I may not know that situation very well. But I downloaded the ChakraCore-based Node.js once and it ran my project without problems, though performance was about 5% slower.
Doesn't look like it. It uses the `os` module name for its own thing, so at a minimum you'd need to patch that (in addition to actually having to implement all of Node's APIs on top of the `std` primitives or as C modules).
Yeah, I got the same error, along with a bunch of warnings earlier in the build about 32-bit builds being deprecated on recent macOS versions. And then when it goes to link, it can't find the 32-bit symbols for functions like `sqrt()` and `printf()`.
Hacky workaround: comment out the "CONFIG_M32=y" line in the makefile. This will disable building the 32-bit versions of some tools. ("Edit the makefile" is, according to the docs, the canonical way to customize your build settings.)
And I don't think there's any real shortcut for dealing with compiler error messages. You've just got to learn what they mean and what sort of thing tends to cause them.
Record-query  embeds v8 for its query language, which seems to add a significant amount of heft. I know that couchdb uses js for querying as well. It seems like this could be another option for something like that.