Ruby 2.6.0-preview1 Released

(ruby-lang.org)

173 points | by petercooper 2244 days ago

5 comments

  • petercooper 2244 days ago
    • igravious 2243 days ago
      Thank you for the links, sir. For anyone short on time, the first link is from the patch committer themselves. Definitely worth a read to get an insight into what's going on.

      Optcarrot has gone from 37.2 fps with Ruby 2.0.0 to potentially 59.2 fps with 2.6.0-dev. Wow!

  • ksec 2243 days ago
    119 points. I mean, HN doesn't care about Ruby anymore.

    But I wanted to say that Ruby MJIT, developed by Vladimir Makarov, and this merge of the base infrastructure variant from k0kubun, were both done entirely in their own free time.

    No company is sponsoring either of them to work on it.

    Sometimes I wonder: if the Ruby JIT does end up saving companies millions, would they donate some of it back to further develop Ruby?

  • geraldbauer 2243 days ago
    FYI: I collect articles about all things 3x3 at Planet Ruby (incl. the new jit (mjit), of course), see https://planetruby.github.io/calendar/ruby3x3. Cheers.
  • jhoechtl 2243 days ago
    Don't hold your breath:

    > MJIT takes a block of ruby’s YARV bytecode and converts it into what is basically an inlined version of the C code it would have run when interpreting it.

    > In some ways, this is the same as what other JITs do: they compile bytecode into machine code at runtime. I don’t know of another JIT which so directly shells out to an off-the-shelf C compiler.

    This will never scale out (what about embedded?) and has tons of security issues.

    The only high-grade JIT solution widely available for a dynamically typed language is LuaJIT. I don't know the social dynamics among everyone involved, or why that solution never took off and found more widespread use in other languages like Ruby or Python.

    • pjmlp 2243 days ago
      I doubt LuaJIT is better than Common Lisp or JavaScript JIT compilers.

      On the other hand, if forking a C compiler for generating code is "fast", that tells a lot about the slowness of regular Ruby interpreters.

      • byroot 2243 days ago
        >if forking a C compiler for generating code is "fast"

        I don't think anyone claimed this; even the author reports a roughly 6x slower boot time. However, the generated code, once loaded, is "fast".

        Also, I haven't followed the projects that closely, but if I'm understanding correctly, that shell-out to the compiler is a stub.

        • orf 2243 days ago
          How dynamic is Lua compared to Python though?
      • jashmatthews 2243 days ago
        Ruby 1.9+ uses the YARV VM which is a "direct threaded" VM and on a similar level to the standard Lua 5.3 interpreter. It's obviously not as fast as the hand rolled assembly of the LuaJIT interpreter but Mike Pall basically designed Lua from scratch to be easily optimised. Ruby does not have that luxury.
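
        For the curious, MRI can show the YARV bytecode its VM executes; this is stock RubyVM::InstructionSequence, nothing MJIT-specific (the exact output format varies between Ruby versions):

```ruby
# Disassemble a trivial expression to see the YARV instructions
# the direct-threaded VM dispatches (MRI-specific API).
disasm = RubyVM::InstructionSequence.compile("1 + 2").disasm
puts disasm
```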
        • jhoechtl 2241 days ago
          Mike Pall never designed Lua; Lua was designed many years ago:

          https://www.lua.org/history.html

          but Mike Pall did an extraordinary job optimising his JIT.

          • jashmatthews 2240 days ago
            Thanks for the correction. I meant to say that the LuaJIT implementation was designed from the start to be fast. Even with the JIT disabled, the interpreter, GC, etc. are far ahead of PUC Lua.
      • bjoli 2243 days ago
        LuaJIT is probably faster than the modern JavaScript engines. I don't know how it compares to something like SBCL, but as a JIT for dynamic languages it's pretty much the bee's knees. I have had it beat carefully written PyPy code by an order of magnitude several times.
        • jashmatthews 2243 days ago
          Lua only has 5 types: boolean, float, int, string and table, so that helped offset the fact that basically only one person (Mike Pall) worked on LuaJIT.

          The kind of complex prototypical inheritance that V8 has to deal with is a totally different issue to the goal of having a minimal, embeddable, C friendly scripting language like Lua.

        • pjmlp 2243 days ago
          Actually when I think in Common Lisp compilers, I think about Allegro Common Lisp and LispWorks.

          As for JavaScript, it is really hard to envision how it can beat the money spent by Mozilla, Apple, Google and Microsoft, including universities sponsored by them, in JavaScript JIT optimization research.

          • marmaduke 2243 days ago
            I think it’s not so much that the effort invested is better (though Pall has strong reputation) but in large part that Lua’s semantics make for easier JIT.
          • bjoli 2243 days ago
            It was beating v8 JavaScript hands down when it was still a part of the computer language shootout.

            I can't find other things than one off microbenchmarks now (maybe my Google fu is off), but in those LuaJIT is still king.

    • igravious 2243 days ago
      It's unorthodox, I'll give you that. When I think JIT, I don't think "generates C code and hands that to a C compiler"!

      However. Let's look at this dispassionately. This is only enabled if you use the --jit option. This means that this option can be switched on for production runs and not for dev and test modes. That seems like a very easy switch to flick.

      Also, from reading the pages @petercooper linked to it seems the infrastructure is pretty non-invasive so far and leverages a lot of the existing infrastructure. Bugs have been squashed and all tests are passing.

      I'm excited to try this on my own machine and see how it compares to 2.5 with Bootsnap; as I've posted here before, I'm itching to drop Bootsnap.
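
      As an aside, whether that switch actually took effect can be checked from inside the process. This assumes Ruby 2.6's RubyVM::MJIT module (renamed in much later Rubies, hence the defined? guard):

```ruby
# Report whether this process was started with --jit.
# RubyVM::MJIT exists in Ruby 2.6+; guard for other versions.
jit_on = !!(defined?(RubyVM::MJIT) && RubyVM::MJIT.enabled?)
puts(jit_on ? "MJIT enabled" : "MJIT disabled (start with: ruby --jit)")
```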

      • byroot 2243 days ago
        Well, Bootsnap only aims at speeding up code loading / boot, whereas the author of the JIT talks about something like a 6x slower boot, so I don't really see the relation between the two.

        Also, since I was somewhat part of Bootsnap's creation, I'd be very interested to know why you're itching to remove it.

        • igravious 2243 days ago
          Hey thanks for informing me.

          If Bootsnap is an opt cache I'm all for it. If it's some sort of `require' and `require_relative' hackage black-magic jiggery pokery then no.

          I already ran into an issue with my own app, where I encountered a slight glitch because of Bootsnap. Bootsnap needs to be 100% bullet-proof before I go near it.

          I hate all these optimizations like Spring and Bootsnap and Turbolinks that only end up causing headaches and are yet another thing you have to reason about if something is going wrong. Simpler is better. Fewer moving parts is better.

          Now of the three (Spring, Bootsnap and Turbolinks), Bootsnap is probably the closest to being deployable with 0% hassle. I wouldn't touch Turbolinks with a 10-foot barge pole. Yet another recent confirmation of the wisdom of that decision, for me, is that it messes with Vue.js. No Turbolinks, no problem.

          If Bootsnap and MJIT are more or less orthogonal then yay. There is startup time and execution time. Improving both would be awesome.

          • byroot 2243 days ago
            > If Bootsnap is an opt cache I'm all for it.

            It does integrate with the ISeq API in recent MRIs to act as an opt-cache, yes. That's not what brings most of the performance gain, though.
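
            For context, the ISeq round-trip looks roughly like this (MRI 2.5+; to_binary and load_from_binary are documented but marked experimental):

```ruby
# Compile source to YARV bytecode, serialize it (the part a cache
# would write to disk), then load and run it without re-parsing.
iseq   = RubyVM::InstructionSequence.compile("def hello; 'hi'; end; hello")
binary = iseq.to_binary
loaded = RubyVM::InstructionSequence.load_from_binary(binary)
puts loaded.eval  # => hi
```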

            > If it's some sort of `require' and `require_relative' hackage black-magic jiggery pokery then no.

            Well, there is no black magic; it's computers, and they are machines, not magicians. And yes, that's mostly what Bootsnap (and bootscale before it) does.

            > Bootsnap needs to be 100% bullet-proof before I go near it.

            The thing is, it can't be. There are a couple of corner cases in Ruby's require semantics that just can't be replicated. So yeah, there are a few dozen obscure gems out there that will break if used with Bootsnap, but for the vast majority of projects it's going to be a drop-in boot-time gain without any headache.
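
            A hypothetical toy version of that require-resolution caching, purely to illustrate the idea (Bootsnap's real implementation is far more involved and is exactly where those corner cases live):

```ruby
require "tmpdir"

# Toy load-path cache: resolve a feature name to a full path once,
# then skip the $LOAD_PATH scan on later lookups. Illustrative only.
PATH_CACHE = {}

module CachedRequire
  def require(feature)
    resolved = PATH_CACHE[feature] ||= $LOAD_PATH
      .map { |dir| File.join(dir, "#{feature}.rb") }
      .find { |path| File.exist?(path) }
    super(resolved || feature)   # fall back to normal resolution
  end
end
Object.prepend(CachedRequire)

Dir.mktmpdir do |dir|
  File.write(File.join(dir, "greeter.rb"), "GREETING = 'hello'")
  $LOAD_PATH.unshift(dir)
  require "greeter"
  puts GREETING   # loaded via the cached absolute path
end
```

            Bootsnap additionally caches negative lookups and compiled bytecode, which this sketch ignores.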

          • alceta 2243 days ago
            We use Bootsnap in the development mode of our Rails backend/monolith and it's a breeze for startup, especially when running single tests locally. I have never experienced any issues since we introduced it last autumn.

            It will not help too much for production or testing environments if you do not have specific timing requirements on startup.

            It's a completely different type of shoe compared to things like Spring, which will result in a world of pain in specific situations if you're not constantly aware of what it does.

      • sifoo 2243 days ago
        Compiling (parts of) a C-based interpreter to C is definitely easier than writing a JIT. Precisely for the reasons mentioned, you can pretty much reuse the entire interpreter and gradually improve the generated code over time. And as an added benefit, once the whole thing can be compiled, you gain the capability to build native executables. I just went through the same process with Cixl myself:

        https://github.com/basic-gongfu/cixl/blob/master/devlog/comp...

    • RVuRnvbM2e 2243 days ago
      > embedded?

      This is the Ruby implementation targeting embedded: https://github.com/mruby/mruby

      > tons of security issues

      Any examples?

    • riffraff 2243 days ago
      IIRC Mike Pall himself said that to have a fast runtime you need to have it carefully designed for the language it's going to run; Lua has fairly different semantics from Ruby or Python.
    • draegtun 2242 days ago
    Pharo & Squeak (and I suspect also most commercial Smalltalk versions) are more dynamically typed languages that have a high-grade JIT solution.

      see: http://opensmalltalk.org/

  • sadiqmmm 2243 days ago
    Thanks, awesome :)