Particularly thunderous applause for providing both backwards compatibility (published packages will keep working, won't require anything from maintainers, and we can mix old and new) and autofix features (for when maintainers are willing to make the breaking changes).
Glad Rust learned from breaking changes hiccups of other programming languages :)
EDIT see discussion with kibwen below: fixed forward compat to backwards.
> published packages will keep working and won't require anything from maintainers
That's backwards compatibility: new versions of the compiler will continue to be compatible with code that was written previously. Forwards compatibility would mean that new code would be compatible with old versions of the compiler (this would be analogous to a website from 2018 working in IE5, or being able to play The Last Of Us on PS1). Rust's backwards-compatibility promise ensures the former, but not the latter.
That said, if you stick edition=2015 in your crates, that might go a liiiiitle bit towards keeping your code compiling on old versions of rustc, allowing you to partially "opt-in" to forward compatibility for your users, if you so chose. Though this would only keep you from accidentally using things that are edition-specific; remember that when you use a version of rustc from here on out, if you compile with the 2015 edition, you're not just getting a version of the compiler frozen as of last month; code compiling with the 2015 edition will still be receiving most of the same features as the 2018 edition (a strict subset, obviously; they're still the same compiler under the hood, only the frontmost of the frontend diverges).
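For reference, the edition is just a key in the crate's manifest (the crate name below is made up); setting it explicitly, or omitting it, keeps the crate on the 2015 edition:

```toml
# Cargo.toml
[package]
name = "my-crate"    # hypothetical
version = "0.1.0"
edition = "2015"     # this is also the default if the key is omitted
```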
Ah, very good point! I was imagining the case where someone does `cargo new`, which will give them `edition=2018` automatically, and didn't consider that you could just delete that line rather than editing it. :)
• There are no disruptive changes. You'll just opt in to a couple of new keywords and some warnings turned into errors (most of which will be fixed for you automatically). The "old" code continues to work unchanged and can be mixed with the "new".
• In the last year, in general, Rust has fixed a bunch of papercuts in the syntax, modules/namespaces, and the borrow checker that were annoying and unintuitive. New tooling has been added. It's now easier for new users, so if you've hit a wall with Rust 2015, give Rust 2018 a try.
• Rust is finding its strengths in embedded, WASM, CLI apps, and these areas are getting more polished.
• High-performance networking is another area of focus, and async/await is coming soon.
Seriously though, the Go core team has been saying from the start that even they were surprised how big of a positive impact gofmt had. It reduces bikeshedding and makes it easier for people to read each other's code because you learn to read the same basic structure fluently and quickly.
It's a basic IxD lesson about software tooling that I think more languages should pick up on.
Personally, I think I miss gofmt-on-save more than any other feature when I write other languages (gofmt is a lot more robust than autoformatters in most other languages). Being able to type carelessly and quickly and know the output will be corrected as soon as I hit CTRL+S lets me focus on putting my thoughts on the screen. And it even works as a soft-compiler: if the autoformatter breaks it means I have a bug somewhere.
I don't know about Go, but clang-format has problems: it slightly improves ordinary code, but the price is too high, because sometimes it totally butchers code, making it unreadable.
Macros combined with lambda expressions make code formatters generate ugly output (I don't know what CLion uses to parse code, but it also goes nuts, wrongly thinking that some code is unreachable).
To be fair, one of the most useful features of rustfmt/clang-format in my opinion is reformatting things that go beyond a maximum line length, which also seems like one of the harder parts to implement in a way that's useful. At least the last time I used it, gofmt punted on this issue.
Personally, I think gofmt made the right call -- let the editor soft-wrap long lines. This is much nicer because you can then dynamically pick whatever window width you want. Hard wrapping defeats this.
(But of course there are downsides, because if you're in a context where the editor/viewer isn't soft wrapping, it's worse. Still, on balance, I think not manually wrapping lines is the way to go.)
A lot of it depends on the properties of the language itself. Traditional C-style imperative code involves many short statements each doing one small thing, one expression per statement, one statement per line; this makes it simple to punt on wrapping long lines (which is 90% of the difficulty of writing an automatic code formatter). Go inherits this "tall-and-narrow" idiomatic style. But other languages, especially ones that try to encourage fluent APIs, will often have several expressions within each logical statement, leading to fewer lines of code overall but more action on each line. In these languages it tends to be idiomatic to break the statement across multiple lines (usually at method calls), in which case an automatic code formatter's job is to do this operation manually.
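As a small illustration of the "wide" style, a fluent iterator chain in Rust is conventionally broken at each method call, which is exactly the wrapping work a formatter in such a language has to automate:

```rust
fn main() {
    // One logical statement, several expressions, idiomatically
    // wrapped at the method-call boundaries.
    let total: i32 = (1..=10)
        .filter(|n| n % 2 == 0)
        .map(|n| n * n)
        .sum();
    // Sum of squares of the even numbers 2, 4, 6, 8, 10.
    assert_eq!(total, 220);
}
```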
Seems like you're talking about splitting statements over multiple lines because it aids comprehension and makes sense semantically. Code formatters are concerned with splitting lines that exceed some arbitrary length limit, no? The former seems like something I should do as the human authoring the code, for readability (a decision very similar to "do I assign this value to a variable or inline it?"); the latter makes sense to do dynamically, given the line-length limit of the moment.
That said, no experience with these languages, so maybe I'm way off base. I'd be curious to see code samples, if you really feel like digging deep. :-)
Implementing sensible line wrapping behavior is the hardest part of a code formatting tool. It's also very important for readability, and if you don't design for it early on it can be hard to add later.
I just converted one smaller crate and it was only about 5 minutes of work with cargo fix --edition. Especially because it does not matter that all the dependencies are still in Rust 2015.
Besides NLL, I really like the changes in the module system. Having to use extern crate was a drag, so it's nice that that isn't necessary anymore. Importing macros with use is both nicer and more intuitive than macro_use.
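For illustration, here's a minimal sketch of the 2018 path style (the module and function names are made up):

```rust
// Rust 2018 paths: `crate::` always means "this crate's root",
// so in-crate imports can't be confused with external crate names.
mod util {
    pub fn double(x: i32) -> i32 {
        x * 2
    }
}

mod consumer {
    // Rust 2015 allowed the bare `use util::double;` here, which
    // looked identical to an import from an external crate.
    pub use crate::util::double;
}

fn main() {
    assert_eq!(consumer::double(21), 42);
}
```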
Really excited that non-lexical lifetimes landed! This was the most confusing part of the borrow checker: NLLs were the main class of "this should work but it doesn't" that I encountered in Rust. This should make Rust feel easier to pick up.
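A tiny example of the pattern NLL unlocks; under the old lexical borrow checker the borrow of `first` lasted to the end of the scope, so the `push` was rejected:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];     // shared borrow of `v` starts here...
    println!("{}", first); // ...and under NLL ends at its last use,
    v.push(4);             // so this mutation is now accepted
    assert_eq!(v.len(), 4);
}
```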
This might seem silly but I'm struggling to get into rust because the vscode plugin for it is sometimes too buggy (but the progress on it is awesome and devs deserve a round of applause). I use vscode for 4 or 5 other languages every week and don't want to maintain multiple text editors/IDEs.
What do people usually do in these cases? Surely I'm not alone.
I use vscode with the rust (rls) extension. It's not perfect. Sometimes when I edit cargo files it gets stuck and isn't recompiling my code. When that happens, I just ctrl-shift-p and select "reload window" and it's fixed in a couple seconds. It's not ideal, but it gets the job done and it's better than manually compiling my code.
I write Rust in `neovim` and I don't use any kind of linter that's integrated into the editor. I also have Visual Studio Code installed and set up, and here's the thing: Rust (and also Haskell, to a lesser degree) can have pretty long error messages, but they are also informative. Some of them even have this sort of ASCII art showing you exactly where something was moved and then borrowed and so on. If you only use VSC you might not even be aware of those errors, since the formatting of error window popups is pretty poor.
So I simply open a new tmux pane and run a file watcher (`entr`) which runs `cargo check` (and in other windows also clippy and test) and it works perfectly.
Since VSC has an excellent integrated terminal, which you can split and which supports multiple tabs, you could just do that in VSC itself. Frankly, I don't know any editor that displays live linting (and compiler feedback) in a way that I find useful, but of course YMMV.
I've only used Intellij IDEA, VSC and neovim though and I write JS (professionally) and now Rust and Haskell for fun.
Why get hung up on one tool? Just write Rust code as if it were plain text. It's a statically typed language, so the compiler will check your code before running it, and you can run rustfmt to fix formatting.
Maybe one day you can shun the darkness and embrace emacs ;) all your problems disappear on that day
The fancy tools can check your code as you write it and highlight problems inline so you don't have to switch context in order to fix them. This saves time in any language, but static analysis with Rust is unusually powerful so tightening that feedback loop pays serious dividends (when it works).
I'm not saying emacs and vim aren't really cool technologies. But why would a person who cares about optimizing keystrokes and crafting macros dismiss a tool that optimizes debugging and refactoring? I spend a heck of a lot more time staring at code than I do typing it, and if the reverse is true for you then you're some kind of savant and/or (more likely) you're writing too much code and reading too little of it.
Robust autocomplete is probably the most important tool for a language there is, though. It's a shame that Rust doesn't have it, and I'm slightly concerned that it never will: type inference means that in lots of places there simply isn't enough information to do autocomplete properly.
Well, that's always an option, so it isn't really advice. It's already what you have to do when the tooling support is bad, and it's at its worst precisely when an editor could instead be inlining the output of static analysis.
Integration has a lot of benefits, like telling you the inferred types of intermediate expressions. "Don't care about good integration" isn't really advice.
It's like people who brag about not using syntax highlighting. The other 99.9% of us consider it a good tool that improves our workflow.
It's more of a "not yet implemented" thing than a non-solvable problem.
However, a thing that needs to be considered is that building good IDE support for Rust is a very hard task, since it's a pretty complicated language (e.g. with all the type inference, traits, automatic dereferencing and conversions going on).
Another factor is that the rustc compiler wasn't built from the start to support these kinds of use cases, as e.g. the TypeScript and Kotlin compilers were, in my understanding. That means the IDE tooling can't directly query the compiler, but must duplicate some of the parsing and inference logic.
Another good example of "compiler as a service" is the Roslyn compiler for C# (and also written in C#).
What is interesting is that Roslyn was a complete rewrite compared to the original C# compiler (which was written in C++ and couldn't power an IDE, so the tools like Resharper had to implement their own compiler front-end, essentially). IIRC, the ability to evolve the language was also a factor.
It seems that as a language matures, there comes a point where tooling (and further evolvability!) becomes important enough to justify the huge undertaking of rewriting the entire compiler. I wonder when Rust will reach that phase...
I know that there are several projects around, and I'm for example halfway comfortable with using cmake as a main build tool (bazel looks interesting too).
A huge issue however is still fragmentation. Only a minimal amount of libraries are provided and can be built in this fashion, and translating and maintaining build files of others is no fun.
Another annoyance can be getting into existing projects, where maintainers don't want to learn or invest in anything new, and claim that Makefiles and copying dependencies into subdirectories are just fine.
There's a huge difference between a state of the art modern C++ project and the typical things one encounters.
I fully agree; however, I would say many existing projects are going to benefit more from incrementally adopting modern C++ practices than from rebooting into Rust.
It is great that the team has achieved the Rust 2018 milestone, and there are already a few key companies adopting it, but it needs to catch up with 30 years of tooling, e.g. Qt, C++ Builder, CUDA, ...
I'm very interested in Rust on embedded devices. I've recently ported my cryptographic library (www.embeddedDisco.com) to C, thinking it was the only way to support these many tiny systems, but it looks like that might become less and less true. I'm still wondering, though: how are young developers using something like the Arduino IDE supposed to figure out how to use a Rust library?
Arduino has the benefit of several years of refinement and an all-in-one, multi-target library. Embedded Rust is still in its infancy, though it's absolutely usable and (imo) practical for experienced users. I think as the embedded abstractions get polished and there is more convergence of HALs, something like an all-inclusive solution may come about. As it stands, it's very easy to combine a chip HAL and device drivers, but you have to know what building blocks to grab (if they're there - there's lots to be done!).
This is what most interests me about Rust: using it to replace the C code on our embedded devices. Nice to see that this is a first-class use case now and that the language will be maintained to ensure it is supported.
I might have to try and do a proof of concept with one of our devices now :)
To add on to what jamesmunns has already said, we're an excited group, us embedded Rust users. I recently built one of our in-house products using Rust as a proof-of-concept, and the process convinced me that Rust is the future of embedded devices. Feel free to reach out to me if you have any questions. I'd love to keep working with embedded Rust!
Probably none, I'd guess. The embedded space really loves C, simply because it's viewed (usually wrongly) as more deterministic than C++, but there are a lot of instances where C is not a good choice (when you no-shit need objects), so people come up with abstractions for common things (terrifying C patterns). Rust has a lot of features that C lacks, so we probably won't need to do any abstraction for now.
Why do you say none, then immediately list a few advantages of Rust? To build on your points:
- it might be an easier language to target with a compiler, because it's a higher-level intermediate language
- there would be fewer undefined behavior shenanigans
- successfully writing a compiler that targets Rust without using unsafe (too much) would mean getting (some of) the safety benefits as a side-effect. Could especially be relevant for languages aiming to introduce zero-overhead safety to the embedded environment themselves
- regarding previous point: discovering bugs in the process of porting to Rust
- because Rust plays relatively nicely with C and C++, one wouldn't have to lose existing libraries along the way
I'm saying "None" in response to needing compilers that compile to Rust, not to Rust itself. To boil my argument down: we don't need to compile to Rust instead of C, because simply replacing C with Rust should be sufficient. People made "compiles to C" languages because C was not capable of doing everything they wanted easily.
Not only that, they also love to deploy with optimizations turned off to avoid any compiler magic, so you end up with some convoluted hand written code, which could otherwise be written by the compiler itself.
FYI, if you're on MacOS at least, `rustfmt` looks like it's still under the `rustfmt-preview` name in `rustup components list`. Running `rustup component add rustfmt` (as in the official announcement post) gives an error:
> rustup component add rustfmt
error: toolchain 'stable-x86_64-apple-darwin' does not contain component 'rustfmt' for target 'x86_64-apple-darwin'
Apparently you need to `rustup self update` before installing this version, and it will work. So if you've installed it, uninstall it, then update rustup, then reinstall it. Sorry about that! There was a miscommunication...
$ rustup component add rustfmt
error: toolchain 'stable-x86_64-pc-windows-msvc' does not contain component 'rustfmt' for target 'x86_64-pc-windows-msvc'
I looked for how to uninstall rust, and found that "the book" says that the command 'rustup self uninstall' will uninstall both rust and rustup. So I figured, what the heck, I'll start from scratch. I ran this command, uninstalling everything. Then I re-downloaded rustup-init.exe, and then ran it, reinstalling rust. I even then did 'rustup self update' and 'rustup update' for good measure.
I still get the same error when trying to install rustfmt. What else is there to do if completely uninstalling and reinstalling doesn't work?
Incredibly excited for those lifetime improvements. Rust is cool and everything, but (at least for someone who comes from a memory managed runtime background) lifetimes are a hard pill to swallow. Anything that makes them go down easier is gladly received.
One small note on WASM, I really hope that wasm32-unknown-emscripten doesn't become an orphaned target. I've got quite a few WASM/ASM.js projects that have C dependencies which work great with WASM. Unfortunately wasm-bindgen and the like appear to require wasm32-unknown-unknown and since the cc crate doesn't work with wasm32-unknown-unknown I can't take advantage of them.
Totally love the wasm32-unknown-unknown target from an easy-bootstrapping standpoint, but it feels like that and the traditional target are starting to diverge pretty significantly (you can't do a cdylib crate on wasm32-unknown-emscripten, for instance).
Are there any plans in the works to bring the cc crate over to wasm32-unknown-unknown? Most of the impressive stuff I've been able to do with WASM has come from pairing existing C libraries with Rust in the right ways, and it would be unfortunate to lose access to that whole ecosystem.
In terms of web-assembly, Rust really is the "market leader". C/C++ come with a lot of baggage, and the distinct disadvantage of not having a package manager, and everything else has more work to do due to having to implement a GC. Additionally, Rust has made wasm a focus, and now has bindings to most of the web APIs. And even high-level libraries that generate DOM.
For doing anything with WASM, Rust is (as of this moment) far and away the best experience. The Rust devs have had their eye on WASM support since long before WASM stabilized, giving them a first-mover advantage, and they benefit from the fact that Rust's model is so C-like that it's relatively little work to make it work with what WASM expects.
We’re seeing two ways so far: the first is WebAssembly. Our tooling does not assume that you’re re-writing everything in wasm, but that you’re using it alongside JS, so you can use it only where it makes sense, and it’s easy to do so.
Second, in services. Microservice architecture means you can write parts of your applications in different languages, and we have really nice JSON handling, or whatever other format you’re using. People have moved high load services to rust and seen big gains.
Of course, there’s still lots of work to do, and there’s always compelling reasons to use other technologies too.
>we have really nice JSON handling, or whatever other format you’re using.
Good to know. Data formats and conversion between them (often, though not solely, in the context of CLI apps) is one of my interests and areas of experience, so will check out those aspects of Rust over time.
>People have moved high load services to rust and seen big gains.
I was kind of disappointed with Rust 2018. While it technically maintains the guarantee of backwards compatibility I don't feel that it meets it in spirit.
The prime example is that code for working with modules that works fine on 2015 throws an error in 2018. This means that I now have to know two languages that are basically the same but have important differences. This is a real issue because I'm sure that some crates will jump to 2018 while others want to continue compiling on old compilers.
My understanding is that, since the edition is a per-crate property, crates that don't jump to 2018 aren't any worse off, and will continue to compile on both old and new compilers. I'd also be interested to know whether `cargo fix` covers the module changes for you.
(EDIT: I now realize you're talking about crates that will continue development on Rust 2015 specifically because they want to target older compilers, which is a different concern from what I was addressing. But now I'm not sure what the concern actually is. Even if you stay on Rust 2015, what reason is there to not upgrade your compiler? Both editions remain supported in 1.31 and will be for...ever, as far as I know.)
`cargo fix` probably covers it, but `cargo new` defaults to Rust 2018, which was kind of annoying when I was getting errors on code from another project that worked fine there.
There isn't a reason not to upgrade the compiler. My concern is that if I stay with Rust 2015 nothing changes for me, but if I need to fix a bug in someone's library there is a good chance they'll be on Rust 2018 (or vice versa). My concern is with the mental aspect of needing to remember the differences, not the technical differences.
It's possible that there isn't much to remember. Admittedly, I haven't yet looked at what the module changes are; but it still bothers me that such basic code no longer works.
That's a great point! Helping devs get up to speed on the changes is definitely a significant concern for editions, which is, I'm sure, why they put so much effort into creating a dedicated edition guide: https://doc.rust-lang.org/edition-guide/
Sure, the Rust team is really good about documentation. I think that I worded things poorly, though. One of the reasons that I started using Rust is that the Rust team said that once they hit 1.0 they would not make breaking changes (unless there was a soundness bug). While Rust 2018 technically keeps this promise, since Rust 2015 will continue to compile and interoperate, I do not feel that it keeps the spirit of this promise, which is frustrating to me.
To be clear, rust 2015 code compiles just fine on new compilers, and will indefinitely. That’s a key part of it! Nothing should change with regards to these crates; they already weren’t using new features anyway, in order to maintain that compatibility.
Sure, I understand that. But if I've only ever used Rust 2018 and I go to submit a PR to the regex crate, I now need to be aware of any differences between the two versions.
It might not be a big deal, but a few weeks ago I started a new project that was going to reuse code from an existing project. The new project defaulted to 2018 (nightly compiler) and I got a bunch of errors with my module imports. It was just frustrating because it felt like it was a breaking change and one of the reasons I started using Rust is the promise of no breaking changes.
You can use a 2018 crate from a 2015 crate (and vice versa), but you can't "expose Rust 2015" or something. There are raw identifiers for anything that would be incompatible. That would require a new rustc, of course.
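For illustration, a raw identifier lets 2018 code refer to an item whose name has since become a keyword (the function here is contrived, not from any real crate):

```rust
// `async` is a keyword in Rust 2018, but a 2015 crate may export an
// item with that name; `r#async` refers to it unambiguously.
fn r#async() -> i32 {
    7
}

fn main() {
    assert_eq!(r#async(), 7);
}
```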
I'm... not sure you replied to the right comment here? What I'm talking about is writing 2018-style paths (starting with `crate::`, etc) in a 2015-edition crate, so you don't have to switch between two path styles when you switch between crates.
The Rust "edition" flag is set on a per-library ("crate") basis, not per source file. It basically just controls a few small differences at the parser level.
I'm really happy about the ability to mix older Rust 2015 and newer Rust 2018 code in the same project, and the tools for automatically migrating code to the 2018 edition. It took us like 3 hours to migrate a large code base at work, and I think I needed to file a bug report about exactly one upstream library. Everything else was seamless. And we can still use all our Rust 2015 dependencies.
With the 2018 edition, Rust has gone further down the path of tightly integrating the build system into the language. Before, you had to declare the crates you were using with `extern crate` in your main package. Now, the only place those are declared is the Cargo.toml manifest. The language edition is chosen in the same place.
For instance, previously I could `rustc src/main.rs -L deps` and assuming that the dependencies had all been built and had their rlibs put in deps things would just work. Now that is no longer possible, I have to manually specify every external dependency on the command line.
`extern crate` hasn't been made a syntax error, it's just now possible to elide it. There are plenty of use cases where I imagine people will still use it, not least of which is play.rust-lang.org, which has no direct access to Cargo or rustc, and `extern crate` is what one will continue to use to import third-party libraries there.
Still, we want more things to be boldly and shamelessly borrowed from ML and Haskell:
* Universal pattern matching (everywhere)
* Obvious syntactic sugar for defining and applying curried functions, and partial application
* Type classes (as the one single major innovation in PL)
* receive (or select) with pattern matching (Erlang, protobuf)
* First-class typed channels (Go)
* Multimethods (CLOS)
Rust doesn't currently have higher-kinded types, which are a separate feature from typeclasses. Implementing a true Functor trait is impossible without HKT, though the lack of it and other FP abstractions generally speaking hasn't been an issue.
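A rough sketch of the gap: the trait one would want is not expressible without HKT, while the concrete per-type `map` methods cover most practical needs:

```rust
// Not valid Rust today -- a Functor trait would need to abstract over
// the type *constructor* F, which requires higher-kinded types:
//
//     trait Functor<F<_>> {
//         fn fmap<A, B>(fa: F<A>, f: impl Fn(A) -> B) -> F<B>;
//     }
//
// In practice, each container provides its own monomorphic mapping:
fn main() {
    let doubled: Vec<i32> = vec![1, 2, 3].into_iter().map(|x| x * 2).collect();
    assert_eq!(doubled, vec![2, 4, 6]);

    let bumped = Some(5).map(|x| x + 1);
    assert_eq!(bumped, Some(6));
}
```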
My understanding from reading the article is that they have broken backwards compatibility; things using the new keywords that were introduced (e.g., "await") will no longer compile.
I think part of the confusion here is that there are essentially two things: there's the compiler, and there is the language itself. It appears (from the article) the compiler can presently support compiling either "edition 2015" or "edition 2018" of the language. (So, it's not the compiler's major version we're bumping; rather, it's the language's.)
That flag which says which "edition" to use could just as easily specify which SemVer major version to use. So, instead of saying "use Rust edition 2015" or "edition 2018" it would just be "Rust (the language) v1" or "Rust (the language) v2". There's no difference aside from the naming.
The year doesn't particularly add anything, and obscures if we're jumping over multiple "barriers" of breaking changes or not. I'm not actually sure that that really matters.
My only thought is that having completely different-looking things might be better, in that it visually distinguishes the two. I've repeatedly seen devs muddle the difference between the language and an implementation of the language, and between what the language guarantees and what the implementation happens to do.
> things using the new keywords that were introduced (e.g., "await") will no longer compile
That's not backwards compatibility, it's forwards compatibility. Whenever any language adds a new feature, using that feature breaks the build on old compilers. Setting `edition=2018` is no different. But the important thing is that old code continues to build unmodified on new compilers. That's what the edition system is preserving.
Forwards compatibility would be old code compiling in the newer "language", and I mean this from the grammar perspective: that the changes to the grammar are such that all things that were previously valid in the language remain as valid, and their meaning does not change.
This is not the case.
That the compiler has the user opt in to the newer version by setting a flag is irrelevant for the discussion of versioning. (Now, for an end user, I think it's great: you do not want to automatically break working code. I'm not saying the flag shouldn't exist; I'm saying the flag exists because the change is breaking.)
The flag's mere existence is proof that the change is not backwards compatible. If it were, you wouldn't need the flag.
> All existing code compiles and will continue to compile with no changes into the future.
My point, again, is that it won't compile without changes in the new edition. "Edition" is the word that was chosen (and it's not necessarily a bad word), but what it is is a version for the language. Whenever a backwards incompatible change (or set of changes) is going to be made to the language, that's a new "edition", but you could just say "version" and it would be as accurate.
Again, consider the language and compiler as separate entities, each with their own API, and I think it becomes clearer. Changes have been made to the language (e.g., the addition of new keywords) that render strings previously valid in the language now invalid, or valid with a changed meaning. In any system, this is a breaking change. This change could, were SemVer used, be identified with a major version bump.¹
That the compiler (which itself has a version number, separate from the language now) is capable of recognizing a flag and switching internally which version of the grammar it uses is great, but again, proof that there exists a version of the grammar (called "edition").
¹But honestly, that ship has sailed for Rust. And that's fine, and I think it's just a different term, and that it has a good chance of helping better convey the meaning by simply being different. But the argument that, under that, it's essentially SemVer, is nonetheless true.
(That at present the compiler and the language are very much intermixed makes this much more muddled. Were the language's definition more formal, I think this would all get much clearer, as we could ignore rustc, and talk about the language.)
It's true that C allows manually setting an enum to an integral value other than those explicitly specified, but no project that I've ever seen regards such an action as anything other than a bug. Can you give an example of code that's using such a thing, especially for FFI?
Of course, we can use i32 directly, but when a project has hundreds of enums, when enums are constantly updated, and when different crates use different, incompatible approaches to represent repr(C) enums, it hurts. The Rust compiler can pack Option<NonZeroU8> into a single byte, so it could do the same for repr(C) enums with e.g. an `Other(2..255:u8),` variant (see my proposition above). Technically, they are not different; they just have more specific cases.
For the first one, which I see you've made a comment on:
> This behaviour affects Prost: the i32 type is used instead of the enum type, because Protobuf defines that an enum variable must be able to hold values outside of the enum range to be compatible with future versions. As an alternative solution, it's proposed to use an _Unknown_(i32) variant, but this solution cannot be implemented in the current version of Rust, because C-like enums cannot contain tags.
In principle, a protobufs library (like Prost) could map a protobuf enum `X` into a Rust enum `ProtobufEnum<X>` with `Known(X)` and `Unknown(i32)` fields. Or autogenerate a Rust enum with the known variants inlined, plus the extra Unknown field.
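A minimal sketch of that wrapper idea (type and variant names are hypothetical, not Prost's actual API):

```rust
// A protobuf enum must round-trip values outside its known range.
// Wrapping the Rust enum preserves that open-endedness safely.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Status {
    Ok = 0,
    Error = 1,
}

#[derive(Debug, PartialEq)]
enum ProtoEnum {
    Known(Status),
    Unknown(i32), // any value a future schema version might send
}

impl ProtoEnum {
    fn from_i32(v: i32) -> Self {
        match v {
            0 => ProtoEnum::Known(Status::Ok),
            1 => ProtoEnum::Known(Status::Error),
            other => ProtoEnum::Unknown(other),
        }
    }
}

fn main() {
    assert_eq!(ProtoEnum::from_i32(1), ProtoEnum::Known(Status::Error));
    assert_eq!(ProtoEnum::from_i32(99), ProtoEnum::Unknown(99));
}
```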
It is incumbent on the library author and/or FFI user to map semantic notions appropriately across the interface. Rust enums do not have C semantics, even if the underlying machine representation is annotated to mimic C. If something across the interface allows its flavor of enum to take on unnamed values, then that semantics needs to be preserved faithfully in Rust -- and that may mean using something slightly different from what Rust itself natively provides.
In the vast majority of cases, C enums are used in the same way as Rust enums, i.e. only the named enum variants are supported by any code that touches enum values (and if you're lucky, it might check for other values and reject them explicitly, but more often people forget and things just don't work, or silently misbehave). So in practice this behavior is correct.
For those very rare cases where someone used open-ended enums in their API, ascribing some meaning to unnamed values that must be exposed, you can always use i32 etc.
Crazy to think how far Rust has come since 2013. Very excited for the electricity update coming later today -- I've already seen a few videos of people implementing basic computer systems. I imagine there won't be many people falling for bases with open doors this wipe... Still frustrating to see basic optimizations in the pipeline. Many users have been suffering from microstutters for years. Regardless, there is nothing else out there like Rust, and for that I wish the Facepunch team a happy 5th birthday!