Weird architectures weren't supported to begin with

(blog.yossarian.net)

353 points | by woodruffw 1152 days ago

29 comments

  • macksd 1152 days ago
    This is definitely the most reasonable case I've heard made from this argument, and it's changed my stance on the issue.

    That said, I feel the cryptography project handled this poorly. I encountered the issue when one day, installing a Python Ansible module the same way I did the day before required a compiler for a different language that I've never used before and that is hardly used, if at all, in the rest of my stack. The Ansible project seemed to be taken by surprise, and eventually I found the root upstream issue, where the initial attitude was "too bad". Some people making those comments worked for Red Hat / IBM, Ansible is heavily backed by that. What company cares more about Linux on S390 than Red Hat / IBM? I would suggest none. So the fact that they had this attitude and community reaction to me suggests the problem is not one of expecting free work for a corporation on a non-free architecture. It was IMHO a combination of a lack of forethought, communication, and yes, perhaps a change that is overdue and will just be a little painful. The suggestion to maintain an LTS release while people figure out how to proceed is the right move.

    • steveklabnik 1152 days ago
      > Some people making those comments worked for Red Hat / IBM, Ansible is heavily backed by that.

      Okay, so, I don't mean to pick on you here, but I've seen this sentiment cropping up a few times. Not everyone can be familiar with everything, but I would caution you against immediately assuming corporate politics are at the root here.

      Anyone familiar with Alex Gaynor and his work over the last few years would know that he cares about Python, and memory safety. That has remained consistent regardless of his employer. Immediately assuming that this has something to do with company politics, rather than just a tireless open source contributor working to improve the things that he cares about, for years, is making a bit of a category error, in my opinion.

      • macksd 1152 days ago
        I agree - I don't feel picked on :D I guess my point is that it's overly dismissive to say "no free work for companies on niche architectures" when projects backed by the same company were a bit blind-sided by this and a bunch of people lost time chasing down what happened and why. To me that's a sign that this isn't just a mismatch of ideology or someone wanting free work: it was a failure in communication and mismatched expectations. If it had just been on a major version bump, I wonder if we'd even be having this discussion.
        • steveklabnik 1152 days ago
          Glad to hear it :)

          I think it gets really hard the larger the company you talk about. Before I worked at bigger places, I assumed a lot more coherence than is actually the case.

        • rodgerd 1152 days ago
          Sure, but the message is that if it's a problem for Red Hat's Ansible team or Red Hat's mainframe Linux team, they should be doing the work needed to make it not a problem.

          Ansible Tower subscriptions aren't exactly cheap, and neither is RHEL s390x. If there isn't the fat to uplift the core infrastructure needed to run RHEL or Ansible on Red Hat's products, that's most likely a choice.

      • stonogo 1152 days ago
        I think corporate politics are at the root here, even if they're not Alex Gaynor's corporate politics. This blog post amounts to blessing amd64 and aarch64 as the only sustainable instruction set architectures.

        Rust is currently the apex of a tall stack of hundreds of millions of lines of code, once you account for LLVM etc. Using it as the basis for other software means only processors with sufficient market penetration are 'worthy' candidates. In the long run, if Rust is as successful as lots of us hope it will be, this will kill what innovation is left in the hardware space. If a company is sufficiently motivated to care, it will almost certainly be cheaper to fork the code back to using C than it would be to forklift LLVM to a new architecture.

        Is anyone aware of a direct Rust compiler project? Even discounting GCC, Go was bootstrapped from relatively simple (and naive) C compilers until it became self-hosting. I think a basic non-optimizing Rust compiler would go a long way toward leaving the door open for onboarding older -- and more importantly, novel -- architectures into the ecosystem.

        • steveklabnik 1152 days ago
          I didn’t compare HEADs because I’m on my phone, but I found reports that in 2019, LLVM was 7 million LOC and GCC was 15 million. Both are compiler projects written in C++. Why the double standard?

          I also don’t know why re-writing an entire project and creating a compiler for it is somehow easier than writing an LLVM backend.

          • stonogo 1152 days ago
            What double standard? I wouldn't recommend a language tie itself to GCC any more than I would recommend LLVM. The point is that the Rust compiler catalog is unhealthily small, and that introduces problems like this one.

            To your other point, writing an LLVM backend is one thing. Getting it upstreamed is another, and maintaining it is another still. Then you have to navigate the politics of two foundations, both of whose boards of directors are basically the Who's Who of competing interests. I've watched more than one project fail to navigate those waters.

            Anyway, the cryptography package going from Python + C to Python + C + C++ + Rust is a Cambrian explosion of build-time complexity, and in my work we found it simpler to just get rid of the Python and the cryptography package, so it's mostly academic to me.

            • steveklabnik 1151 days ago
              > I wouldn't recommend a language tie itself to GCC

              Okay! I misunderstood you then, sorry. I think one of the hardest parts about this conversation is that there are so many different people with various, but overlapping, opinions. A lot of folks do think this way, and I thought that's what you were saying. My bad.

              > Getting it upstreamed is another

              You do not need it to be upstreamed in order to build Rust; we build with our own modified LLVM by default, so using it is quite easy.

          • flohofwoe 1152 days ago
            The question has probably been asked a thousand times before, but wouldn't all those bootstrapping problems for niche platforms be solved if Rust had a C backend? Are there technical issues which would prevent compiling LLVM bitcode to a C blob, similar to what wasm2c does for WASM bytecode?
            • steveklabnik 1151 days ago
              I think, like majewsky says, it is a bit more complicated than it may initially seem. However, even if we assume that it is trivial, there are other problems. Sure, maybe it would. But who is going to do that work? We're an open source project. Effort is not fungible. On some level, we can only get stuff done when there's a sufficient need for it, and while there have been some folks talking about this in the last week or so, historically it just hasn't been a massive issue. If it is a massive problem for someone, they should solve it! The Rust project's stance has been open to new platforms, and will continue to be so. But we need experts in those areas to help us help them.
            • majewsky 1151 days ago
              Having a C backend does not solve the hard issues. Because of undefined behavior in the C specification, sometimes there just is no way to write down a particular expression in a portable manner in C without treading through undefined territory. This may not be as big of a deal for boring application code, but we're talking about cryptographic code here, which needs to work hard to avoid memory corruption, integer overflows, timing side channels, etc.
              • flohofwoe 1151 days ago
                I'd imagine the generated code could be hardened similarly to what clang's ASAN, UBSAN and TSAN produce, and the generator could avoid emitting code that depends on undefined behaviour in the first place. Or you could do a little detour through WASM:

                https://kripken.github.io/talks/2020/universal.html#/

                In any case, that's better than shrugging off esoteric platforms, IMHO.

        • xet7 1151 days ago
          Maybe this alternative Rust compiler?

          https://github.com/thepowersgang/mrustc

    • pornel 1152 days ago
      The switch hasn't been sudden. It's just that many levels of disconnect between the project authors and downstream users meant it was basically impossible to communicate to all the affected users — nobody looks at their deps-of-deps until they break.

      And they've released Rust as an optional component you can disable, precisely because nobody paid attention until it actually shipped.

      • masklinn 1152 days ago
        > The switch hasn't been sudden. It's just that many levels of disconnect between the project authors and downstream users meant it was basically impossible to communicate to all the affected users — nobody looks at their deps-of-deps until they break.

        Exactly, from the original shitstorm issue:

        > Rust bindings kicked off in July 2020 with #5357. Alex started a thread on the cryptography developer mailing list in December to get feedback from packagers and users. The FAQ also contains instructions on how you can disable Rust bindings.

        > Do you have constructive suggestions how to communicate changes additionally to project mailing lists, Github issue tracker, project IRC channel, documentation (changelog, FAQ), and Twitter channels?

        At some point there's sadly not much the project can do and still make progress.

        • its-summertime 1152 days ago
          > > Do you have constructive suggestions how to communicate changes

            That one kinda got me, since Python intentionally has a runtime developer-to-developer communication system, so to speak:

          https://docs.python.org/3/library/warnings.html
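
            For instance, a minimal sketch of an import-time notice using that module (wording and placement hypothetical):

                import warnings

                # FutureWarning (unlike DeprecationWarning) is displayed by default
                # even when raised from library code, so downstream users would
                # actually see it.
                warnings.warn(
                    "Building this package from source will require a Rust "
                    "toolchain in a future release; see the FAQ for how to opt out.",
                    FutureWarning,
                    stacklevel=2,
                )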

          • ploxiln 1152 days ago
              Warnings are a funny thing. Lots of projects like to turn all warnings into failing errors in CI, sometimes even at package build time; they think it's a best practice ... but it means that nobody else can use warnings to communicate things, or else everything breaks, nullifying the utility of warnings.
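
              In Python terms the pattern is effectively this (more often done via -W flags or pytest configuration than in code, but the result is the same):

                  import warnings

                  # Common CI habit: promote every warning to an exception so the
                  # build fails fast. Once this is widespread, a library can no
                  # longer use warnings to announce upcoming changes without
                  # immediately breaking all of those builds.
                  warnings.simplefilter("error")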

            see also https://lwn.net/Articles/740804/

            • saagarjha 1152 days ago
              The easy answer to this is "warnings should be warnings" but the hard question is "how do we get people to stop treating them as errors?"
              • eyelidlessness 1152 days ago
                The easy answer is to stop treating errors as warnings! I’m working on a project where for the first time in my career I’ve made my environment/workflow/tooling less shouty about problems.

                I’m writing for my blog, so I installed a spell check extension. But its dictionary stinks. So I just turn off its warnings before a final post pass.

                Most of the time when I see yellow in my editor it’s stuff I would expect to be red. Even from my primary language (TypeScript) which officially doesn’t even have warnings.

                More things should be errors and treated as such! And more things that legitimately qualify as warnings should error in final checks to ensure they were addressed somehow, even with just an explicit dismissal.

      • brundolf 1152 days ago
        "Sudden" is relative to the rate of propagation. Maybe we can say that the difficulty of communicating to all stakeholders of a package is egregious, and even endemic to the ecosystem as a whole, but it still sounds like a better communication effort could have been made, even if doing it perfectly is impossible
    • ploxiln 1152 days ago
      There was advocacy and pressure to adopt pyca/cryptography in other popular python packages. Don't roll your own crypto, don't use old unmaintained crypto libraries (pycrypto ... but these days there's pycryptodome) etc. If pyca/cryptography was instead advertised as "python crypto library for rust lovers" and other python developers knew that a _gigantic_ and _obnoxious_ rust toolchain dependency was involved, I think many would have avoided cryptography, or made it optional. This was a bait-and-switch.

      This is obnoxious for people who want to start with a tiny standard distro image with just a C toolchain, and build their app and all dependencies from source. It is also obnoxious for distros that want to build all packages from source. rust depends on llvm, itself a huge complicated finicky package ... but not just that, it depends on a custom-patched llvm! ...but not just that, it also depends on an extremely recent version of rust! this problem is recursive! And there are many other niche cases ... and "niche cases" of "unauthorized" porting/modification/substitution is how much of the open source ecosystem we love got its start.

      This is really about issues of paranoia and control in the "security" realm. They consider it irresponsible to enable users to do anything they can't guarantee "safe". "Your organization should be paying for a $$$ corporate support contract if you want to do anything we don't officially support." Maybe you could have fixed a portability bug in some python or C code (I've fixed a couple when bringing up a little-endian mips64 system a decade ago), but no, you'll need rust+llvm experts to port that whole crazy toolchain. It's for your own good.

      • tsimionescu 1152 days ago
        What you're saying makes sense in any other domain except cryptographic primitives. With those, the basic correctness of the code depends on details of the compiler and processor architecture, and it is extremely likely that the code will be fundamentally incorrect when used on other architectures or compilers.

        And this is not just a matter of fixing some little-endian assumption; it's a matter of understanding micro-optimizations, instruction reordering, prefetch behavior, branch prediction, etc. Ensuring code has predictable runtime in the presence of an optimizing compiler and an out-of-order processor is extremely difficult.

        It's probably just as smart to roll your own crypto instead of trying to use a known library on a new platform, because the hardest parts of rolling your own will have to be done anyway.

      • tptacek 1152 days ago
        Here you seem to be demanding that the authors and maintainers of their own cryptography library lighten up about security, because you find the way they manage their own package to be controlling. You see some irony in that, right?
        • will4274 1152 days ago
          I don't read /u/ploxiln as demanding anything. They seem frustrated with the changing dependency profile of the dependency, and a little frustrated with the way some people like to shout "security" as if it's a trump card, ignoring that security is a spectrum and that the only perfectly secure software does nothing (perfectly).
          • tptacek 1151 days ago
            This is a security library. In fact, it's a security library that was created specifically to harden and user-proof less secure alternatives. If you don't care as much as they do about security, use a different library (or just keep using the pre-Rust version of this one).
      • aw1621107 1152 days ago
        > If pyca/cryptography was instead advertised as "python crypto library for rust lovers"

        I feel like this really only makes sense if pyca/cryptography had planned on adding the Rust dependency from the very beginning (or from very early on). Is there any indication that was the case?

        > but not just that, it depends on a custom-patched llvm!

        This doesn't seem to be true [0, 1].

        [0]: https://rustc-dev-guide.rust-lang.org/backend/updating-llvm....

        [1]: https://news.ycombinator.com/item?id=26217182

        • ploxiln 1152 days ago
          >> Strongly prefer to upstream all patches to LLVM before including them in rustc.

          > That is, this is already the case. We don't like maintaining a fork. We try to upstream as much as we can.

          > But, at the same time, even when you do this, it takes tons of time. A contributor was talking about exactly this on Twitter earlier today, and estimated that, even if the patch was written and accepted as fast as possible, it would still take roughly a year for that patch to make it into the release used by Rust. This is the opposite side of the whole "move slow and only use old versions of things" tradeoff: that would take even longer to get into, say, the LLVM in Debian stable, as suggested in another comment chain.

          > So our approach is, upstream the patches, keep them in our fork, and then remove them when they inevitably make it back downstream.

          ... so there are always some outstanding patches rust applies to the llvm codebase.

          • aw1621107 1152 days ago
            For Rust's LLVM fork, sure. But as Steve Klabnik noted in the first comment I linked, unmodified LLVM is supported.

            In addition, later down in the comment chain:

            cycloptic:

            > Can't there be a build option to not use the LLVM submodule, and instead use the system LLVM?

            steveklabnik:

            > There is. We even test that it builds as part of our CI, to make sure it works, IIRC.

            For a more concrete example, Fedora supports choosing between system and bundled LLVM when building Rust [0, 1].

            [0]: https://news.ycombinator.com/item?id=26222190

            [1]: https://src.fedoraproject.org/rpms/rust//blob/rawhide/f/rust...

        • zimmerfrei 1152 days ago
          > I feel like this really only makes sense if pyca/cryptography had planned on adding the Rust dependency from the very beginning (or from very early on). Is there any indication that was the case?

          I am sure this idea surfaced several times in IRC or possibly in the mailing lists. Certainly, the authors have been toying with handling ASN.1 in rust since 2015 [1], which I guess will be the next logical step.

          I do agree that this is mostly a political stance. pyca/cryptography is a wrapper sandwiched between a gigantic runtime written in C (CPython/PyPy) and a gigantic library written in C (openssl).

          The addition of Rust as a dependency enables the inclusion of just 90 lines of Rust [2], where the only part that really couldn't be implemented in pure Python is a line copied from OpenSSL [3] (i.e. it was already available), and which is purely algebraic, therefore not mitigating any real memory issue at all (the reason to use Rust in the first place).

          The change in this wrapper (pyca/cryptography) does not move the needle of security in any significant way, and it is really only meant to send the signal that adding Rust in all other Python packages and especially in the runtime itself will now come at no (political) cost.

          [1] https://github.com/alex/rust-asn1

          [2] https://github.com/pyca/cryptography/blob/main/src/rust/src/...

          [3] https://github.com/openssl/openssl/blob/OpenSSL_1_1_1i/inclu...

          • steveklabnik 1151 days ago
            > The addition of Rust as a dependency enables the inclusion of just 90 lines of Rust

            My understanding is that this is just the beginning, and the whole reason it's only a small amount is precisely to do it in small steps, correctly, rather than re-writing the entire world in one go.

            • zimmerfrei 1151 days ago
              And why should they re-write the entire world? It's just a wrapper library around openssl, and it was marketed heavily to the community (at the beginning) as one where maintainers would follow good practices and not try to write security-sensitive code themselves, as they are not security experts, but just rely on openssl so that all focus goes into the same place.

              So either that's still valid (and not too much of rust will come in to make a real difference security-wise) or they have revisited their position and will re-implement a bunch of openssl logic (in rust apparently though, and not in Python as it would be more logical, and as golang does successfully). And in the latter case, why not just focus on wrapping rustls instead?

              In either case, the hand is being forced: it's a small amount, and I don't see how it would have been an excessive burden to maintain such a small piece of logic as an opt-in for a period. It makes much more sense to read this as a move to forcefully push rust into the python ecosystem.

              • aw1621107 1151 days ago
                > So either that's still valid (and not too much of rust will come in to make a real difference security-wise) or they have revisited their position and will re-implement a bunch of openssl logic (in rust apparently though

                Looks like it's the former. From the initial (?) GitHub issue discussing the move to Rust [0]:

                > We do not plan to move any of the crypto under the scope of FIPS-140 from OpenSSL to cryptography. We do expect to move a) our own code that's written in C (e.g. unpadding), b) ASN.1 parsing. Neither of those are in scope for FIPS-140.

                > and not in Python as it would be more logical

                Is Python actually suitable for cryptographic code, especially if constant-time operations are needed?

                > I don't see how it would have been an excessive burden to maintain such a small piece of logic as an opt-in for a period.

                And then when said logic stops being opt-in, why wouldn't the same problem arise?

                [0]: https://github.com/pyca/cryptography/issues/5381#issuecommen...

                • zimmerfrei 1151 days ago
                  > Is Python actually suitable for cryptographic code, especially if constant-time operations are needed?

                  No, but as seen in the first chunk of rust code added, the amount of logic that needs to be constant-time is a) very, very limited, to a few primitives only, and b) algebraic in nature (put differently, it's not where memory bugs will pop up, so using rust over C doesn't even buy you much).
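
                  To illustrate the point about primitives, a minimal sketch (tag_a and tag_b standing in for two MAC values; note that the constant-time comparison the stdlib offers is itself implemented in C):

                      import hmac

                      tag_a = bytes.fromhex("8f3a2b1c")  # hypothetical MAC values
                      tag_b = bytes.fromhex("8f3a2b1d")

                      # Constant-time comparison: the stdlib exposes one, backed by C.
                      ok = hmac.compare_digest(tag_a, tag_b)

                      # A naive comparison short-circuits at the first differing byte,
                      # leaking timing information about how much of the tag matched.
                      ok = tag_a == tag_b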

                  For instance, ASN.1 parsing doesn't need to be done in constant time in the vast majority of cases (and in all cases I am aware of).

                  > And then when said logic stops being opt-in, why wouldn't the same problem arise?

                  Some friction will certainly remain, but it will be nothing compared to the current breakage.

                  • aw1621107 1151 days ago
                    > No, but as seen in the first chunk of rust code added, the amount of logic that needs to be constant-time is a) very, very limited, to a few primitives only, and b) algebraic in nature (put differently, it's not where memory bugs will pop up, so using rust over C doesn't even buy you much).

                    That's fair, though if some other part of the codebase is ported to Rust anyway, sticking to C doesn't save you, unfortunately.

                    > For instance, ASN.1 parsing doesn't need to be done in constant time in the vast majority of cases (and in all cases I am aware of).

                    I'm curious why this has to be done in C/Rust. Performance?

                    > Some friction will certainly remain, but it will be nothing compared to the current breakage.

                    What would be different then compared to now that would reduce breakage to such an extent?

                    • zimmerfrei 1151 days ago
                      > That's fair, though if some other part of the codebase is ported to Rust anyway, sticking to C doesn't save you, unfortunately.

                      This should be turned around. There is no evidence that parts of the codebase really need to be ported, or that Rust makes a real difference for them.

                      > I'm curious why this has to be done in C/Rust. Performance?

                      ASN.1 is only used during handshakes and I/O of files (the chunk of rust added covers loading of PKCS#7 files which are typically quite small, and not typically dealt with in massive numbers). I doubt the performance hit would be so high. Also, the package wbond/asn1crypto has shown that doing ASN.1 in pure Python can be quite fast.
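
                      For reference, a minimal sketch of the pure-Python route with wbond/asn1crypto ("cert.der" being a hypothetical DER-encoded certificate file):

                          from asn1crypto import x509

                          # Parse a DER-encoded certificate entirely in Python: no
                          # compiler or extension toolchain needed at install time.
                          with open("cert.der", "rb") as f:
                              cert = x509.Certificate.load(f.read())
                          print(cert.subject.native)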

                      > What would be different then compared to now that would reduce breakage to such an extent?

                      The ability to try the feature out and still have a plan B if it doesn't work out, and the possibility of a smooth, long, relaxed upgrade plan with good warnings and more time to prepare. But to be honest, while writing this whole chain of comments, I have come to really doubt there was a real need to add rust into the mix (other than the political angle I already mentioned).

                      • aw1621107 1150 days ago
                        > This should be turned around. There is no evidence that parts of the codebase really need to be ported, or that Rust makes a real difference for them.

                        I was thinking that if the maintainers were planning on porting some other larger part of the codebase, then starting with something small/relatively inconsequential would be a good first step, and keeping it in C wouldn't provide much benefit once said other larger part were ported.

                        > Also, the package wbond/asn1crypto has shown that doing ASN.1 in pure Python can be quite fast.

                        Interesting! I'm curious why the maintainers didn't opt for that approach instead.

                        > The ability to try the feature out and still have a plan B if it doesn't work out

                        Would people have tried this feature out if it were made opt-in? It's clear that the initial announcements reached far fewer people than one might like, and I honestly have no idea how many people would see a build-time warning.

      • mlindner 1149 days ago
        > rust depends on llvm, itself a huge complicated finicky package

        gcc is also a "huge complicated finicky package"... Any decently sized compiler is.

    • zamalek 1152 days ago
      > installing a Python Ansible module

      It's funny that you mentioned having to install Rust to use some Python because, for us non-Python users, everything needs Python :). I don't use Python at all, but I need both versions installed.

      Just pointing out that many people are already wearing that shoe.

      Python is the Lingua Franca of getting shit done. Rust is seemingly becoming the Lingua Franca of [more] secure code.

    • bonzini 1152 days ago
      Red Hat doesn't care at all about Linux on 31-bit s390, and neither does IBM as far as I know (except possibly the consulting group, which is interested in anything that makes them money).

      s390x has no problem with switching to Rust/LLVM. Red Hat and IBM both employ engineers working on LLVM, specifically for s390x in IBM's case.

    • brundolf 1152 days ago
      There's a phenomenon in lots of domains where "a change that is itself neutral or good can be bad if it happens too suddenly". I think that was the case here; we can say that this is a shift that reasonably can or should happen, but can still have caused needless disruption by catching a bunch of people off-guard and not giving them time to adapt to it.
    • laserharvest 1152 days ago
      Why is it necessarily Cryptography’s fault that Python Ansible was taken by surprise? Or any of the other affected parties along the chain? That Cryptography was starting to include Rust in the project was announced on the mailing list by Gaynor last summer. And that email said exactly what future (at that time) release would require a Rust toolchain if the project was to be built from source.

      The reply to that by the Gentoo guy (who started the Github issue) was that package maintainers cannot follow every single mailing list of every dependency. That is debatable (e.g. maybe you should make an exception for security applications), but let’s take that as a given for now. In that case, what is Cryptography to do? Where should they announce such things in a way that orgs like Gentoo will see it? And also notice it and not just mentally gloss over it as some kind of “spam”? If the Gentoo guy didn’t see it half a year ago, would he have seen an announcement (or the reminders) if it was made five years ago?

      • macksd 1152 days ago
        I think making a change like this warrants a major version bump. It won't eliminate all the surprise for everyone, and I do have sympathy for the people who did go out of their way to talk about this on the mailing list and still ended up surprising everyone. But it's common to pin yourself to a minor or maintenance release line to automatically pick up security fixes, etc. I expect breakage when changing the major (or even minor) version, and that's almost always a manual upgrade. And that's when I do read all the release notes, run tests, etc. before committing to the change.
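
        Concretely, the kind of pin I mean looks like this in a requirements.txt (version numbers illustrative):

            # Track the 3.3.x maintenance line for security fixes, but never
            # auto-upgrade into a new feature release like 3.4.
            cryptography>=3.3,<3.4
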
        • chrisoverzero 1152 days ago
          What is a “major version bump”? Before you answer, consider that the library doesn’t use semantic versioning. Before this all blew up, the versioning scheme was this[1]:

          > Given a version cryptography X.Y.Z,

          > - X.Y is a decimal number that is incremented for potentially-backwards-incompatible releases.

          >   - This increases like a standard decimal. In other words, 0.9 is the ninth release, and 1.0 is the tenth (not 0.10). The dividing decimal point can effectively be ignored.

          > - Z is an integer that is incremented for backward-compatible releases.

          The system has since changed, but it continues not to be semantic versioning. (It’s effectively the same, in fact, but protects against dependents who think it is semantic.)

          By that scheme, it was already a “major” (signifying potential backwards-incompatibility) release.

          [1]: https://cryptography.io/en/latest/api-stability.html#previou...

          • AaronFriel 1152 days ago
            This reads to me like an argument for semantic versioning, because otherwise I need to internalize the rules of every package and know that some will break compatibility on the Y, some on the Z, etc.
            • M2Ys4U 1152 days ago
              Even SemVer doesn't help here, as changing the build toolchain isn't generally considered an API breakage (after all, the resulting binaries are API compatible)
              • AaronFriel 1152 days ago
                Changing the build toolchain/requirements in such a significant way does seem like a major version break, as it could break downstream consumers attempting to install the package.

                Because the build toolchain is "visible" (that is, pip isn't just downloading a prebuilt binary every time), I think breaking changes that could cause CI systems or user installs to fail are part of the API contract. Think of what major distributions or software packages do when they want to deprecate support for certain platforms: those are major bumps that typically occur only when incrementing the most significant component of the version.

                Hypothetical: suppose the authors changed setup.py so that it only built on Red Hat Enterprise Linux(tm) version 6. Again, they could do that, and it wouldn't change the runtime API. And on all other distributions or installers, it would error.
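
                A sketch of that hypothetical guard (the platform check is illustrative only):

                    # setup.py: refuse to build anywhere except one blessed platform.
                    # The runtime API is untouched, yet every other consumer's
                    # install from source now fails.
                    import platform
                    import sys

                    if "el6" not in platform.release():
                        sys.exit("This package only builds on Red Hat Enterprise Linux 6.")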

                Would that be a major semver change? Of course it would be. The API contract has to include everything from packaging to use.

            • chrisoverzero 1152 days ago
              You know, I had that thought while writing it.

              I wouldn’t want to force any particular versioning scheme on any particular developer, but maybe the “SemVer façade” versioning scheme they switched to is the best compromise. It has defensive value, at least.

              Then again, PEP 440 has nothing to say about the semantics of versioning, only requiring:

                [N!]N(.N)*[{a|b|rc}N][.postN][.devN]
              
              PyPA themselves describe various expected versioning schemes, but list Semantic Versioning as preferred [1]. If I squint, I can fit `cryptography`’s previous scheme into “Hybrid”. The biggest lesson I take from this is that if your version scheme isn’t SemVer, work hard to make it look obviously different from SemVer.

              [1]: https://packaging.python.org/guides/distributing-packages-us...

              • detaro 1152 days ago
                Would SemVer even strictly require a jump here? They didn't change what's usually thought of as the API of the library (i.e. if I don't compile it myself I don't really notice the change?), and that's what SemVer uses: "MAJOR version when they make incompatible API changes,"
                • chrisoverzero 1152 days ago
                  Also a great point!

                  I can’t think of a clean alternative other than coming full-circle back to the “developer advocacy” solution, with its clear problems. Someone smarter than I am probably has it in the palm of their hand, though.

                • AaronFriel 1152 days ago
                  • detaro 1152 days ago
                    On the other hand, from what I understand they do ship precompiled wheels for many platforms, just missed one lots of people use in their CI setups (Alpine, which uses musl and thus isn't compatible with other Linux wheels - personally I think that's an odd choice but whatever, people do it)? Easy to imagine that many more people compiled it than they expected.
      • rodgerd 1152 days ago
        > The reply to that by the Gentoo guy (who started the Github issue) was that package maintainers cannot follow every single mailing list of every dependency.

        It seems reasonable that, if you are the packager for critical packages, you follow critical dependencies?

        If the problem is that the distro is supporting so many things that the folks working on it can't keep up, well, that's precisely the author's point: stop pretending that you can support HPPA and MIPS or whatever as well as you can support x86_64. But you don't get to tell a million people that they have to have a less secure Python because 3 people have a toy in a closet they want treated as a first-class citizen.

        • FridgeSeal 1152 days ago
          The number of people in the original thread who didn’t appear to be version pinning, and then got upset that a package they directly relied upon automatically upgraded, is eye-watering.
        • eyelidlessness 1152 days ago
          > stop pretending that you can support HPPA and MIPS or whatever as well as you can support x86_64

          And then the corresponding uptightness will be “FOO_PROJECT is aligned with the Intel monopoly”, and just as many people will be unhappy. You can see this in many recent threads about Apple not providing free access to M1 documentation for alternative OSes they’re under no obligation to support.

          • rodgerd 1151 days ago
            That was, of course, a complaint about Linux back in the day. It turned out nobody cared enough to stop Linux development.
            • eyelidlessness 1151 days ago
              That’s pretty much verbatim the reply I had when people on here were prognosticating it for the M1, other than adding that they also promoted Linux virtualization in the announcement.
        • throwdbaaway 1152 days ago
          Alright, let's do some digging...

          On 2013-03-21, urllib3 added an optional dependency to pyopenssl for SNI support on python2 - https://github.com/urllib3/urllib3/pull/156

          On 2013-12-29, pyopenssl switched from opentls to cryptography - https://github.com/pyca/pyopenssl/commit/6037d073

          On 2016-07-19, urllib3 started to depend on a new pyopenssl version that requires cryptography - https://github.com/urllib3/urllib3/commit/c5f393ae3

          On 2016-11-15, requests started to depend on a new urllib3 version that now indirectly requires cryptography - https://github.com/psf/requests/commit/99fa7bec

          On 2018-01-30, portage started to enable the +rsync-verify USE flag by default, which relies on the gemato python library maintained by mgorny himself, and gemato depended on requests. So 5-6 levels of indirection at this point? I lost count.

          On 2020-01-01, python2 was sunset. A painful year to remember, and a painful migration to forget. And just when the year was about to end...

          On 2020-12-22, cryptography started to integrate rust in the build process, and all hell broke loose - https://github.com/pyca/cryptography/commit/c84d6ee0

          Ultimately, I think mgorny only has himself to blame here, by injecting his own library into the critical path of gentoo, without carefully taking care of its direct and indirect dependencies. (But of course it is also fair game to blame it on the 2to3 migration)

          In comparison, a few months before this, the librsvg package went through a similar change where it started to depend on rust, and it was swift and painless without much drama - https://bugs.gentoo.org/739820 and https://wiki.gentoo.org/wiki/Project:GNOME/3.36-notes

      • klyrs 1152 days ago
        Folks with an eye towards backwards compatibility typically don't implement breaking changes without a year of deprecation warnings emitted by the software. Contrast that to a notice posted in a sub-basement 6 months before instituting a breaking change. The shock and alarm do seem warranted IMO.
        • detaro 1152 days ago
          How do you do a deprecation warning for a new language being a build dependency? What does it look like? "your build environment is being deprecated"?
          • tom_mellior 1152 days ago
            In C you can implement a "your build environment is being deprecated" message like this:

                #ifndef I_UNDERSTAND_THAT_SOON_RUST_WILL_BE_MANDATORY
                #error "Starting with version xxx this package will need Rust to compile. Recompile with -DI_UNDERSTAND_THAT_SOON_RUST_WILL_BE_MANDATORY to acknowledge that you understand this warning."
                #endif
            
            Anyone building from source would get notified in a way that's impossible to miss but easy to turn off.
            • marcinzm 1152 days ago
              And thousands upon thousands of automated CI jobs and docker container builds fail. You're basically causing massive developer stress to anyone who automatically compiles your package. Most would consider that a bad tradeoff to help a tiny fraction of your users who'd be impacted and also refuse to follow your mailing list.
              • tom_mellior 1152 days ago
                If the change is transparent to all but a tiny fraction, you can of course guard the above with

                    #if !(defined(__x86_64__) || defined(__aarch64__))
                
                (those being the platform macros GCC and Clang define for x86-64 and AArch64).

                For whatever it's worth, if the C code in the cryptography package didn't already contain the above check, along with some hurdle requiring the user to compile with -DNOT_OFFICIALLY_SUPPORTED_TARGET, then the article's point is false: if you don't prevent users from compiling your security-relevant, reliant-on-exact-memory-semantics software on System/390, then you are implicitly supporting System/390.

                And for that matter, the Rust code should probably contain a similar check: Even if someone ported Rust to System/390, the cryptography library shouldn't magically start working there, unless the developers actually test there.

              • eyelidlessness 1152 days ago
                > And thousands upon thousands of automated CI jobs and docker container builds fail. You're basically causing massive developer stress to anyone who automatically compiles your package.

                This isn’t a major source of stress: you include migration instructions and even tooling to automate it. The thing that’s stressful is when your build fails with completely unexpected errors and no indication of what went wrong.

                Loudly announcing breaking changes is disruptive to some extent, but not doing so either means more disruption or nothing can ever change at all.

                • setr 1152 days ago
                  Isn't this basically equivalent to what they did? Their actual action was to include a dummy rust requirement just to break/warn people who wouldn't be prepared for it as an actual dependency.
              • naniwaduni 1152 days ago
                You're going to do that a few versions down the line anyway. Why not ahead of time?

                The only downstreams this affects more are the ones who decide to suppress the warning for now and ... put it off until you release the change that actually required breakage.

              • ohgodplsno 1152 days ago
                These developers cause themselves stress by building against the latest library without a care in the world. Lock your dependencies, regularly upgrade them while looking at patchnotes, and it won't be a problem.
              • casept 1152 days ago
                CI builds should be locked to exact versions anyways, for the sake of reproducibility.
            • unanswered 1152 days ago
              This is for all intents and purposes exactly what the cryptography maintainers did, except they did one better: they made sure this release is capable of validating the new configuration. They made rust an optional, but on-by-default, dependency. That is the equivalent of #error plus validation. They made it possible to turn this dependency off. That is the equivalent of -DI_UNDERSTAND.
            • detaro 1152 days ago
              True, that's possible, but quite brutal, and I'd expect that would merely have led to shouting and people going wild over their builds breaking a year earlier.
          • klyrs 1152 days ago
            Yes. It's pretty simple to insert a stub module which is imported in a try/except, where failure to load results in a warning.
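
            Something like this in the package __init__, say (the stub's name is hypothetical):

                import warnings

                try:
                    # Stub extension that only builds when a Rust toolchain is
                    # present in the build environment.
                    from . import _rust_stub  # noqa: F401
                except ImportError:
                    warnings.warn(
                        "No Rust toolchain was found at build time. Future "
                        "releases will require one to build from source.",
                        FutureWarning,
                        stacklevel=2,
                    )
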
            • detaro 1152 days ago
              Hm yeah, that sounds like a bit of a hassle to set up (have it build where possible but not fail the build), but otherwise an elegant solution, since it would pass cleanly for people that have a suitable environment.
        • Twirrim 1152 days ago
          > project mailing lists, Github issue tracker, project IRC channel, documentation (changelog, FAQ), and Twitter channels

          It's hardly a notice posted in a sub-basement 6 months before instituting a breaking change. They communicated via numerous mechanisms.

          • klyrs 1152 days ago
            It's a sub-basement from the perspective of folks for whom this is a second-order dependency. And it sounds like there are many more of them than there are folks that caught wind of this.

            Regardless. The common practice is a year of emitting warnings from the software that the end-user will see. That's prominent and allows package maintainers enough time to work around the upcoming breakage. Six months' notice on a mailing list is simply not prominent enough, and it's half the standard year, which puts undue strain downstream.

  • Fordec 1152 days ago
    I do think that we're going to see more of this. This is just a relatively early example. Rust and LLVM bring things to the table that make them inevitable if we as an ecosystem, on the whole, value privacy and security. This is where the rubber of the ethos hits the road. C, for all its good work and history, is a leaky mess and the source of so many zero-days, especially the really bad, state-actor-level ones.

    If we are to move to a more abstracted and safer system, the ideas behind LLVM are just going to be a fact of life in years to come. The solution to this is to either fund greater LLVM integration (or something similar that isn't on the scene yet) or accept the status quo. I choose the former, but I sincerely hope the future direction leaves as few out in the cold as possible through the effort of smart people. But protecting hobbyists is a stretch goal in my mind compared to improving the security and privacy of the global interconnected world we're in.

  • JoeAltmaier 1152 days ago
    Don't know about crypto. But any open source package, ported to a new environment (one not explicitly tested by the maintainer), is going to have issues. That's the nature of software.

    E.g. I just added pnglib to an embedded assembly that has a display, among other things. I wanted to put compressed PNG images into (limited) flash, then decompress them into (plentiful) RAM at boot.

    Of course pnglib didn't build for my environment, never mind that it's a C library. There are 50+ compile switches in pnglib, and I was setting them in a pattern that wasn't part of the tested configurations. It didn't even compile. Once it compiled (after some source changes) it didn't run. More source changes and it would run until it hit some endianness issues. Fix those, and it would do (just) what I wanted, which was to decompress certain PNG image formats with limited features, once.

    No problem. That was my goal, achieved. But at no time did I blame the maintainers for not anticipating my use case.

    I would say this: maintainers, flag compile-time options that aren't tried both ways in your test environments, to give me some chance of estimating how hard my job is going to be.

    • tptacek 1152 days ago
      The software in this story is working as intended in its new configuration. It's not reasonable to ask for features to be flagged off on the off chance that someone is running your code on an S/390.
      • JoeAltmaier 1151 days ago
        If your test cases don't try it, I'd recommend either removing the code or documenting that the flag is not optional? For industrial code that would be an ordinary, expected process. But open source gets a lot of passes on process.
  • ris 1152 days ago
    The problem is that ARM & AArch64 were considered a "weird architecture" by the non-Android Linux stack until really very recently, and the migration to a new architecture is still not plain sailing in many people's experience. Without the assumption that most open source package authors are implicitly trying to be architecture-independent to some degree, we will literally all be stuck on x86_64 for the rest of our lives (the migration to amd64 itself I remember as being a number of years of people working on "unsupported" stuff, FWIW).

    For users on "weird architectures" to be petitioning that a move the pyca authors are making is causing inconvenience to them is perfectly reasonable in my eyes.

    • simias 1152 days ago
      I don't think it's a good comparison. The architectures mentioned in the pyca/cryptography repo are:

      - Alpha: introduced 1992, discontinued in the early 2000s

      - HP/PA: introduced 1986, discontinued in 2008

      - Itanium: introduced 2001, end of life 2021

      - s390: introduced 1990, discontinued in ~1998

      - m68k: introduced 1979, still in use in some embedded systems but not developed at Motorola since 1994.

      ARM was once not as popular as it is nowadays but it was never moribund and in my experience has always had decent tooling and compiler support. Furthermore I'm sure that if tomorrow HP/PA makes a comeback for some reason, LLVM will add support for it. Out of the list I'd argue that the only two who may be worth supporting are Motorola 68k and maybe Itanium but even then it's ultra niche.

      I personally maintain software that runs on old/niche DSPs and I like emulation, so I can definitely feel the pain of people who find new releases of software breaking on some niche arch they use (for instance, I tried running Rust on MIPS-I but couldn't get it to work properly because of lack of support in LLVM). These architectures are dead or dying, not up-and-coming like, say, RISC-V, which has gained some momentum lately.

      But while I sympathize with people who are concerned by this sort of breakage, it's simply not reasonable to expect these open source projects to maintain backward compatibility with CPUs that haven't been manufactured in decades. As TFA points out, it's a huge maintenance burden: you need to regression-test against architectures you may know nothing about, you may not have an easy way to fix the bugs that arise, etc.

      > open source groups should not be unconditionally supporting the ecosystem for a large corporation’s hardware and/or platforms.
      Preach. Intel is dropping Itanium, HP dropped HP/PA a long time ago. Why should volunteers be expected to provide support for free instead?

      It's like users who complain that software drops support for Windows 7 when MS themselves don't support the OS anymore.

      • ndesaulniers 1152 days ago
        SystemZ, which is what s390/s390x became, seems relatively well supported by IBM and Red Hat in my experience.
        • azernik 1152 days ago
          See footnote 6 in the original article:

          "That’s the original S/390, mind you, not the 64-bit “s390x” (also known as z/Architecture). Think about your own C projects for a minute: are you willing to bet that they perform correctly on a 31-bit architecture that even Linux doesn’t support anymore?"

      • PurpleFoxy 1152 days ago
        Is it not reasonable to throw this back at the CPU makers? If you want to bring out a new CPU architecture, port all the compilers to it before you start selling it.
        • msla 1152 days ago
          The problem is that chipmakers have historically made their development environments closed-source and, often, not very pleasant to work with. Maybe this is more of a problem with demonstration boards meant primarily for embedded systems people, but if you rely on TI, for example, to provide a compiler, they'll give you a closed-source IDE for their own C compiler which may or may not be especially standards-compliant.

          I hesitate to imagine what it would take to get a hardware maker to contribute patches to LLVM.

          • PurpleFoxy 1152 days ago
            And we have seen that that is to the detriment of the chip maker. It’s said that Atmel chips became way more popular than PIC because of avrdude and cheap knock-off programmer boards on eBay. A modern-day architecture would be competing with these already-established open source toolchains, so it would either remain obscure (like FPGAs are now), open source its stuff, or have to be on the scale of Apple or Microsoft to be able to outcompete the open source stuff (though for what purpose?).
        • tedunangst 1152 days ago
          If new CPU makers are expected to update all existing compilers, wouldn't the counterpoint be that new compiler writers are expected to support all existing CPUs?
          • PurpleFoxy 1152 days ago
            IMO it depends on who stands to gain from it. If you make a new compiler, you need to make sure x86 and ARM work because that’s what most of your users will be using. There is almost no gain in adding support for some ancient CPU that no one uses anymore.

            On the other side, if you make a new cpu architecture, all of your users (people buying the chip) will gain from porting compilers.

            No one is expected to do anything (unless they are being paid). It’s just logical for people to work this way.

          • wmf 1152 days ago
            Sure, but that wouldn't have helped in this case since 68K, Alpha, PA-RISC, and S/390 were not "existing" CPUs at the time Rust was invented.
      • monocasa 1152 days ago
        FWIW, s390 wasn't really discontinued in 1998. There are still new s390 chips being designed and used.
        • zozbot234 1152 days ago
          s390 is the 31-bit-only variant, which has been discontinued for some time. Modern variants are 64-bit based, and still supported.

          All that being said, it's quite worthwhile to include these "dead" architectures in LLVM and Rust, if only for educational reasons. That need not imply the high level of support one would expect for, e.g. RISC-V.

          • ndesaulniers 1152 days ago
            Two architectures currently being added to LLVM are m68k and csky. I don't think either is that new (I thought csky was, but Linux kernel architecture folks explained to me that it has old roots from Motorola, with folks from Alibaba using it for 32-bit but moving to RISC-V for 64-bit).
            • sanxiyn 1152 days ago
              Yes, csky is an mcore derivative. It's not entirely compatible, much like m68k and ColdFire.
          • monocasa 1152 days ago
            Lots of 32 bit code still gets run on these machines.
            • Thorrez 1152 days ago
              Could you expand on that? Are you saying that s390x can run binaries compiled for s390 and that today binaries are being compiled to s390 for the purpose of being run on s390x?
              • monocasa 1148 days ago
                Yes to both (at least for user-mode code, or "problem mode" in IBM parlance; kernel and hypervisor code is 64-bit only on newer chips). There's something like a 30% average memory savings for 32-bit code, so if your program fits in 2GB, it's a win on these massive machines that'll be running 1000s of VMs at close to 100% load. Nice for your caches too.
    • darksaints 1152 days ago
      But it is unreasonable, for one of two reasons:

      1) If the architecture is in active production, there is someone somewhere trying to make money by selling it. If they are intent on only supporting proprietary compilers, they need to accept the consequences of that decision: users won't use their hardware because they can't use the software that they want to use. If they want the architecture to be widely used, they have a fiduciary obligation to ensure that they have reliable and well tested backends to major compilers.

      2) If the user is using old architectures that are no longer in production or no longer supported, there isn't ever any reasonable expectation of continuing software support. You're stuck with old software, full stop.

      In the case of your objection, AArch64 and ARM manufacturers have the obligation to develop openly available backends for their architectures. And they've taken that seriously, as should any newcomer architectures.

      • zozbot234 1152 days ago
        > If the user is using old architectures that are no longer in production or no longer supported, there isn't ever any reasonable expectation of continuing software support. You're stuck with old software, full stop.

        That's not a very reasonable POV. Many of these architectures are very well understood and very easily supported via emulation. There's no need to run them on actual hardware, especially if you aren't dealing with anything close to bare-metal quirks.

    • steveklabnik 1152 days ago
      Incidentally, aarch64-unknown-linux-gnu became a Tier 1 supported platform in Rust recently, in part because of the support of ARM themselves.

      (My day job involves a lot of Tier 2 ARM work, and I don't personally run into any more bugs than Tier 1 platforms. YMMV.)

    • rodgerd 1152 days ago
      > The problem is that ARM & AArch64 were considered a "weird architecture"

      The ARM world is a blizzard of proprietary, undocumented implementations with limited support for the upstream kernels, often can boot only a vendor-specific distro that is quickly abandoned, and full of boards that blink in and out of existence at the drop of a hat. It absolutely is a weird architecture.

      > For users on "weird architectures" to be petitioning that a move the pyca authors are making is causing inconvenience to them is perfectly reasonable in my eyes.

      Yes, this is exactly the sense of entitlement that the author is talking about when he describes the destruction of people's interest in working on open source.

      • casept 1152 days ago
        That may be true for peripherals, but all a compiler has to care about is the core ISA. The board zoo is very much not relevant.
        • rodgerd 1151 days ago
          But to meaningfully support a thing, per the original author, "compiles on a version of the ISA" (and there are many of those for ARM) and "actually works as intended" are not the same. The latter means caring about things like "this ARM core runs these extensions", "this ARM core is really just a coprocessor to a binary-blob processor", and "this ARM core is buggy as fuck but no longer supported by the vendor". Where's your source of randomness for crypto, just as a starting point?

          People want - to borrow from the BSD world - FreeBSD levels of support for specific chipsets and features, with OpenBSD levels of support for security, and NetBSD levels of portability. These are not compatible outcomes, and folks should stop pretending that they are.

      • ris 1152 days ago
        > this is exactly the sense of entitlement

        Now come on with your "entitlement". It's not as though we're talking about some random people who made some little package for their own use and decided to make it available in case anyone else found it useful, and now the community demands from them are becoming too much and are something they never asked for. This is a group that have named themselves the Python Cryptographic Authority and have chosen the prominent pypi package name of just "cryptography". They couldn't have done any more to encourage the broader community to depend on it and make it a core part of their stack.

        In comparison, I couldn't imagine the python core team (also largely unpaid) doing this with one of their stdlib modules and then dismissing those objecting as "entitled".

        (FWIW I'm not particularly interested in taking a side in this issue, but think your labelling as "entitled" is unhelpful)

        • darksaints 1152 days ago
          I agree with the idea that it is entitled. Hell, Python itself is only directly supported on a couple of architectures and operating systems. It has even fewer "tier 1" targets than Rust does! It is made available in source format only for other packagers to use as they see fit, but it is not python's responsibility to support it. Why should a library maintainer feel any obligation to support platforms that the language doesn't provide first class support for?
    • phire 1152 days ago
      But ARM/AArch64 did always have good compiler support. ARM was the second arch added to llvm.
  • h2odragon 1152 days ago
    If I fire up an Alpha CPU today, I'm not expecting that the latest versions of all my favorite free software are going to run on it. Asking maintainers to "fix it for me" then would be unreasonable. Part of running hardware that far out of date is hunting for the last versions of anything that supported it; whether that was FOSS or commercial, it's still the way the world is and has been.

    It'll be interesting to see how the Rust community responds to this; are they so eager to absorb everything that they'll put effort into supporting niches to get more users, or will they take the opportunity to be "opinionated" and exclusive and shed the effort of catering to obsolescence?

    • cbmuser 1152 days ago
      > If I fire up an Alpha CPU today, I'm not expecting that the latest versions of all my favorite free software are going to run on it. Asking maintainers to "fix it for me" then would be unreasonable.

      No one does that. What people complain about is that code that used to be perfectly portable for years suddenly becomes locked to a very limited set of targets with the argument that memory safety is more important than anything else.

      > It'll be interesting to see how the Rust community responds to this; are they so eager to absorb everything that they'll put effort into supporting niches to get more users, or will they take the opportunity to be "opinionated" and exclusive and shed the effort of catering to obsolescence?

      Rust just needs to help get one of the several alternative gcc-based Rust implementations officially supported, similar to gccgo.

      Then the portability issue will have been fixed once and for all and Rust will be chosen even for code on obscure targets such as Elbrus 2000, Sunway or Tricore.

      • pornel 1152 days ago
        The code that used to be perfectly portable still exists; fork it and keep using it.

        But if you're asking for a project to be maintained, then it means you want maintainers to put more work to keep new code working on an old platform. Constraints of old/niche platforms cause extra work for developers when adding new features or improving security.

      • msbarnett 1152 days ago
        > What people complain about is that code that used to be perfectly portable for years suddenly becomes locked to a very limited set of targets with the argument that memory safety is more important than anything else

        As TFA points out, this is a mistaken understanding of the situation. What we have here is code that gave the illusion of being “perfectly portable” (while not actually being written to target, or tested against, the peculiarities of niche architectures like Itanium and PA-RISC that it happened to successfully compile on) being replaced with a new version that only builds on machines whose security properties its authors have actually considered.

        That this inconveniences people is obvious. Why they imagine this is a net security loss for them is less obvious – the older C versions still exist, and any concerns that they’re missing out on new security updates are swamped out by the fact that the older versions may well never have behaved securely because nobody from the project was ever writing the code with PA-RISC’s memory and instruction ordering properties in mind to begin with.

      • Ar-Curunir 1152 days ago
        > Then the portability issue will have been fixed once and for all

        That’s just not true. These targets may make different assumptions about various low level things such as memory ordering, byte-width, behavior on overflow, etc. While C might be okay to defer to the architecture on these questions, Rust is more strict.

        I personally think you’ll just end up with a bunch of broken binaries.

      • rodgerd 1152 days ago
        > What people complain about is that code that used to be perfectly portable

        Literally the whole point of the article is that this assertion is bullshit. It was never perfectly portable. It merely happened to compile, and maybe work, and maybe actually was secure.

        > Rust just needs to help get one of the several alternative Rust implementations based on gcc officially supports similar to gccgo.

        Who is "Rust"? Why would they pour money and effort into this? Would the gcc community finally ignore rms' demands to make gcc as hostile as possible to implementing new front ends?

        > Then the portability issue will have been fixed once and for all and Rust will be chosen even for code on obscure targets such as Elbrus 2000, Sunway or Tricore.

        This does not solve the problem the author describes.

      • JulianMorrison 1152 days ago
        I think the point this article is making is that the portability was a sham and a mirage from the get go.

        Yes, it might compile on an oddball architecture, but a lot of that would be the autotools build system and the C compiler fudging around important details that could easily leave you with something that kinda-sorta runs but is a wide open security flaw. Or that runs for a while and then breaks in ways nobody could have predicted because you aren't meant to try to run cryptographic software on something out of the museum of historical computing.
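
        As a toy illustration (mine, not the article's) of code that "happens to compile" everywhere yet silently changes behavior, type-punning the bytes of an integer gives a different answer on big- and little-endian machines, with no warning from any compiler:

          #include <stdio.h>
          #include <stdint.h>

          int main(void) {
              uint32_t x = 0x01020304;
              /* Legal C (byte types may alias anything), but the result
                 depends on byte order: 0x01 on s390 or SPARC, 0x04 on
                 x86 or ARM64. */
              uint8_t first = *(const uint8_t *)&x;
              printf("first byte: 0x%02x\n", first);
              return 0;
          }

        A wire-format parser written this way passes every test on x86 and quietly corrupts data on a big-endian port.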

      • epage 1152 days ago
        What if, instead of Rust, they adopted C99 or newer, which is not supported on all obscure platforms? Should they be shamed for using the quality-of-life improvements that make it easier to maintain the project because someone used their project on an unexpected platform?
      • longcommonname 1152 days ago
        They can pin an older version and have it work.
    • guerrilla 1152 days ago
      Your question makes me wonder what niche Rust is trying to fill. Any language that can't bootstrap literally everything will never replace C.
      • nicoburns 1152 days ago
        I think the primary goal is Mac/Windows/Linux/BSD/iOS/Android on x86/ARM (32bit and 64bit) variants. That covers 99% of consumer and server computing. And aside from anything else: making that secure would be a huge win.

      But it's not like Rust doesn't have wide platform support. For example, it's already possible to run Rust on RISC-V. And it's improving all the time.

      • qbasic_forever 1152 days ago
        The niche of software that doesn't take over your computer with malware because you made an off-by-one error.
      • ithkuil 1152 days ago
        Wonder if Rust (or LLVM) could just have a C backend as a fallback for unsupported architectures. Perhaps some stuff would be slow, but likely faster than flat-out emulation.
        • steveklabnik 1152 days ago
          LLVM had a C backend at one point, but my understanding is that it bitrotted and was removed. I think there's been some work to bring it back? Not 100% sure.
      • h2odragon 1152 days ago
        I take no position on Rust other than interested observer; thus the question. There are several factions there. I think the "let's make a better language and spread it everywhere so it gets used" faction is going to be opposed by the "opinionated zealots of Correct Thinking", and I wonder which gets steamrolled.
        • guerrilla 1152 days ago
          What's the "Correct Thinking"?
  • thesuperbigfrog 1152 days ago
    It seems like the easiest answer is to fork the cryptography library:

    - current maintainers and those who are on supported architectures can use the Rust implementation. The current maintainers no longer want to maintain the C implementation and that is their prerogative as this article describes.

    - new maintainers and those on unsupported architectures can continue to use the C implementation. Not everyone in the current user base (to include some distribution maintainers) is able to use the new Rust implementation at this time, but they still need the library.

    It's not ideal, but it seems like the only practical way ahead that meets everyone's needs.

    • Lazare 1152 days ago
      > It seems like the easiest answer is to fork the cryptography library

      I would suggest that not only isn't the easiest answer, it's not an answer at all. Because...

      > new maintainers and those on unsupported architectures can continue to use the C implementation

      ...there will be no new maintainers. This came up a few times three weeks ago, and so far, a (small!) number of people have hopefully suggested that it would be nice if "someone" volunteered, none of them have actually followed through, and as far as I can tell, interest has only waned since.

    • Chyzwar 1152 days ago
      There is a high chance that there are no pyca/cryptography users on these niche platforms that Gentoo is trying to support. If there are any, they should pay for support, not expect maintainers to support their fridge platform.
      • toast0 1152 days ago
        If Gentoo doesn't work on fridges, what's up with this:

        > The Gentoo/s390 Project works to keep Gentoo the most up to date and fastest s390 distribution available.

        That's a declaration of support, and if that's not what they mean, they could list some limitations on their wiki. [1]

        A lot of system distributions declare some platforms as supported and others as best effort and still others as probably not working, but you're welcome to try. That's reasonable, of course, but it's nice if you're upfront about it.

        [1] https://wiki.gentoo.org/wiki/Project:S390

        • Chyzwar 1152 days ago
          Then this is up to Gentoo and IBM to support the latest versions on a platform discontinued in 1998! It is unreasonable to expect pyca/cryptography maintainers to support these platforms that are not even supported by Python itself.
        • cp9 1151 days ago
          sounds great! I look forward to gentoo and IBM stepping up to the plate to maintain support
      • sanxiyn 1152 days ago
        pyca/cryptography was an indirect dependency of Gentoo's package manager, Portage. Portage is written in Python. So by definition, there were users.

        "was", because after this incident, careful review revealed that it isn't necessary, and dependency got removed. So yes, probably no users now.

      • viraptor 1152 days ago
        > support their fridge platform.

        I love this typo in context of s390!

      • msla 1152 days ago
        Speaking of fridge platforms, does LLVM target any PIC ISAs?
  • nerdponx 1152 days ago
    No free work for platforms that only corporations are using. No, this doesn’t violate the open-source ethos; nothing about OSS says that you have to bend over backwards to support a corporate platform that you didn’t care about in the first place.
  • arithmomachist 1152 days ago
    > Companies should be paying for this directly: if pyca/cryptography actually broke on HPPA or IA-64, then HP or Intel or whoever should be forking over money to get it fixed or using their own horde of engineers to fix it themselves.

    This about sums it up.

  • titzer 1152 days ago
    I thought very hard about this problem as I've developed Virgil [https://github.com/titzer/virgil] over the years. Bootstrapping any system, not least a new programming language, is a hard problem.

    Virgil has an (almost) hermetic build. The compiler binaries for a stable version are checked into the repository. At any given revision, that stable compiler can compile the source code in the repo to produce a new compiler binary for any of the supported platforms. That stable binary is therefore a cross-compiler. There are stable binaries for each of the supported stable platforms (x86-darwin, x86-linux, JVM), and there are more platforms in the works (x86-64-linux, wasm) that don't yet have stable binaries.

    What do you need to run one of the stable binaries?

    1. JVM: any Java 5 compliant JVM

    2. x86-linux: a 32-bit Linux kernel

    3. x86-darwin: a 32-bit Darwin kernel*

    [*] sadly, no longer supported past Mavericks, thanks Apple

    The (native) compiler binaries are statically-linked, so they don't need any runtime libraries, DLLs, .so, etc.

    Also, nothing depends on having a compiler for any other language, or even much of a shell. There is test code in C, but no runtime system or other services. The entire system is self-hosted.

    I think this is a decent solution, but it has limitations. For one, since stable executables absolutely need to be checked in, it's not good to rev stable too often, since it will bloat the git repo. Also, checking in binaries that are all cross-compilers for every platform grows like O(n^2). It would be better to check in just one binary per platform, one that contains an interpreter capable of running the compiler from source to bootstrap itself. I guess I'll get to that at platform #4.

    • skybrian 1152 days ago
      I’m wondering if bootstrapping from WebAssembly would make sense someday, under the assumption that everyone has a browser? (Though a stand-alone interpreter is preferable.)
      • titzer 1152 days ago
        That's not a bad long-term plan (if there is a lightweight standalone Wasm interpreter), but Wasm is not quite ubiquitous enough. Hopefully!
    • pabs3 1152 days ago
      I think it would be much better to do Bootstrappable Builds instead of checking generated files into the repo. If no-one else can reproduce the builds of those files, then it will be hard to trust them.

      http://bootstrappable.org/ http://reproducible-builds.org/

    • breakfastduck 1152 days ago
      Question from a point of ignorance - why would you target 32bit for something that is being actively developed?

      Are we not at a point where 64bit should be the expected target?

      • jcelerier 1152 days ago
        WebAssembly, for instance, is a 32-bit target. So are most ARM hobby boards (even if the hardware is 64-bit, they ship with 32-bit OSes).
      • eqvinox 1152 days ago
        A lot of "IoT" devices are 32bit. And there's no reason for them not to be. (There's 8bit ones too.)
      • titzer 1152 days ago
        I bootstrapped on the JVM first, and then the first native bootstrap was around 2011. I am almost finished with my x86-64-linux port.
  • pdimitar 1152 days ago
    So, we have to choose between...

    - A tiny community of hobbyists willing to support niche architectures that have zero relevance to any mainstream computing,

    OR

    - Embrace a newer, stricter ecosystem with more guarantees and clearly communicated support tiers, one that's also constantly improved upon by a large number of both dedicated volunteers who donate their time and effort, and paid professionals. The only tradeoff: it supports fewer architectures. For now.

    Am I understanding the article correctly?

    If so, I am definitely in favor of the latter, and I think many others are as well.

    • alacombe 1152 days ago
      It's interesting that the community praising multiculturalism favors hegemony of only a few computer architectures. Portability used to be paramount in design...
      • pdimitar 1152 days ago
        Everybody can do whatever they like with their time. But if they want their exotic CPU architectures then they can support them themselves, no?

        However, demanding that mainstream tools lag behind because of said exotic architectures is unrealistic. At some point we all want to progress and advance our craft, especially the one that pays our bills.

        I'm not against multiculturalism. But we can't have it at the expense of everybody outside your small bubble having their tooling hampered and/or lagging behind on features that a lot of us need for commercial work (and not only that, I'd argue).

        Backwards compatibility is like everything else: it can't be praised as an absolute value and damn everything else.

        • alacombe 1151 days ago
          > However, demanding mainstream tools to lag behind because of said exotic architectures is unrealistic

          Rust is mainstream on HN, not in the real world. "Uber" or whatever unicorns using it doesn't make it mainstream.

          https://madnight.github.io/githut/#/pull_requests/2020/4 - Rust is barely reaching 1%.

          • pdimitar 1151 days ago
            I strongly disagree with those metrics and I would question how well they cover the real world out there.

            Rust is getting more and more prevalent and I'm saying that as a person that has barely worked in only one SV company for the last 5 years.

            I'm working outside the mainstream companies and I'm still seeing Rust gathering mindshare all the time wherever I go. And I'm not even hired for Rust positions.

            Anecdotal for sure, I'll agree, but your observation is no less anecdotal than mine.

            Rust brings very real advantages to the table and seeing people rebelling against it only on principle (and not on merit) is getting increasingly baffling. Feels like an emotional rebellion versus resistance based on facts and merit.

            • alacombe 1151 days ago
              Show me a metric which makes Rust a relevant language.

              > Anecdotal for sure, I'll agree, but your observation is no less anecdotal than mine.

              I haven't seen Rust in any "top X language" news. Prove me wrong.

              • pdimitar 1151 days ago
                Do you participate in any language popularity study that you stumble upon? I know I don't. And 99% of all my colleagues don't either.

                Even if I found a study that corroborated my observation I'd still not trust it. I don't make a habit out of supporting dubious studies only because they support my point of view.

                From what I've seen for 19 years of career, most working programmers refuse to participate in such studies.

                Hence I don't trust them either way. They work with a non-representative sample of the population. Not a big enough sample for the study to be valid.

                • alacombe 1151 days ago
                  Instead of shooting the messengers, please provide me a non-anecdotal reference about Rust being relevant.
                  • cycloptic 1151 days ago
                    Have you seen the stack overflow survey over the last few years? It has some interesting data.

                    https://insights.stackoverflow.com/survey/2020#technology-mo...

                  • pdimitar 1151 days ago
                    This will not go anywhere. :)

                    I prefer to look around -- this has always given me much more objective info throughout my entire career.

                    I get your skepticism but you are not arguing in good faith. I already asserted that to me those language popularity contests are dubious and non-representative.

                    If you disagree with that premise then we have zero common ground and can't discuss the topic. ¯\_(ツ)_/¯

                    • alacombe 1151 days ago
                      > This will not go anywhere. :)

                      Of course, you don't have any argument, so you're posturing and running away. Very immature.

      • wmf 1152 days ago
        Most CPU architectures are "fake diversity"; for example both Alpha and ARM64 are 64-bit little-endian with a weak memory model. Sure, S/390 is 31-bit and supports BCD while in PA-RISC the stack grows upwards and IA-64 is VLIW, but these are trivia that are not comparable to the diversity of human cultures. For decades programmers have wasted their time porting software to different-but-not-better architectures, mostly for the benefit of the vendors who fragmented the market in the first place instead of standardizing.
        • alacombe 1152 days ago
          > For decades programmers have wasted their time porting software to different-but-not-better architectures

          Microarchitecture, register layout, and ABI also constitute differences which have real-world uses, not to mention sheer competition to avoid architecture rot. Only targeting Intel and ARM opens yourself to problems, cf. the upcoming nVidia hell ARM is about to experience.

          Just because Rust-preaching (without practicing it, of course) Starbucks-sipping average HN readers don't know about it doesn't make it nonexistent or necessarily wrong-think.

          • pdimitar 1152 days ago
            > Microarchitecture, register layout, ABI also constitute differences which have real-world uses

            They do. I started coding on a 6502-based machine and used machine code to find prime numbers, some 27 years ago. I've used 1-2 other non-mainstream CPUs (whose names I don't even know) before diving neck-deep into the mainstream. It was fun, absolutely. It has potential, absolutely. Was it realized? Nope.

            However, I can't resist but asking: if those things do have their uses then why didn't the hobbyists support them through patches to GCC / clang and LLVM?

            Don't get me wrong. If you tell me we are stuck in a local maximum in CPU architectures, I'll immediately agree with you! But what would you have the entire industry do, exactly? Business pays our salaries and businesses need results in reasonable timeframes. Can you tell the guy who is paying you, with a straight face: "I need 5 years to integrate this old CPU arch with LLVM so we can have this feature you wanted last month"?

            > Just because Rust-preaching (without practicing it, of course) Starbucks-sipping average HN readers don't know about it doesn't make it nonexistent or necessarily wrong-think.

            That is just being obnoxious and not arguing in good faith. Example: I do use Rust, although not 100% of my work time.

            You should try the Rust language and tooling -- and I mean work actively with it for a year -- and then you could have an informed opinion. It would make for a more interesting discussion.

            Do I like how verbose Rust can be? No, it's irritating.

            Do I like how cryptic it can look? No, and it wastes time mentally parsing it (but it does get better with time so 50/50 here).

            Does it get stuff done mega-quickly and safer than C (and most C++)? Yes.

            Does it have amazing tooling? Yes.

            Does it get developed more and more and serve many needs? Yes.

            Does it reduce security incidents? I'd argue yes although I have no direct experience. Memory safety is definitely one of the largest elephants in the room when security is involved.

            ---

            You have a very wrong idea about the average Rust user IMO. I don't like parts of the whole thing but it has helped me a lot several times already -- and it gave me peace of mind. And I've witnessed people migrating legacy systems to it and showing graphs in meetings demonstrating that alarm and error analytics percentages plunged to 0.2% - 2% (they were always 7% - 15% before).

            Just resisting something because it starts going mainstream is a teenage-rebellion level of attitude and it's not productive. Do use Rust yourself a bit. Then you can say "I dislike Rust because of $REASON" and we can have a much more interesting discussion.

            • alacombe 1151 days ago
              > However, I can't resist but asking: if those things do have their uses then why didn't the hobbyists support them through patches to GCC / clang and LLVM?

              They didn't decide to create a whole new language and make everything dependent on it.

              At some point, when you reach a critical mass, you have to spend more on seemingly "irrelevant" tasks, like supporting other architectures. Don't shift the problem away by ridiculing it; own your shortcomings.

              • pdimitar 1151 days ago
                Okay, that's a more fair and balanced point of view.

                However, let's not forget one of the main points of the original article: nobody promised those people that their dependency's dependencies would never change. The crypto authors made a decision to go with Rust. If dependents want to continue using it, they have to adapt or stop using it.

                As I've said above: backwards compatibility is an admirable goal but it doesn't override everything.

                • alacombe 1151 days ago
                  > As I've said above: backwards compatibility is an admirable goal but it doesn't override everything.

                  You'll never get a job at Microsoft, or in any systems job where backward compatibility is paramount for millions, if not billions, of users (say, the Linux kernel). Just going the Apple "fuck you" way is arrogant at best, delusional at worst, especially when you're an irrelevant language.

                  • pdimitar 1151 days ago
                    I don't think the discussion will ever get anywhere if we only compare polar opposites.

                    I'm not advocating for either extremity, what about you?

                    • alacombe 1151 days ago
                      Backward compatibility is, by definition, an all-or-nothing binary deal. You can't have it otherwise.

                      [And yes, this is gonna be unpopular in a post-modernist era where everything gets constantly redefined and where there is no such thing as "meaning".]

                      • pdimitar 1147 days ago
                        Sorry that your work has made you so frustrated. It sounds stressful. IMO you should consider exiting your current company or area. Judging by your comments, you are pretty jaded (and set in your ways).

                        I am not interested in discussing extremes as mentioned in two separate sub-threads now but you do sound like you need a break. Good luck, man.

  • dfox 1152 days ago
    I think the main issue in this is the pyca/cryptography library itself and its unfortunate name. It bundles three only marginally related things (a high-level symmetric encryption API, X.509 handling, and a bunch of cryptographic primitives) into one library that is the first result on Google for "python cryptography".

    The end result of this is that other libraries which need only some small part (which is probably better provided by some other Python crypto package) depend on the whole library.

  • ndesaulniers 1152 days ago
    This comes up a lot with my work on compiling the Linux kernel with LLVM. So much so that I've made a tier list to describe what I picture as a Venn diagram of support: https://clangbuiltlinux.github.io/

    Also recently seen on this topic:

    https://people.gnome.org/~federico/blog/librsvg-rust-and-non...

    https://www.reddit.com/r/rust/comments/lfysy9/pythons_crypto...

    • eqvinox 1152 days ago
      Does the "powerpc" listed really mean "ppc64le" and the "s390" really mean "s390x"? Certainly feels like it, but I'm not sure...

      If you're already making such a nice overview, could you maybe not mar it with such ambiguity? Kinda defeats the purpose :(

      [Ed.: even "x86" is confusing. x86_64 does work, right?]

      • ndesaulniers 1152 days ago
        No. The kernel doesn't make such distinctions. The convention used is what the kernel sources call them under arch/. See arm64 vs aarch64 for example.
        • eqvinox 1152 days ago
          I guess that means that overview is for kernel developers. Those directory names are meaningless to me as a user...

          FWIW, I'm actually asking out of a real need. I have two powerpc boxes (no, not ppc64le) here, which we use for testing big-endian compatibility. But last I checked, LLVM supported neither 32-bit PowerPC nor big-endian PowerPC.

          Actually... I guess the real argument is that the kernel directory names aren't sufficient. Because they specify architecture families, not architectures.

          [And, funnily enough, LLVM does actually seem to support 32-bit BE PowerPC. Now I'm really confused.]

          • ndesaulniers 1152 days ago
            > But last I checked, LLVM supported neither 32-bit PowerPC nor big-endian PowerPC.

            How long ago did you check? I've been working on this for 3-4 years, and we've had coverage of kernel builds for 32b BE ppc for at least 2 years.

            I just checked our coverage, looks like we test ppc64, ppc64le, and ppc (32b BE). So no coverage of 32b LE atm, but I think llvm recently gained support for the relevant triple.

            Try it and let us know if it boots!

  • carapace 1152 days ago
    Something's fishy here...

    This is the first I've heard of the whole thing, so forgive me if I'm just bloviating.

    As a political move to advance the cause of memory-safe languages, as Alex Gaynor clearly intends†, this is obviously kind of a fiasco.

    The arguments about who's going to pay for safer software are uninteresting. It's like arguing over who is going to run into the burning building to save the baby. Put some liability laws in place and see what you get?

    As for Rust not supporting "weird architectures", are y'all serious about being the new contender or not? C isn't going to give up the title without an epic fight. Love it or hate it, to a first approximation C is programming.

    †I do not mean that in a negative way. FWIW I'm referring to https://www.usenix.org/conference/enigma2021/presentation/ga... which I just read. Personally, I'm at the "Bargaining" stage: I think we can still "turd polish" C into something, and Rust seems to me to be waaaaaaaay too complex (to replace C. It sure is fun though, eh?)

    • acdha 1152 days ago
      > As a political move to advance the cause of memory-safe languages, as Alex Gaynor clearly intends†, this is obviously kind of a fiasco.

      I’m not sure about that: there’s been a few people vocally complaining but it doesn’t seem like they’re getting much traction. I’d see this working as well for other maintainers saying “people using hardware which has been dying since the 90s aren’t a major constituency and I’d like the huge safety and productivity benefits of Rust”.

      I find it interesting that you describe Rust as too complex to replace C, when I generally see it as the reverse: I’ve written C off and on since the early 90s and it’s almost always felt like a chore because you have to do so much yourself that newer languages do for you. Rust hit a better comfort level after a couple hours because it was so much more productive that I could start making progress on design refactoring which had been postponed due to how tedious it would have been to do it before.

      • carapace 1152 days ago
        > I’m not sure about that: there’s been a few people vocally complaining but it doesn’t seem like they’re getting much traction.

        You don't see the problem with that?

        People (who already use a memory-safe language!) shouldn't be complaining when their crypto gets rewritten in a memory-safe language. If they are, someone shat the bed.

        And if your users are complaining but getting no traction, that is a (second!) shitting of the bed.

        > other maintainers saying “people using hardware which has been dying since the 90s aren’t a major constituency and I’d like the huge safety and productivity benefits of Rust”.

        We all want safe software. People aren't upset by choices being added, they are upset about something they (thought) they had that has now been taken away. If you break a bunch of people's stuff and then tell them "fuck you, pay us" or "your platform isn't hip enough" it's kind of a dick move, eh? The "optics" are bad.

        - - - -

        In re: complexity of Rust vs. C: The measure of complexity of a system isn't the ergonomics of the tooling, it's the time/effort for a newbie to achieve competence, eh?

        • acdha 1151 days ago
          > People (who already use a memory-safe language!) shouldn't be complaining when their crypto gets rewritten in a memory-safe language. If they are, someone shat the bed.

          Or they’re resistant to change and are going to complain about anything which means they have to learn something new. I mean, if you listened to the current crop you’d think that C builds worked perfectly everywhere, whereas it hasn’t even been a week since I had to debug a C-based extension install on an Alpine container.

          In reality, a large fraction of Python cryptography users switched without issues. They just didn’t take to the forums to say that it was working fine or that they were not unhappy, because they didn’t derive some portion of their self-image from mastery of a half-century-old systems programming language.

          > In re: complexity of Rust vs. C: The measure of complexity of a system isn't the ergonomics of the tooling, it's the time/effort for a newbie to achieve competence, eh?

          I’d really seriously question whether it’s correct to assume that it’s easier to learn C plus 50 years of add-on libraries to get close to Rust-level functionality, and to internalize all of the patterns people use to work around the unsafe and/or unfixable bits. That Rust newcomer will almost certainly have multithreaded code working safely while their C counterpart is acquiring some deep debugging experience. The higher-level structures are a huge win, and having core language support for basic tasks like text processing means a lot of mistakes go from being easily missed runtime errors or hard-to-debug crashes to IDE warnings or compile-time errors.

          • carapace 1151 days ago
            > That Rust newcomer will almost certainly have multithreaded code working safely while their C counterpart is acquiring some deep debugging experience.

            Yep. Which one will be better at debugging when things inevitably go wrong?

            The happy path is not the measure of complexity.

            Let me put it another way: Which would be simpler to implement from scratch, C or Rust?

            • acdha 1151 days ago
              > Yep. Which one will be better at debugging when things inevitably go wrong?

              I think you're dramatically under-estimating the amount of extra work a C developer has to do, ranging from not having many data structures to not having a package manager and thus needing to write a lot more code themselves. If C forced developers to be good at debugging we'd know by now — and I'd see less printf() debugging — and being forced to cobble together more of what you get out of the box with Rust means that those skills are less portable because different programs have different combinations of conventions, macros, and libraries in use. A C developer with a lot of Windows experience is going to have a rougher time debugging a Linux program (repeat for MacOS, iOS, *BSD, etc.) than a Rust developer simply because the limited language set means even basic tasks like text processing follow project-specific conventions.

              • carapace 1151 days ago
                Rust is more complex than C. It's not a controversial statement.

                You're arguing past that, saying that Rust is more ergonomic, or has more built-in functionality, or is more cross-platform, or has one true package manager, etc. All of that may be true but it doesn't make Rust simpler than C.

                In any event, if Rust wants to take over the world but disdains "weird architectures" or "dying" hardware, well, we'll see how that plays out for them.

                I hope we do get safer software one way or another, so I wish the Rust folks well. That's why I'm pointing out that this is lousy politicking. I'm not trying to hate Rust.

                • acdha 1151 days ago
                  > All of that may be true but it doesn't make Rust simpler than C.

                  Note that this is not a claim I made - in part because it’d start getting into questions of how you define “simple”. The core C language is certainly smaller, although internalizing some of the undefined bits certainly takes some time, but the question at hand was “competence” which, in a thread about security software, I have been treating as the ability to write code which not just runs but is secure. Based on how routinely C programs have exploitable bugs, I would argue that the average developer using a memory-safe language goes past the point of “can write something trivial which executes” to “can write code doing something real which is not insecure” faster.

                  • carapace 1147 days ago
                    Let's say I have some new piece of hardware that is exotic enough that no high-level language is available for it yet. I think it's obvious that writing a C compiler is generally an easier task than writing a Rust compiler. As long as that's true, and I can leverage C code on my new hardware much more easily than Rust code, Rust can't replace C.

                    An "average developer using a memory-safe language" can't go anywhere if that language doesn't run on the machine.

                    • acdha 1147 days ago
                      Do you think average developers implement C compilers and toolchains on novel hardware frequently? That’s a pretty niche case and the vast, vast majority of C code is running on a handful of well supported architectures.

                      It’s also not like that’s trivial even for C (I remember how many decades it took companies employing large numbers of engineers to do so), and these days a better question is something like how long it would take to implement an LLVM backend.

                      • carapace 1145 days ago
                        Here's an example of the kind of thing I'm talking about, from the HN front page today:

                        "Why I rewrote my Rust keyboard firmware in Zig: consistency, mastery, and fun" https://kevinlynagh.com/rust-zig/

                        I apologize in advance if it seems like I'm moving the goalposts. You're right that implementing a C compiler isn't a common or trivial task. My point is that implementing a Rust compiler is a much more complex task.

                        > the vast, vast majority of C code is running on a handful of well supported architectures.

                        Irrelevant. How much C vs Rust (vs Zig or D or ...) is running on the long tail of hardware?

                        FWIW, if Rust displaces C on the lion's share of machines, that's great. I'm not against Rust, or in favor of C.

                        > these days a better question is something like how long it would take to implement an LLVM backend.

                        Yes, absolutely, I agree.

                        Ideally you would have a program that takes as input a machine description and emits as output a correct Rust (or C or Zig or D or ...) compiler for that machine.

        • mlindner 1149 days ago
          > The measure of complexity of a system isn't the ergonomics of the tooling, it's the time/effort for a newbie to achieve competence, eh?

          Honestly it takes a very very long time to become competent in C. I do C for my day job (or rather did until recently) and I still don't call myself competent in the language. There are so many intricacies that can bite you and you're constantly learning it. I used to love C, now I hate it.

    • lmm 1152 days ago
      > As a political move to advance the cause of memory-safe languages, as Alex Gaynor clearly intends†, this is obviously kind of a fiasco.

      On the contrary, I think this is a brilliant move; it's pulled C fans into a blatantly unreasonable position, making it clear how untenable their position is.

      > As for Rust not supporting "weird architectures", are y'all serious about being the new contender or not? C isn't going to give up the title without an epic fight. Love it or hate it, to a first approximation C is programming.

      Nonsense. Serious programmers (e.g. those doing it professionally) have already mostly moved on from C. Open-source is bound up with Linux and C for historical reasons, and also because the top mainstream languages were not open-source until recently, and so is disproportionately behind the times.

      • WoodenChair 1152 days ago
        > Open-source is bound up with Linux and C for historical reasons, and also because the top mainstream languages were not open-source until recently, and so is disproportionately behind the times.

        The top mainstream languages (C, C++, Java, JavaScript, Python, C#, etc.) have all had at least one open source implementation for at least a decade. A decade is not a long time in the life of programming languages but it’s also not “recently.”

        • lmm 1151 days ago
          I remembered the issue with Java; it's not really open-source because it's covered by patents that are only licensed for implementations substantially derived from OpenJDK.
        • lmm 1152 days ago
          I'll admit that GPLed Java is much older than I thought, but open-source C# was certainly not first-class until 2016, and IMO not really until late last year.
          • WoodenChair 1151 days ago
            Mono (open source implementation of C#) is much older than that.
            • lmm 1151 days ago
              Indeed, but it was never first-class. A lot of libraries weren't available or didn't behave correctly.
              • WoodenChair 1150 days ago
                The original post said languages not “top implementation.”
                • lmm 1150 days ago
                  I said the languages "weren't open-source". The only reasonable interpretation of whether a language "is" open-source is whether it has first-class open-source implementations and tooling. Otherwise we'd say things like "Windows is open-source" because ReactOS exists.
                  • WoodenChair 1149 days ago
                    Putting Mono in the same category as ReactOS is disingenuous at best. Mono was such a good open source implementation that Microsoft eventually bought the company behind it and canonized it (Xamarin).
                    • lmm 1145 days ago
                      My experience was that you couldn't take a random C# project and run it on Mono and expect it to work. I don't want to diminish the technical effort that went into Mono, but it would be misleading to say that open-source C# worked without further qualifications.
                      • WoodenChair 1145 days ago
                        Kind of like it would be misleading to say that most popular programming languages were not open source until recently.
                        • lmm 1143 days ago
                          I don't think it is. What does it mean for a language to be open-source, if not that there are one or more open-source implementations of that language which support most or all of the extant ecosystem for that language?
      • carapace 1151 days ago
        > Serious programmers (e.g. those doing it professionally) have already mostly moved on from C.

        Thank you for the LOL. It's been a rough day and I needed that.

        > On the contrary, I think this is a brilliant move;

        It is not.

        I commend to your attention Tao Te Ching, chapter 17.

        https://www.egreenway.com/taoism/ttclz17.htm

  • reidacdc 1152 days ago
    I'm not hugely knowledgeable about LLVM, but it was my understanding that a major benefit of it was separating out high-level from low-level concerns.

    There are lots of practical issues, obviously, but the dream was that the LLVM intermediate representation would take the role of C in the author's description, with the major difference that nobody has to actually hand-write anything in it.

    So from that point of view, isn't the robust, long-term solution to this and related issues to build LLVM back-ends for all the "weird" architectures?

    I'm surprised it's not mentioned as a possible way forward.

    • qbasic_forever 1152 days ago
      Yes, the reality is all those weird architectures stopped being developed and actively supported 20 years ago. If someone wants to get some old Itaniums and Alphas and such together to hack in support, go wild and have fun. But the reality is the Venn diagram intersection of people knowledgeable enough to create an LLVM backend, people motivated to support vintage architectures, and people who actually have the hardware available and ready to test is basically zero. If people in the community are passionate about it happening, then organize and make it a reality.
      • rodgerd 1152 days ago
        It's also that - per the article - it goes beyond "willing and able to implement in LLVM". Because that doesn't let the Rust devs or the Python devs test on those vintage platforms.
    • tom_mellior 1152 days ago
      LLVM IR is not platform independent: https://releases.llvm.org/8.0.0/docs/FAQ.html#can-i-compile-...

      There is no "compile to LLVM once, run anywhere with an LLVM backend". If your C frontend doesn't know that you want to compile for System/390, it will not be able to generate LLVM code that you can expect to turn into a working System/390 binary.

      • astrange 1152 days ago
        There are attempts to make portable IRs, like PNaCl and WebAssembly, but it's difficult to make something both portable and super-performant at the same time. You'd probably need to have some kind of optional SIMD, and variant function support so that both the SIMD and scalar versions exist and are both hand-optimized. Then paper over some other issues like unaligned memory access and the amount of masking in shift counts.

        That would be enough to get it good on all of x86/ARM/RISCs I think, but to go further and support Alpha or VLIW machines well you'd need to include a lot of metadata in the IR to provide aliasing info, a problem I don't think anyone has taken seriously.

        (Everyone always says "this language is good enough as long as you're not writing video codecs or something!". Well, I am writing video codecs or something.)

        • tom_mellior 1152 days ago
          I'm not sure about SIMD. It's possible to recover (more likely, create by loop unrolling) parallelism that can be turned into SIMD code on the LLVM level. That's what LLVM already does.
          • astrange 1151 days ago
            Autovectorization for SIMD is, well, not great. The worst problems with it are when you run it against already-SIMDed code (it tends to mess it up), which doesn't apply here, but it also tends to fail a lot and it relies on a lot of memory aliasing info. That's why it works well in Fortran, whose aliasing rules are much stricter than C's.

            I think a reasonable portable bytecode would have stricter memory rules than C and so would be harder to optimize like this.

            So that's why I proposed having variants, but you could also invent some abstract vector operations and then scalarize them if they're not available. That's how shader languages do it.
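
            For what it's worth, a rough sketch of the per-function variant idea already exists at the compiler level, though only within one ISA family: GCC and Clang function multi-versioning (x86 with glibc ifunc support assumed here) emits one clone per feature set and picks one at program load.

              /* Emit AVX2, SSE2, and baseline clones of one function;
                 the loop may be vectorized differently in each clone. */
              __attribute__((target_clones("avx2", "sse2", "default")))
              void add_arrays(float *dst, const float *a,
                              const float *b, int n) {
                  for (int i = 0; i < n; i++)
                      dst[i] = a[i] + b[i];
              }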

    • kps 1152 days ago
      The problem is that LLVM IR is not stable across releases, so keeping an out-of-tree back end up to date would be expensive.
  • avereveard 1152 days ago
    > Your user base is unhappy

    It isn't the owner's user base if it comes from an unsupported third-party build.

    • woodruffw 1152 days ago
      > It isn't the owner's user base if it comes from an unsupported third-party build.

      Maybe this is strictly true, but it isn't how the maintainer/packager model has historically worked: packagers do point users to the upstream for troubleshooting, and the upstream points prospective users to the packagers for installation. It's a marriage of convenience (and philosophy), but it doesn't imply that arbitrary patches and/or changes to the project itself are somehow supported by their upstreams.

      • alisonkisk 1152 days ago
        No. A packager should never send a bug reporter upstream. It's the packager's job to investigate the issue and file a bug with upstream including proper context, unless the bug is reported on multiple distributions.
        • account42 1151 days ago
          That is just unreasonable. For build issues, especially ones that the packager can reproduce, making sure that all that is reported properly makes sense. But most bugs are not that; playing the telephone game here helps no one.
    • Lammy 1152 days ago
      While technically true, that doesn't mean users will understand that and send their bug reports to the right place. For some historical examples of maintainer burnout and community clashes, see QuodLibet vs Gentoo in 2005/2005: https://bugs.gentoo.org/101619 / https://bugs.gentoo.org/124595

      And GAIM (Pidgin) vs Gentoo in 2003 (https://bugs.gentoo.org/35890), where we can see that nothing ever really changes, RE: "You don't happen to be using an architecture other than x86, do you?"

      • ziml77 1151 days ago
        Wow, what a shitty exchange between Gentoo maintainers and Quod Libet's developer. They actually told the dev to do the work of fixing the distro's packages!

        Is the Gentoo community still that awful 15 years later?

        • Lammy 1151 days ago
          Nah, this was like funroll-loops-dot-info era and we all grew up. I don't know what distro Kids These Days prefer.
  • yellowapple 1152 days ago
    > open source groups should not be unconditionally supporting the ecosystem for a large corporation’s hardware and/or platforms.

    Then by this logic I'm going to stop developing against pretty much every ISA aside from maybe RISC-V, and even that's a stretch. x86 users can piss off until Intel, AMD, and/or VIA cough up some dough to my single-digit-user-count FOSS projects.

    ...obviously this idea, if taken to its logical conclusion, basically means the death of open source software development as we know it. The author had a decent point until not only suggesting refusing to support "niche" platforms entirely, but making that suggestion the "most important".

    As a counterexample to the author's point, see OpenBSD's support for platforms unsupported by Rust. The rationale there is not for monetary gain, but for two key reasons:

    1. People want to be able to run OpenBSD on whatever hardware they've got lying around, and when they figure out how to do it they might as well help others do the same.

    2. In line with the author's point about how most C developers make platform-specific assumptions about the code they write, OpenBSD targeting a bunch of "niche" platforms with oddball conventions helps catch those assumptions early - and with them, any lurking security bugs deriving from those assumptions failing.
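
    To make point 2 concrete, here is a sketch (my own example, not OpenBSD code) of an assumption x86 forgives but a strict-alignment platform like SPARC or older ARM turns into a SIGBUS at runtime, which is exactly the kind of bug an oddball port surfaces early:

      #include <stdint.h>
      #include <string.h>

      /* Compiles everywhere and "works" on x86, but is undefined
         behavior when buf is misaligned, and traps on strict-alignment
         CPUs. */
      uint32_t read_u32_unsafe(const uint8_t *buf) {
          return *(const uint32_t *)buf;
      }

      /* The portable version; compilers turn the memcpy into a single
         load on architectures that permit unaligned access, so nothing
         is lost. */
      uint32_t read_u32(const uint8_t *buf) {
          uint32_t v;
          memcpy(&v, buf, sizeof v);
          return v;
      }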

    • ohgodplsno 1152 days ago
      Sigh. The HackerNews standard of pushing some logic to its limits and going "HA, GOTCHA" is getting tiring.

      x86, x86_64, ARM are used by billions of people around the world. It's easy to see that there is actual value for end users by supporting these platforms. John Doe from accounting can easily use your open source software and it will benefit him.

      The benefits of supporting z/OS, aside from helping IBM push out more of their mainframes and make more money, are unclear. Supporting PA-RISC, POWER, or any other esoteric architecture whose sole purpose is to make its vendor sell million-dollar support contracts should not be anyone's priority, and if they do want this software on their non-standard, non-widely-used ISA, they can use some of those millions to fund LLVM development.

      • yellowapple 1151 days ago
        > Sigh. The HackerNews standard of pushing some logic to its limits and going "HA, GOTCHA" is getting tiring.

        So is the Hacker News standard of ignoring the entire point just to nag about said supposed standard. Sigh.

        Said entire point, specifically, being that there are far more reasons to develop and maintain software beyond "value for end users", and that even when targeting that specific reason, developing against esoteric cases does ensure the software is more robust even for "normal" use cases.

        (And mind you, logic - like software - should be pushed to its limits, because that's the most surefire way to identify its flaws - like, for example, the failure to consider that nearly all computing platforms in existence and use today were and are the product of some large corporation that could be paying your bills but won't, or the failure to consider that it's users, not hardware vendors, who are generating support requests for their hardware)

  • platformlover 1152 days ago
    "No free work for platforms that only corporations are using."

    The opposite is true.

    HP stopped supporting hppa 8 years ago. Intel had last orders for ia64 CPUs more than a year ago.

    There are no companies who would pay for anything; today, maintenance of such platforms is entirely an open source community effort.

  • SAI_Peregrinus 1152 days ago
    Packagers often fork software, but confusingly don't give the fork a new name.

    If an upstream project includes build scripts, then they're part of the project and any changes to the build scripts constitute a fork. Any patches to the actual code are even more clearly a fork.

    The article lists the following 5 things packagers sometimes do:

    1. Build your project with slightly (or completely) different versions of dependencies

    This doesn't necessarily require a fork. If you use static linking of dependencies it mostly does, but things like libc are often dynamically linked.

    2. Build your project with slightly (or completely) different optimization flags and other potentially ABI-breaking options

    If you're providing a build script, and they're changing this, it's a fork.

    3. Distribute your project with insecure or outright broken defaults

    If they're changing the provided code or configuration defaults, it's a fork.

    4. Disable important security features because other parts of their ecosystem haven’t caught up

    It's a fork.

    5. Patch your project or its build to make it “work” (read: compile and not crash immediately) with completely new dependencies, compilers, toolchains, architectures, and environmental constraints

    It's a fork.

    Project packagers should rename projects they fork. Project authors should make it easy to rename forks, preferably with a single location that defines the project name.

    Forks aren't a bad thing. But not labeling them correctly leads to quite a bit of confusion.

    • Spivak 1152 days ago
      If you’ve ever poked around the spec files for Fedora, CentOS, or RHEL, you know that this wouldn’t work. Everything is extensively patched. You might as well just call everything rhel-$project. Some projects genuinely have a hundred separate patches.

      Linux distros simply wouldn’t work without this. Once you have two pieces of software that depend on different versions of libfoo, or on two different versions of glibc, it’s over. Nix is doing heroic work in this space, but it also doesn’t care that you have 12 different versions of zlib which, in an enterprise world, still need security backports because upstreams don’t do that.

      • dralley 1152 days ago
        This is hardly unique to Fedora / CentOS / RHEL. Debian does the same, even more aggressively in some ways, such as breaking up libraries into many smaller independent packages. At that point it truly is a fork.
      • genuine_smiles 1152 days ago
        > You might as well just call everything rhel-$project.

        Is this a bad option?

        • zajio1am 1152 days ago
          If every package in RHEL is named rhel-$project, and every package in Debian is named deb-$project, then these prefixes do not add any relevant information, just inconvenience. It is already generally understood that packages from distributions are patched by distributions.
      • wbl 1152 days ago
        Static link to files in standard places and bring the bugs upstream.
    • PurpleFoxy 1152 days ago
      Debian used to do this for Firefox. It led to confusion for users: “I want to install Firefox, why is there no Firefox in the repos?”
      • em-bee 1152 days ago
        mozilla used their trademark to force debian to do this, because debian made changes to firefox that mozilla didn't approve of.

        that issue has since been resolved and official firefox is now again distributed with debian.

  • gsnedders 1152 days ago
    "The absence of official builds means that" [incomplete sentence]
    • woodruffw 1152 days ago
      Whoops, that was a transposition. Thanks, fixed.
  • qwerty456127 1151 days ago
    Why can't we just develop an LLVM backend to output C code?
  • hedora 1152 days ago
    I’ve noticed I can predict a lot about the quality of code an engineer produces by listening to their opinions on testing in strange environments.

    People interested in stamping out undefined / nondeterministic behavior / compiler warnings welcome bug reports from platforms that find new bugs.

    They also are more conservative about including dependencies, and their code is much, much more maintainable over time.

    • mlindner 1149 days ago
      > People interested in stamping out undefined / nondeterministic behavior / compiler warnings welcome bug reports from platforms that find new bugs.

      Why not use a language that doesn't have such problems in the first place? (Hint: any language that is not C or C++.)

    • alisonkisk 1152 days ago
      Someone dedicated to debugging complex software on strange environments probably writes no code at all, since they spend all their time investigating bugs.

      Your comment is rudely dismissive of people giving you the product of their labor for free.

      • asguy 1152 days ago
        Do you disagree with their personal experience? Does it not match yours?

        Their experience matches mine. It’s one of the reasons I appreciate projects like NetBSD and OpenBSD, which actively keep old hardware around. They find bugs by actually exercising their code base in unplanned ways (e.g. endian differences, alignment constraint differences).
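
        (A hypothetical example of the kind of bug such hardware surfaces: type-punning through a misaligned pointer is undefined behavior that x86 tends to tolerate, while SPARC or older ARM cores may trap on it. The function names here are made up for illustration.)

            #include <stdint.h>
            #include <string.h>

            /* Read a length field at byte offset 1 of a packet buffer. */

            /* Buggy: the cast yields a misaligned uint32_t pointer.
               Undefined behavior; x86 usually tolerates it, strict-
               alignment machines may fault outright. */
            uint32_t read_len_buggy(const uint8_t *buf) {
                return *(const uint32_t *)(buf + 1);
            }

            /* Portable: copy the bytes out, so alignment is no longer
               an issue (byte order is a separate question). */
            uint32_t read_len_portable(const uint8_t *buf) {
                uint32_t v;
                memcpy(&v, buf + 1, sizeof v);
                return v;
            }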

        • sgift 1152 days ago
          It doesn't match mine since there's usually far more work to do than time available to do it. Most projects have to heavily triage bug reports and "your program fails on my platform that no one has built in 30 years" has a very low rate of "helpful to anyone else but the bug reporter".
          • asguy 1152 days ago
            Do you never stop and think “why is our code base busted in this person’s environment?” I mean, it could just be that the bug filer hasn’t done their due diligence on upgrading (why won’t Win 3.11 play Crysis).

            It also could be that your code quality sucks and you’re writing something non-portable.

            • egil 1152 days ago
              But why would the code quality suck if features or maintenance of supported systems were prioritized ahead of arcane architectures? Time is a limited resource, so why should extreme portability be considered the holy grail?
              • asguy 1151 days ago
                Did anyone write "extreme" portability? You can pick a ton of random architectures (e.g. PowerPC) that aren't common, but aren't extreme. The bugs that they find can be useful, and I evaluate them when I get them.
      • Lvl999Noob 1152 days ago
        IMO, you both might be correct. A programmer welcoming of bug reports from other platforms and spending time solving them wouldn't write a lot of code, but the code they do write would probably be very maintainable (unless the fixes were completely different code paths for different platforms).
  • ncmncm 1152 days ago
    It seems to me that a little more attention to cross-compilation support would go a long way toward extending support to all the weird architectures.

    Really, running a language toolchain on the target machine, where the target machine isn't what everybody has, is largely pointless. We each have loads of machines to run toolchains on.

    Early in the life of a language, with a compiler meta-circularly written in it, the compiler is the biggest program in the language, so may seem like an effective early test of a port. But, it must be said, meta-circularity is, in large degree, wankage. There are much better choices to demonstrate the merits of your new language than coding its compiler; a compiler is not the persuasive demonstration of language merit it once was. Writing your language's compiler in an unstable and unreliable new language, making it subject to its own bugs and early inefficiencies, is a poor way to help a new and fragile user community up to speed.

    How did we get on this meta-circularity fetish, anyway? It's easy to see how it happened with C, but aping what C did is no formula for success anymore. C is, or once was, a pretty simple language to compile, so its compiler wouldn't have benefited much from being implemented in a more powerful language, had there been any. That cannot be said of modern languages, where all the help you can give yourself is barely enough to make a usable compiler. Notably, LLVM is coded in (a dialect of) C++, even the project's C compiler.

    A cross-toolchain, coded in C++, built to run on Linux amd64 and nowhere else, ought to be all you need to bring up full support for any language on any architecture. As we say about low Earth orbit, it puts you halfway to anywhere in the universe. Even where you don't have Linux, you have a VM you can run it in. Cross-development skills take some investment, but they pay forever after. The overwhelming majority of places you could run code nowadays can't meaningfully host a development environment anyway. Cross skills enable you to program the microcontrollers that manage just about everything in the world today, phones and HPC supercomputers alike.

    A cross-built program can not only be built on your desktop host machine; it can be debugged from there, running in an emulator or on the actual target hardware. The debugger runs on the host, with access to all the source code. A tiny fragment of code, what we call a “debug stub”, is inserted in the program or target environment; it implements the minimal primitives a debugger needs: mostly just peek, poke, and run, and maybe watchpoints if the target supports them. The debugger talks to the stub over a network port, or a serial cable, or by blinking lights if necessary.
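
    (To make "debug stub" concrete, a minimal sketch of such a command loop in C. The ch_* transport functions are assumptions standing in for whatever serial or network link the target provides; real stubs, like gdb's remote stubs, add framing and checksums on top.)

        #include <stdint.h>

        /* Assumed transport primitives, provided by the target. */
        extern int  ch_get(void);            /* read one byte from the link  */
        extern void ch_put(uint8_t b);       /* write one byte to the link   */
        extern uintptr_t ch_get_word(void);  /* read an address from the link */

        /* The stub's whole job: peek, poke, and run. */
        void stub_loop(void) {
            for (;;) {
                switch (ch_get()) {
                case 'm': {  /* peek: send back one byte of target memory */
                    const uint8_t *p = (const uint8_t *)ch_get_word();
                    ch_put(*p);
                    break;
                }
                case 'M': {  /* poke: overwrite one byte of target memory */
                    uint8_t *p = (uint8_t *)ch_get_word();
                    *p = (uint8_t)ch_get();
                    break;
                }
                case 'c':    /* continue: hand control back to the program */
                    return;
                }
            }
        }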

    Linux on amd64 didn't take over the world, with no marketing budget in sight, by being the best possible place to run programs. (It's just adequate, and better at it than Windows.) Linux got there by being the best possible place to develop software. Make it the best place to develop in your language, whatever the target, and you are halfway to anywhere.

  • alisonkisk 1152 days ago
    This is yet another example of why gift-economy members need to understand that you don't owe anyone support. If you publish as-is, thank you. If you promise to support the OS/hardware in your lab, thank you. If you accept patches from users with weird use-cases, thank you.

    If you get too much bug spam, you need to set up filters and auto replies and volunteer helpers to help you find the reports you care about.

    You don't owe anyone support.

  • nickysielicki 1152 days ago
    There’s a weird mental leap that rust evangelists, militant atheists, and hyper-progressives have in common. I like rust as much as the next guy but the zealotry is unbelievable.

    It’s almost getting to the point that I’m rooting for rust to fail and for C++ to just continue to get better, just so that these people who claim to take memory safety so seriously might shut the fuck up about it.

    The gentoo folks “complaining” about this have contributed more to LLVM (and, by extension, rust) than 95% of the people arguing that they’re in the wrong.

    edit: what I ultimately have an issue with is tearing down the past because we see a brighter future on the horizon. The baby is being thrown out with the bath water.

    • dang 1151 days ago
      "Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents."

      https://news.ycombinator.com/newsguidelines.html

    • steveklabnik 1152 days ago
      Rust has contributed a lot to LLVM, see here for a recent summary: https://twitter.com/pcwalton/status/1366058442276790274
      • cbmuser 1152 days ago
        What Rust needs is an alternative, gcc-based implementation similar to gccgo.

        This will solve the problem immediately and allow Rust to be used on a much greater variety of targets, including obscure targets such as Tricore, which is used in the automotive sector.

        Only if Rust code runs everywhere will it be deployed everywhere.

        • oivey 1152 days ago
          Rust is on LLVM for a reason: implementing new languages on top of GCC is hard. It seems much more reasonable for unusual architectures to contribute to LLVM. If companies want to sell their own architecture, they should provide the support themselves, rather than relying completely on free contributions from the OSS community.
        • steveklabnik 1152 days ago
          Yes, you and I have spoken about this a few times over the years :) Glad to see the m68k work is still ongoing.

          I too am pro getting Rust into gcc. We'll see how the effort goes.

        • josefx 1152 days ago
          And the moment that exists, most of the issues the author currently ascribes to C will also be true of Rust.
      • wizzwizz4 1152 days ago
        True, but most Rust zealots (including me, though I'm reformed) haven't contributed anything to LLVM.

        Most Rust developers aren't Rust zealots. It's largely people who are new to the language, I think.

        • steveklabnik 1152 days ago
          The original person said "evangelists" and then slid into "zealotry." And I find a lot of anti-Rust folks seem to think anyone who likes Rust is a "zealot." YMMV.

          I think that this is just a hard conversation to have, with a ton of different groups who all want different things, and have different incentives.

          • nickysielicki 1152 days ago
            I like rust. What I don't like is people who pretend that unsafe rust is any better than C++ or C.
            • jcranmer 1152 days ago
              But it is. Unlike C/C++, Rust does not have undefined signed integer overflow or strict aliasing rules. Furthermore, you can do a few more things with pointers that are undefined in C/C++ (e.g., implementing offsetof the naïve way in C/C++ is undefined behavior, but would not be in Rust [assuming you use raw references, which are still in the process of being added]).

              In general, there's a slice of behavior that's undefined behavior in C/C++ that isn't undefined behavior in Rust, and for people who want C to really be "portable assembly," Rust is arguably a slightly better choice as a result of being less likely to accidentally trip up on undefined behavior.
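
              (For instance, a minimal sketch of the naïve offsetof mentioned above, which compiles and usually "works" everywhere while being formally undefined in C:)

                  #include <stdio.h>

                  struct packet {
                      char tag;
                      int  value;
                  };

                  /* Undefined behavior: forms a member access through a
                     null pointer, even though only the address is taken.
                     Mainstream compilers tolerate it; the portable
                     spelling is offsetof() from <stddef.h>. */
                  #define NAIVE_OFFSETOF(type, member) \
                      ((unsigned long)&((type *)0)->member)

                  int main(void) {
                      printf("%lu\n", NAIVE_OFFSETOF(struct packet, value));
                      return 0;
                  }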

            • wizzwizz4 1152 days ago
              Unsafe Rust is harder to predict the behaviour of than C – at least, when you're doing completely off the wall stuff like re-using the stack in two threads. You can keep it contained, though, and so long as your unsafe Rust is keeping the language's invariants, the code is safe; therefore, you know where to start looking when there's trouble to be found.

              I think it depends on what you're trying to do.

    • staticassertion 1152 days ago
      > It’s almost getting to the point that I’m rooting for rust to fail and for C++ to just continue to get better, just so that these people who claim to take memory safety so seriously might shut the fuck up about it.

      lol idk kiiiinda sounds like you might be the one taking things too seriously?

    • ojnabieoot 1152 days ago
      I feel like the author is not being that zealous! More to the point, he is not really focusing on memory safety so much as his frustration with the nature of the complaints RE: Rust. Perhaps he is being pithy and snarky enough to hurt people’s feelings, but his points about C being “organizationally” unsafe are well-established and not controversial.

      And his broader point is one that C evangelists should take seriously - the fact that C can be compiled on all sorts of esoteric architecture does not mean that every open-source C program is supported on those architectures. There is a serious risk with using a crypto library on instruction sets the author isn’t supporting, and the responsibility is on the package distributor or consumer (depending), not the maintainer.

      • cbmuser 1152 days ago
        The point is: If Rust is supposed to replace as _the_ systems programming language, it needs to be as portable as C.

        There is no point in arguing what architectures are considered obscure and which are not since there a lot of fields of applications in industry and research which use architectures most people never heard of such as Elbrus 2000 or Sunway.

        • DasIch 1152 days ago
          C is more portable than Rust when you define portable as "it compiles". If you define portable to mean "works correctly" I expect C is about as portable as Rust in the context of most applications.
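
          (A small, hypothetical illustration of that distinction: a serializer like this compiles on any ISA, but the commented-out shortcut would silently emit different bytes on little- and big-endian machines.)

              #include <stdint.h>

              /* Serialize a 32-bit value into a wire format. */
              void put_u32(uint8_t out[4], uint32_t v) {
                  /* memcpy(out, &v, 4) would compile everywhere but
                     emit host byte order, so it only "works correctly"
                     on machines matching the wire format's endianness. */

                  /* Correct everywhere: pick the wire order explicitly. */
                  out[0] = (uint8_t)(v >> 24);
                  out[1] = (uint8_t)(v >> 16);
                  out[2] = (uint8_t)(v >> 8);
                  out[3] = (uint8_t)(v >> 0);
              }
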
          • WoodenChair 1152 days ago
            > C is more portable than Rust when you define portable as "it compiles". If you define portable to mean "works correctly" I expect C is about as portable as Rust in the context of most applications.

            I suspect you have a more narrow definition of “works correctly” than the C advocates do.

            • Jetrel 1152 days ago
              I do suspect that a lot of "our CI says this compiles on platform X" situations, for quite a few programs, are targeting arch/os combos that genuinely aren't getting tested, with only the occasional hobbyist poking at it every few months.

              It wouldn't surprise me at all if most of those just crash on launch - or crash when you try to do anything - even though the CI builds them just fine and they pass the automated tests.

              For an awful lot of boutique platforms, it's usually "literally one person" that drove the work to port it to that thing, and when that person's no longer actively doing the work, bitrot goes wild.

        • pornel 1152 days ago
          Rust doesn't need to replace 100% of C, just like C hasn't replaced 100% of Fortran or Pascal.
    • the_only_law 1152 days ago
      I didn’t think it was possible but the anti rust crowd managed to become even more obnoxious than the Rust evangelism strike force
      • dang 1151 days ago
        Please don't take HN threads further into flamewar. The perpetuation/escalation actually does more harm than the original post, which is why the site guidelines specifically ask you not to do this.

        https://news.ycombinator.com/newsguidelines.html

    • alisonkisk 1152 days ago
      Posted in wrong thread? This comment seems off topic.
      • nickysielicki 1152 days ago
        A great deal of this article is about C being a "cancer", "a perpetually unsafe development ecosystem", whose cancerous properties are enabled by the absolute horror of it being portable.

        The implied replacement is Rust.

        • woodruffw 1152 days ago
          > A great deal of this article is about C being a "cancer", "a perpetually unsafe development ecosystem", whose cancerous properties are enabled by the absolute horror of it being portable.

          Author here: my sibling already explained the language, but I specifically chose "cancer" because I remember seeing the UNIX-haters handbook use that phrase (I'm not exactly a UNIX hater, but it's always stuck with me). I write C and C++ professionally (and I like it that way!), and I think it's perfectly fair and accurate to refer to their spread as cancerous.

          > The implied replacement is Rust.

          No. Rust was an example. The replacement is any memory-safe language; Rust just happens to have done a great job getting much of the tooling right from the get-go.

          • timidger 1152 days ago
            That is not a good reason for choosing such charged language. You can get your point across without comparing a venerable (if flawed) programming language to one of the most feared classes of diseases, which kill millions each year.

            I think there are good points you make that people should hear, but fewer people will listen if you turn them away by making grotesque comparisons.

            • woodruffw 1152 days ago
              > You can get your point across without comparing a venerable (if flawed) programming language to one of the most feared classes of diseases, which kill millions each year.

              I'm going to be pedantic with this: calling something "a cancer" is not comparing it to cancer. The phrase "X is a cancer" is bombastic and inflammatory, which is intentional. It's neither a simile nor grotesque, at least in my dialect of English.

              As for the actual point: I see no reason to venerate C. Nobody should labor under the false pretense that we, as an industry, are brilliant enough to reliably write safe, cross-platform C. We haven't managed to do it for the last 50 years and, given the current state of static analysis on C, I don't have any particular hope for the next 50. I'm going to keep on writing it, but I don't intend to venerate it or convey that expectation on anyone else.

              • Jetrel 1152 days ago
                > As for the actual point: I see no reason to venerate C. Nobody should labor under the false pretense that we, as an industry, are brilliant enough to reliably write safe, cross-platform C. We haven't managed to do it for the last 50 years and, given the current state of static analysis on C, I don't have any particular hope for the next 50. I'm going to keep on writing it, but I don't intend to venerate it or convey that expectation on anyone else.

                This absolutely hits the nail on the head. This is masterfully put.

                The problem with a great many successful things in the world is that there's a very human tendency to "saint" them and attribute a sort of mystical infallibility/ineffability to them. As though they weren't just "fit for the purpose and in the right place at the right time", but rather "a work of genius - the right solution for all time, now and forever." (Cf. the "end of history" fallacy.)

                That's how technology stagnates - if we, as a community, can't admit something's got room for improvement, it simply won't.

        • oivey 1152 days ago
          At least part of the point was that C only appears to be portable. The abstract machine concept itself is leaky, lesser used compilers have serious bugs, build scripts aren’t portable across architectures, dependency management is poor, etc.
          • astrange 1152 days ago
            C programs would be a lot more safely portable if there were a pervasive testing culture, preferably with some new language features, so that e.g. you could see that a program doesn't actually work without 64-bit pointers.

            (Without language features it's annoying to write tests because you end up exporting things that don't otherwise need exporting.)
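
            (C11 already has one such feature; a minimal sketch of turning the 64-bit-pointer assumption into a build-time failure instead of a runtime surprise:)

                #include <assert.h>   /* static_assert (C11) */

                /* Fail the build on any platform where this program's
                   64-bit-pointer assumption does not hold. */
                static_assert(sizeof(void *) == 8,
                              "this code assumes 64-bit pointers");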

        • alisonkisk 1152 days ago
          It's a "cancer" in the sense that it spreads everywhere and people use C software even in environments where it was never intended to work, and get hurt by it.
  • 0xdeadfeed 1152 days ago

        But C, cancer that it is, finds its way onto every architecture
    
    I stopped reading here. If you can’t be objective about a topic, don’t bother posting it on a neutral platform.
    • saagarjha 1152 days ago
      This was clearly not meant to be a value judgment.
  • acmj 1152 days ago
    If I were the maintainer, I would create cryptography2 and put the original cryptography into bugfix mode. It is time to move to a safer language, but breaking backward compatibility is always bad.
    • AlphaSite 1152 days ago
      It’s continued to support the architectures it promises to support, and it’s API-compatible.

      I don’t think making a change that breaks a third-party, out-of-tree port is breaking backwards compatibility; this feels more like someone depending on internal implementation details.

    • epage 1152 days ago
      What is breaking backward compatibility? Using new python features? Using new C features? Bug fixes?

      To have any sanity as a maintainer, you have to draw the line somewhere.

  • panny 1152 days ago
    >outside of hobbyists playing with weird architectures for fun

    These are also the people most likely to be able to remove the dependency on your project, making it less relevant. That's what happened with Gentoo. They didn't bother forking. They found an alternative, and python-cryptography is now dead to them.

    This doesn't even get into what a bad idea it is to build crypto code with mystery binaries like rustc in the first place. Yes, you can bootstrap Rust from source if you are really brave/stupid. Nearly nobody does it, though, because Rust seems purpose-built to make you dependent on the official binaries: building Rust version N requires building versions 1 through N, and the latest version is something like 50, which makes bootstrapping absurd.

    Memory safety is nice, but Java has been around for decades, and memory safety didn't solve everyone's problems. People still use C for good reasons. Breaking distros won't win Rust lots of friends.