OpenSSL Security Advisory

(openssl.org)

138 points | by arkadiyt 43 days ago

10 comments

  • mattwilsonn888 43 days ago
    I'm not a low-level expert, but I keep seeing mention of bugs related to memory access which have potentially severe security implications, and I wonder what benefit of whatever C language dialect is in use could be worth this type of risk.

    If Rust or Haskell really make these issues far less prevalent, why not use them? Genuine question, I know nothing is so simple.

    • chasil 43 days ago
      One big reason that I know of is patents.

      Elliptic curve was a minefield of patents when Sun Microsystems carefully crafted the OpenSSL implementation to avoid any patent infringement. The resulting source code was vetted by their attorneys.

      If any of those patents are still in force, then a naive implementation could infringe.

      "Sun Microsystems has donated ECC code to OpenSSL and the Network Security Services (NSS) library..."

      https://lwn.net/Articles/174784/

      https://en.m.wikipedia.org/wiki/ECC_patents

      A lot more attention was likely paid to this code than to heartbleed.

      • mattwilsonn888 43 days ago
        Wow, I wasn't aware of that. It's funny to hear and agree with people proclaiming "privacy is a right" while the basic, fundamental technology behind it is made hairy to implement.
        • chasil 43 days ago
          I found out about that a few years ago.

          If you run "openssl ecparam -list_curves" on a RHEL clone, you will only see p-256, p-384, and e-521 (and e-521 was only added in v7).

          If you build libressl and run the same command, there are dozens (and Canada doesn't allow software patents, so it's legal there).

          On OpenBSD 7.1, I see this result:

            $ openssl ecparam -list_curves | wc -l
            102
          
          Note the 2 Oakley curves take up 8 lines. There are also 17 matches in this output for NIST curves.
      • tptacek 43 days ago
        OpenSSL is the reference implementation of TLS, so when teams propose new TLS features (like the heartbeat extension), they tend to add them to OpenSSL. At the time of Heartbleed, OpenSSL got nothing resembling the attention it does today, so it wasn't insane to think that some random proof-of-concept code might find its way in.
    • jart 43 days ago
      Really? Are we really going to turn this into a Rust vs. C flamewar when the code in question is most likely assembly generated by Perl? https://github.com/openssl/openssl/blob/2e3e9b4887b5077b949c... In the recent release they changed a `jb` to `jbe` in that file, which could be related. It's hard to tell what code is actually the culprit. They had a similar Perl file, which appeared to be for fused multiply-add AVX-512, that got removed at some point, possibly with a C rewrite, so you could be somewhat right. Either way, there really should be more transparency about how what's written in release notes matches up with the code that actually changes between tags.
      • yjftsjthsd-h 43 days ago
        ... if that summary is materially accurate, then it would make the first occasion on which I would genuinely believe that rewriting a code base purely in C was a security improvement.
      • mattwilsonn888 43 days ago
        Ironically this is the most inflammatory response I've seen thus far - not like being emotional makes something more likely to be incorrect anyways...
      • dc-programmer 43 days ago
        This is horrifying information. I was only aware of the C code that is 99% macros
        • staticassertion 43 days ago
          It's not extremely uncommon to find "assembly pushed together by perl" tbh
          • dc-programmer 43 days ago
            I believe it. But it makes source code auditing and static analysis incredibly difficult.
    • ggm 43 days ago
      You're not wrong, but OpenSSL represents "API/ABI" dependency. People coded to this. OpenSSL is also "s/w by accretion" as people added and removed things over time.

      The original core, SSLeay, was an exemplary instance of work from outside the core cryptographic community (Eric was, IIRC, working as a systems programmer in a psychology department at UQ), and it was hand-coded to be both algorithmically faithful to export restrictions (ITAR) and full of machine-code optimisations: it was FAST. The code had to implement both the export-restricted braindead reduced key lengths and the "illegal to export" algorithms.

      Peter Gutmann's library was known in some ways to be "cleaner", but it didn't gain traction.

      A lot of OpenSSL is history.

      Recoding in a type-safe language, and with a mind to risks, is good. But remember, another attack pattern is differential analysis: crypto code has to do things like present an equal CPU cost across different paths, to defeat attacks including the side-channel of finding hotspots in the VLSI mask and working out what the chip does from the information leaking there.

      It's complicated. Rust or Haskell alone won't make something like OpenSSL inherently risk-free, and either may introduce new risks while closing off these ones.

      Still worth discussing. Just not necessarily a no-brainer.

      • zgs 43 days ago
        Most problems I've seen have been in the way OpenSSL is used, not in OpenSSL itself. Sure, OpenSSL has issues, but naïve use of cryptography is *far* more prevalent.
    • tialaramex 43 days ago
      Beyond the simple matter of Rust being much newer than OpenSSL, one concern for some cryptographic primitives is the timing side-channel.

      https://en.wikipedia.org/wiki/Timing_attack

      In high level languages like Rust, the compiler does not prioritise trying to emit machine code which executes in constant time for all inputs. OpenSSL has implementations for some primitives which are known to be constant time, which can be important.

      One option if you're working with Rust anyway would be use something like Ring:

      https://github.com/briansmith/ring

      Ring's primitives are taken from BoringSSL, Google's fork of OpenSSL; they're a mix of C and assembly language. It's possible (though fraught) to write some constant-time algorithms in C if you know which compiler will be used, and of course it's possible (if you read the performance manuals carefully) to write constant-time assembly in many cases.

      In the C / assembly language code of course you do not have any safety benefits.

      It can certainly make sense to do this very tricky primitive stuff in dangerous C or assembly, but then write all the higher-level stuff in Rust, and that's the sort of thing Ring is intended for. BoringSSL for example includes code to do X.509 parsing and signature validation in C, but those things aren't timing-sensitive (a timing attack on my X.509 parsing tells you nothing of value), and parsing is complicated to do correctly, so Rust could make sense there.

      • SAI_Peregrinus 43 days ago
        C does not make any guarantees about timing. It's not considered "observable behavior" by the standard.

        The usual way to create constant-time code for C is to inspect the output assembly for a number of (compiler, options, host system, target system) tuples and verify that it will take constant time on all of them. Even that isn't enough in general, since there are other side-channels. The "Hertzbleed" attack exploits variable execution time in "constant time" code due to CPU dynamic frequency scaling being dependent on the input (secret) data. That effectively means that power side-channels are remotely observable.
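        To make the "inspect the assembly" target concrete, here's a minimal sketch (my own illustration, not OpenSSL code) of the branch-free style such C is written in: differences are OR-accumulated so the loop always runs all n iterations instead of returning early at the first mismatch.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch, not OpenSSL code: compare n bytes without
 * data-dependent branches. An early-exit memcmp would leak the position
 * of the first mismatch through timing; here every byte is always
 * touched. Even this can be undone by an optimizing compiler, which is
 * exactly why the generated assembly has to be inspected. */
int ct_memeq(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    /* Collapse to 0/1 without branching on the secret-derived value:
     * diff == 0 underflows to all-ones, so bit 8's neighborhood is set. */
    return (int)(1u & (((uint32_t)diff - 1u) >> 8));
}
```

        (Returns 1 when the buffers are equal, 0 otherwise.)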

        • tialaramex 43 days ago
          > The usual way to create constant-time code for C is to inspect the output assembly

          Sure, I hoped that sort of thing was implicit in what I wrote. Some people do it; perhaps they should not, but they clearly feel it's their best option. In particular for this context: writing this code in Rust doesn't help and would usually make it harder.

          If we don't want to hand roll machine code, maybe somebody should make yet another "it's C but for the 21st century" language with constant time output as a deliberate feature, like maybe the const flag on your functions means produce constant time machine code or error - rather than "You can execute this function at compile time". (Not necessarily a serious syntactic suggestion, just spit-balling).

          • SAI_Peregrinus 43 days ago
            The problem is that ISAs don't support any sort of side-channel resistance mode, so even hand-rolling assembly (or machine code) won't fix every possible leak. If such a mode could be added, then any language could add appropriate intrinsics to set it.

            More likely is that cryptography-specific instructions (like AES-NI or ARM's SHA hash instructions) will get added for more relevant operations.

      • zaarn 43 days ago
        You can write assembly in Rust. The point would then be to lock down the usage around it so that you can only use it safely. Of course that won't stop all issues, but at least you know which parts are risky and which aren't. There is also Miri, which lets you check for Rust code that behaves in a memory-unsafe way.

        And it's also quite possible to write timing-resistant code in Rust. Rust is not as high-level as people think; it lets you get right down to the machine level with no issue.

      • jerry1979 43 days ago
        What do you think of crates like subtle[1], which bills itself as "Pure-Rust traits and utilities for constant-time cryptographic implementations"?

        [1] https://crates.io/crates/subtle/

        • jcranmer 43 days ago
          Speaking as a compiler developer: compilers for standard languages make no attempt to guarantee constant-time properties, nor to provide any primitives that can be used to implement constant-time guarantees. Indeed, the developers are likely to be moderately hostile to proposals to add such primitives; it's a pretty different beast once you're worrying about constant time.

          As such, if you're using a standard compiler to implement a constant-time guarantee, you need to verify the resulting assembly to make sure it actually is constant-time. If you're not doing that, your constant-time guarantee is not worth the paper it's printed on. Even if it's not printed on any paper.

          • jart 43 days ago
            Speaking as a compiler user, how much longer until we can expect better support for Mixed Boolean Arithmetic (MBA) simplification from compilers? For example, it's problematic that GCC and Clang aren't able to tell that `((((x ^ y) | (~(x ^ y) + 1)) >> 63) - 1)` is equivalent to `x==y` or that `(((x ^ ((x ^ y) | ((x - y) ^ y))) >> 63) - 1)` is equivalent to `x>=y`. We need MBA simplification because the algebra engines don't support it and malware authors frequently use it to obfuscate programs. But if we could plug it into a C compiler that shows us in assembly what the code is actually doing, then it would help a lot with software analysis.
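            For anyone who wants to sanity-check those two identities, here they are as C helpers over unsigned 64-bit operands, where each expression produces an all-ones/zero mask rather than a 0/1 boolean:

```c
#include <stdint.h>

/* The two mixed boolean-arithmetic expressions from the comment above,
 * written as mask producers: all ones when the predicate holds, zero
 * otherwise. Note ~d + 1 is just -d in two's complement, and >> 63
 * extracts the sign/borrow bit of a 64-bit value. */
uint64_t mba_eq(uint64_t x, uint64_t y)
{
    return (((x ^ y) | (~(x ^ y) + 1)) >> 63) - 1;      /* ~0 iff x == y */
}

uint64_t mba_ge(uint64_t x, uint64_t y)
{
    return ((x ^ ((x ^ y) | ((x - y) ^ y))) >> 63) - 1; /* ~0 iff x >= y */
}
```

            Checking these over edge cases (0, values with the top bit set, ~0) is an easy smoke test; proving the equivalence in general is exactly the simplification problem the comment describes.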
            • zgs 41 days ago
              Compilers that did this would break a lot of the constant time code.
    • adrian_b 43 days ago
      This memory corruption bug, which is specific to Cannon Lake or newer CPUs with AVX-512, does not appear to have any relationship with the high-level programming language used for OpenSSL.

      The bug is either in an assembly language sequence, or at most it can be due to incorrect use of compiler intrinsics.

      Changing the programming language cannot eliminate such bugs. Only a much cleverer compiler, able to use all of the CPU's instructions efficiently (removing the need for assembly language), or a much higher-level kind of assembly language, could help against such bugs.

      A more practical method would be to always use extensive fuzzing tests for all such functions written in assembly language.
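      A common shape for such tests is differential fuzzing: feed the optimized routine and an obviously-correct reference the same random inputs and compare outputs. A minimal sketch (the function names and the XOR example are placeholders, not OpenSSL's actual routines):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Reference: obviously-correct byte-at-a-time XOR. */
void xor_ref(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] ^ b[i];
}

/* Stand-in for an optimized (e.g. assembly) routine: 8 bytes at a
 * time, then a scalar tail. Tail handling is where such routines often
 * go wrong, which random lengths will exercise. */
void xor_fast(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
{
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        uint64_t wa, wb;
        memcpy(&wa, a + i, 8);
        memcpy(&wb, b + i, 8);
        wa ^= wb;
        memcpy(dst + i, &wa, 8);
    }
    for (; i < n; i++)
        dst[i] = a[i] ^ b[i];
}

/* Differential loop: random lengths and contents; returns the number
 * of mismatches found (0 means the implementations agreed). */
int fuzz_xor(unsigned iters, unsigned seed)
{
    uint8_t a[256], b[256], r1[256], r2[256];
    int mismatches = 0;
    srand(seed);
    for (unsigned it = 0; it < iters; it++) {
        size_t n = (size_t)(rand() % 256);
        for (size_t i = 0; i < n; i++) {
            a[i] = (uint8_t)rand();
            b[i] = (uint8_t)rand();
        }
        xor_ref(r1, a, b, n);
        xor_fast(r2, a, b, n);
        if (n > 0 && memcmp(r1, r2, n) != 0)
            mismatches++;
    }
    return mismatches;
}
```

      A real harness would use a coverage-guided fuzzer (libFuzzer, AFL) and, crucially for a bug like this one, run on the actual target CPUs, since the faulty code path only exists on AVX-512 hardware.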

    • barsonme 43 days ago
      Rewriting something like OpenSSL isn't exactly trivial, for many of the same reasons rewriting any large, widely used software library (especially one written in C) isn't trivial. Except in this case you need cryptography experts who are familiar with Rust (or whatever language) to review the code.

      As a developer you should prefer libraries written in safer languages (Rust, Go, etc.). But that's not always possible given business/environment/etc. constraints.

    • mlinksva 43 days ago
      It takes time and resources, but Rust is being used, e.g., https://www.memorysafety.org/initiative/rustls/ which has been being worked on at least 6 years https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... ... I'd hope and guess that resources are increasing, especially as governments realize that software security is their problem too.
    • staticassertion 43 days ago
      Rust is newish, so it just wasn't an option. Haskell carries a heavy runtime around so it's not suitable for a lot of environments, nor is it easy (possible?) to package as a dynamic library.

      But yeah, for new projects, choose memory safe languages.

    • jmull 43 days ago
      What are you suggesting, though?

      Rewrite it in Rust? That might be worth it in the long run, but it would be a considerable effort that would likely take a long time to become a feasible replacement for openssl. E.g., it seems likely to me that it would suffer from more bugs until it reached a certain level of maturity.

      It would take a focused, long-term, sustained effort by experts to achieve and would still have many ways it could fail.

      I would guess we'll get a chance to find out if this could work, though. I think someone must be attempting this already.

      Of course, then the next level of "safe" language will come out, with more guarantees, and we'll think about rewriting to that.

      Personally, I think a more productive, much shorter path, would be to use a safe layer on top of the existing C language, and port the existing openssl to that.

    • dcow 43 days ago
      There are native TLS implementations in pure Rust and Haskell. They work great, use constant time “subtle” crypto, and are used regularly in the respective communities. OpenSSL is the C library and that’s not likely to change since people still write C.
      • yjftsjthsd-h 43 days ago
        I was given to understand that one of rust's selling points was the ability to "export" C compatible libraries. Can we "just" backport a crypto library that exposes a mostly openssl-like API+ABI?
        • rapsey 43 days ago
          There is a library that provides the openssl C API on top of rustls. I can't find it at the moment however.
    • nly 43 days ago
      There are alternatives to OpenSSL written in Rust but, like it or not, OpenSSL already set the de facto standard API used by portable applications.

      Even alternatives in C or C++ don't get a look in. GnuTLS? Mozilla NSS? Libtls out of openbsd? Libretls port? Nobody cares.

      Even though the OpenSSL API is terrible, nobody wants to support multiple TLS backends in their application. Particularly if they cross platforms.

      This is one reason I personally believe standardizing good APIs is more important than implementation.

  • NateLawson 43 days ago
    Here's a good summary of the flaw:

    https://guidovranken.com/2022/06/27/notes-on-openssl-remote-...

    Note that the bug is only in 3.0.4, which was released June 21, 2022. So if you didn't update to this version, it's unlikely you're vulnerable.

    • Arnavion 43 days ago
      You're talking about the first CVE. The second one affects 1.1 too.

      Thankfully I can't imagine anyone using AES-OCB.

      • keithwinstein 43 days ago
        Mosh uses AES-OCB (and has since 2011). We found this bug when we tried to switch over to the OpenSSL implementation (away from our own ocb.cc, taken from the original authors) and Launchpad ran it through our CI testsuite as part of the Mosh dev PPA build for i686 Ubuntu. (It wasn't caught by GitHub Actions because it only happens on 32-bit x86.) See https://github.com/mobile-shell/mosh/issues/1174 for more.

        So I would say (a) OCB is widely used, at least by the ~million Mosh users on various platforms, and (b) this episode somewhat reinforces my (perhaps already overweight) paranoia about depending on other people's code, and about the blast radius of even well-meaning pull requests. (We really wanted to switch over to the OpenSSL implementation rather than shipping our own, in part because ours depended on some OpenSSL AES primitives that OpenSSL recently deprecated for external users.)

        Maybe one lesson here is that many people believe in the benefits of unit tests for their own code, but we're not as thorough or experienced in writing acceptance tests for our dependencies.

        Mosh got lucky this time that we had pretty good tests that exercised the library enough to find this bug, and we run them as part of the package build, but it's not that farfetched to imagine that we might have users on a platform that we don't build a package for (and therefore don't run our testsuite on).

      • mwint 43 days ago
        As a non-crypto-nerd: How viable is it to make a “safe” OpenSSL, which just doesn’t support all the cipher modes (?) that the HN crowd would mock me for accidentally using?
      • kiririn 43 days ago
        Tell that to Mumble! OCB is still one of the fastest / ‘best’ encryption from algorithm perspective, if you can ignore the patents
      • yuhong 43 days ago
        The patents have recently expired.
    • SoftTalker 43 days ago
      From that link:

      "BoringSSL, LibreSSL and the OpenSSL 1.1.1 branch are not affected. Furthermore, only x64 systems with AVX512 support are affected."

      • josteink 43 days ago
        Also from that link:

        > the vulnerability has only existed for a week (HB existed for years) and an AVX512-capable CPU is required.

        So I'm guessing the real world impact here is near zero?

        What systems or distros are shipping this week old version already?

  • userbinator 43 days ago
    This is a good example of the balance between using software so new it contains insecurities, and so old that it contains insecurities. It sounds like the bug was introduced in a release less than 2 weeks ago. Personally, I prefer the known unknowns more than the unknown unknowns.

    As for the AES OCB bug, it sounds like something that's effectively not used at all in practice, which might explain why it's stayed unnoticed for so long.

    • gerdesj 43 days ago
      Quite.

      I tend to err on the side of patching often and worrying about fallout afterwards. All the software vendors I deal with (MS, Canonical, Arch, Gentoo, Debian, RPi, Novell err SuSE etc) do a decent job.

      Fixing something like dialogue boxes going weird is one thing. Faking a kicking out of a bunch of Russians out of your honeypots is another thing.

      (lol etc)

    • staticassertion 43 days ago
      It's a good example of C forcing impossible tradeoffs. "Either use software that's old and has known bugs or update and get all of the new ones".
  • Sirened 43 days ago
    Intel, in a shocking move, has preemptively patched this vulnerability in silicon by deprecating AVX512 months ago :P
  • smegsicle 43 days ago
    i heard everyone is calling it AVXECUTIONER

    > Note that on a vulnerable machine, proper testing of OpenSSL would fail and should be noticed before deployment.

    so is 'proper testing' included in the default build script or..?

    • userbinator 43 days ago
      It's been a while since I've looked at OpenSSL source, or read its compilation instructions in particular, but I remember it tells you to "make test" or similar to run the tests before installing.
      • djbusby 43 days ago
        Even on systems like Gentoo, which build everything, the make test option isn't the default.
    • daenney 43 days ago
      This statement really rubs me the wrong way.

      It sounds an awful lot like “you’re responsible for catching our screw-ups” and it’s a bit rich to tell people to do proper testing while the project itself failed to do so before letting this land.

      • zgs 43 days ago
        Not at all.

        To be vulnerable, you need to build on a non-vulnerable machine (where the built-in tests pass), then deploy to a vulnerable one, and finally fail to verify that the deployment works.

        Absolutely not what you are implying.

        • daenney 43 days ago
          I agree about the chain, but it's incredibly easy for this to happen given how most things are distributed as binary packages. Most folks won't be running the test suites on the end system.
        • enkrs 43 days ago
          Wouldn’t building on a non-vulnerable system also compile the binary with non-vulnerable instructions? Does a non-AVX512 system really build an executable that calls AVX512?
          • zaarn 43 days ago
            OpenSSL does a lot of runtime detection of what a system is capable of, so I would suspect this can happen.
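            A sketch of that dispatch pattern (illustrative only; OpenSSL's actual mechanism is its own capability vector, OPENSSL_ia32cap, not this compiler builtin). The binary contains every code path, and the choice happens on the machine that runs it, not the one that built it:

```c
#include <stdio.h>

void work_scalar(void) { puts("scalar path"); }
void work_avx512(void) { puts("AVX-512 path"); }

/* Pick an implementation at run time based on the CPU actually present.
 * This is why an executable built on a non-AVX512 machine can still
 * take the AVX-512 path when deployed to a newer CPU. */
void (*select_impl(void))(void)
{
#if defined(__GNUC__) && defined(__x86_64__)
    if (__builtin_cpu_supports("avx512f"))
        return work_avx512;
#endif
    return work_scalar;
}
```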
    • cperciva 43 days ago
      I don't know if it's in the default build script, but it doesn't really matter -- most people install precompiled binaries.
      • aeyes 43 days ago
        So nobody running a precompiled binary of their favorite Linux distribution should be affected because distributors should run the full test suite across all supported architectures when they package the binary?

        Debian for example shipped vulnerable packages: https://security-tracker.debian.org/tracker/CVE-2022-2274

        • cperciva 43 days ago
          It's one thing to run the full test suite across all supported architectures. It's quite another to run the full test suite across all supported CPUs. The vast majority of x86-64 CPUs do not trigger this bug.
        • yabones 43 days ago
          Should be noted that the only version marked as vulnerable, "Bookworm", is the "testing" version that has not been officially released yet and has no "security policy" other than best-effort. Its purpose is for testing the next stable release, not for everyday use. Vulnerabilities in the stable or even oldstable releases are fixed much faster and tested much more thoroughly.
          • smegsicle 43 days ago
            still seems like they should run build testing as part of that 'best-effort' (assuming that's what is meant by the advisory's 'proper testing')
        • civil_engineer 43 days ago
          What the heck, Debian?
          • smegsicle 43 days ago
            "openssl security team accuses debian of not performing proper testing"
    • zgs 43 days ago
      If you build on the machine you will deploy to, the tests won't pass. Problem solved.

      If you build on an earlier machine where the tests pass and deploy to a later one and then don't check that the deployment works, you are at risk.

      I think proper testing covers both options.

  • kissgyorgy 43 days ago
    If you are using the Python "cryptography" library, make sure you don't have version 37.0.3, as it's compiled against the vulnerable OpenSSL 3.0.4.
  • baby 43 days ago
    And that's why I would not use OpenSSL in a secure project
  • egberts1 43 days ago
    Whew! I only allow Cha-Cha algo.
    • chasil 43 days ago
      If a CPU implements the AES-NI (or equivalent) machine instructions, then AES128-GCM will be faster, and it's available in TLSv1.3.

      If you value security, then I'd prefer chacha20-poly1305. If you need speed, then use what your CPU gives you.

      https://soatok.blog/2020/05/13/why-aes-gcm-sucks/

      https://lwn.net/Articles/681616/

      "The GCM slide provides a list of pros and cons to using GCM, none of which seem like a terribly big deal, but misses out the single biggest, indeed killer failure of the whole mode, the fact that if you for some reason fail to increment the counter, you're sending what's effectively plaintext (it's recoverable with a simple XOR). It's an incredibly brittle mode, the equivalent of the historically frighteningly misuse-prone RC4, and one I won't touch with a barge pole because you're one single machine instruction away from a catastrophic failure of the whole cryptosystem, or one single IV reuse away from the same."
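      That failure mode is easy to demonstrate with a toy keystream (a stand-in for GCM's CTR output, not real AES-GCM): if the counter isn't incremented, two messages see the same keystream, and XORing the two ciphertexts cancels it out completely.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy XOR "cipher" standing in for the CTR keystream inside GCM.
 * Not real crypto; it only illustrates the brittleness in the quote. */
void xor_stream(uint8_t *out, const uint8_t *in, const uint8_t *ks, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] ^ ks[i];
}

/* Encrypt two plaintexts under the SAME keystream (the counter-reuse
 * bug) and check that c1 ^ c2 == p1 ^ p2, i.e. the encryption has
 * cancelled out and an observer sees plaintext relationships directly.
 * Returns 1 if the leak is observed. */
int keystream_reuse_leaks(void)
{
    const uint8_t p1[8] = "ATTACK!";
    const uint8_t p2[8] = "RETREAT";
    const uint8_t ks[8] = { 0x13, 0x37, 0xc0, 0xde, 0xba, 0xbe, 0x42, 0x99 };
    uint8_t c1[8], c2[8];

    xor_stream(c1, p1, ks, 8);
    xor_stream(c2, p2, ks, 8);  /* BUG: keystream reused */

    for (size_t i = 0; i < 8; i++)
        if ((c1[i] ^ c2[i]) != (p1[i] ^ p2[i]))
            return 0;
    return 1;
}
```

      With a fresh keystream per message, the XOR of two ciphertexts reveals nothing; one missed counter increment is the entire distance between the two situations.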
