An Almost-Secret Algorithm Researchers Used to Break Thousands of RSA Keys

(algorithmsoup.wordpress.com)

420 points | by williamkuszmaul 1899 days ago

16 comments

  • schoen 1899 days ago
    This describes research published in 2012 by Arjen Lenstra et al. (https://eprint.iacr.org/2012/064.pdf), which relied on a scalable n-way GCD algorithm that Lenstra's team thought best not to explain to readers (in the hope that the attack wouldn't be quickly replicated for malicious purposes). Coincidentally, another team (Nadia Heninger et al., https://factorable.net/) published extremely similar research in a similar timeframe, without withholding details of that team's GCD calculation approach.

    The Heninger et al. paper explains quite a lot about where the underlying problems came from, most often inadequately seeded PRNGs in embedded devices. As the linked article mentions, other subsequent papers have also analyzed variants of this technique and so there's not much secret left about it.

    If people are interested in learning about the impact of common factors on the security of RSA, I created a puzzle based on this which you can try out, which also includes an explanation of the issue for programmers who are less familiar with the mathematical context: http://www.loyalty.org/~schoen/rsa/

    Notably, my puzzle uses a small set of keys so you can do "easy" pairwise GCD calculations rather than needing an efficient n-way algorithm as described here (which becomes increasingly relevant as the number of keys in question grows).
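The pairwise approach described above can be sketched in a few lines of Python; the function name `shared_factors` is illustrative, not part of the puzzle:

```python
from itertools import combinations
from math import gcd

def shared_factors(moduli):
    """Check every pair of RSA moduli for a common prime factor.

    O(n^2) GCD computations -- fine for a puzzle-sized key set, but far
    too slow for millions of keys, which is where the efficient n-way
    batch algorithm becomes necessary.
    """
    hits = {}
    for i, j in combinations(range(len(moduli)), 2):
        g = gcd(moduli[i], moduli[j])
        if g > 1:
            # g is a prime shared by both moduli; the cofactors
            # moduli[i] // g and moduli[j] // g complete both factorizations.
            hits[(i, j)] = g
    return hits
```

Once a shared prime g is found, each affected modulus N factors as g * (N // g), which is enough to reconstruct the private key.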

    • ikeyany 1899 days ago
      I attempted to do your puzzle but the link was blocked by my employer's firewall:

      > URL: http://www.loyalty.org/~schoen/rsa/

      > Block reason: Violence/Hate/Racism

      So... what exactly is loyalty.org all about?

      • casefields 1899 days ago
        "www.loyalty.org itself is the server for the web site of Californians for Academic Freedom, the group I founded to oppose the California loyalty oath, which is still a non-negotiable requirement for anyone who wants to work for the State of California -- including student employees of the University of California. "

        http://www.loyalty.org/~schoen/

        That's the most controversial thing about the page. In my view it's not a big deal to have that stance but some people don't like others who rock the boat when it comes to the status quo.

      • schoen 1899 days ago
        I don't know why my web site was categorized that way by your employer's firewall.
        • ikeyany 1899 days ago
          Thanks, I'll look into that.
          • joering2 1899 days ago
            Could you tell me the brand? I would like to test a few sites that I believe are much worse than loyalty.org.
      • dunham 1899 days ago
        I'm guessing it's a false positive from using an IP similar to a sketchy site. It looks like the IP points to Linode.

        For what it's worth, I enjoyed the puzzle when I completed it three years ago: a practical exercise in cracking RSA keys, followed by a small classic code-breaking puzzle.

      • inciampati 1899 days ago
        I really doubt that schoen's page is any issue. Perhaps you can get to it via the Wayback Machine.
        • semi-extrinsic 1899 days ago
          My experience with these firewalls/blacklists is that they have quite a few random false positives. Requests to fix them one by one seem to be a recurring theme on our company chat.
          • ndnxhs 1898 days ago
            I would leave a company that used an internet blocker. They constantly get in the way of real work and benefit no one. The only valid use case I see is attempting to stop idiots downloading malware and installing it on company computers
            • semi-extrinsic 1898 days ago
              In more "traditional" companies that have software dev departments, they are extremely common in my experience, so switching companies wouldn't help much if you're in such a field.

              If 90% of the company employees only use IE and MS Word and are actually likely to install malware, and the last 10% are software engineers/data analysts/numerical simulation people, the latter are gonna have to work within/around these types of constraints.

              • ndnxhs 1898 days ago
                Part of the reason I don't work at such places. I wouldn't want to be somewhere where I'm treated like an idiot.
                • semi-extrinsic 1898 days ago
                  I guess we all find different optima. To me it's not really being treated like an idiot, it's "we need to have a policy that takes into account that most of the people are idiots, you smart guys work around it if necessary".

                  I don't know where you work, but off-hand I would postulate that working at "such places" also has advantages over where you work (e.g. better work-life balance, living in an area where property prices are not insane, etc.). As I said, we find different optima in these tradeoffs.

          • michaelmrose 1899 days ago
            On a personal device it's reasonable to use a VPN.
            • acct1771 1898 days ago
              On a personal device, it's reasonable to use LTE.
              • michaelmrose 1898 days ago
                Not always feasible. Not everyone has the cash to spend on data or lives in an area with good wireless coverage.
            • semi-extrinsic 1898 days ago
              It might be the case that some people employ VPN, or SOCKS proxies over ssh to their home computer, or similar schemes. I wouldn't know.
      • KwanEsq 1899 days ago
        Maybe a particularly simplistic auto-categoriser mistook the few eighty-eights (88s) on the root page[0] for their use as Nazi symbolism[1]?

        [0] http://www.loyalty.org/

        [1] https://en.wikipedia.org/w/index.php?title=88_(number)&oldid...

        • kragen 1898 days ago
          For context, Seth Schoen is not only very tolerant, one of the least racist USAns I know, but also Jewish.
      • croh 1899 days ago
        Mine too, interesting. Which firewall does your employer use? Here it is SonicWall.
      • gtsteve 1899 days ago
        It appears to be a personal website; I believe it is miscategorised.
    • swish_bob 1898 days ago
      Nice puzzle. Feels like a good exercise for stretching the brain.
    • lurkertroll 1899 days ago
      This website does a great job explaining how encryption works and how it's vulnerable to bad RNGs. Well done.
  • phw 1899 days ago
    This idea has since been applied to several other domains. Last year we had a look at all archived RSA keys of Tor relays: https://nymity.ch/anomalous-tor-keys/

    We found several thousand relays that shared prime factors (most of them part of a research project), ten relays that shared moduli, and 122 relays that used a non-standard RSA exponent, presumably in an attempt to manipulate the Tor network's distributed hash table, which powers onion services.

  • lipnitsk 1899 days ago
    Another group did a talk on this at DEF CON 26 last year: https://research.kudelskisecurity.com/2018/08/16/breaking-an...

    They analyzed over 340 million keys from the web.

    > As part of the presentation given at DEF CON 26, one of the outputs was Kudelski Security’s Keylookup application. On this site, you can submit your own public keys and have them tested against our dataset. We will let you know if your key is vulnerable to Batch GCD and ROCA attacks. If your key is in our database, we will be able to give you an answer immediately, if it is not, you may have to wait a bit until the tests complete.

    > https://keylookup.kudelskisecurity.com/

    • tptacek 1899 days ago
      I don't think ROCA is related to the vulnerability in the article?
      • lipnitsk 1899 days ago
        Correct, but the main point of their talk and research was focused on Batch GCD processing of hundreds of millions of keys, with ROCA analysis done in addition.
  • truantbuick 1899 days ago
    What are the characteristics of those who generated an RSA key sharing a prime factor? Can they be linked back to a few bad CSPRNG implementations?

    What are practical steps to be responsible about it?

    It's contrived, but I just imagine that if I'm generating some particularly important keys, that I should somehow find a way to give /dev/urandom a kick of some kind. Even if that were possible, it's more likely to make things worse than better. Still, it makes me a little paranoid to even hear about theoretical weaknesses -- especially like collision attacks. I have no idea how long it takes for the CSPRNG to get properly seeded. Does it take a microsecond after booting? 10 minutes? A day?

    • tptacek 1899 days ago
      At the time, there was uncertainty about the root cause, but yes, I think it's been traced back to a set of specific CSPRNGs.

      You do not need to give urandom a kick of any kind; once the KRNG is seeded, urandom will for all intents and purposes perpetually feed you secure random bytes. It's likely your distro already goes through some effort to make sure urandom is seeded by the time you start up a shell.

    • z3t4 1899 days ago
      Some RNGs use the time of day in milliseconds as a seed; I'd guess those are easy to brute force. I guess it's all about the size of the seed and its randomness!?
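That intuition is easy to demonstrate. A sketch (illustrative names; Python's non-cryptographic `random` module stands in for a weak PRNG) that recovers a key generated from a millisecond timestamp by simply trying every seed in a window:

```python
import random

def weak_keygen(seed_ms, nbytes=16):
    # Deliberately bad: the only secret is a millisecond timestamp.
    rng = random.Random(seed_ms)
    return rng.getrandbits(8 * nbytes).to_bytes(nbytes, "big")

def recover_seed(key, start_ms, window_ms):
    # One guess per millisecond of uncertainty: even a full day of
    # uncertainty is only ~86.4 million trials.
    for t in range(start_ms, start_ms + window_ms):
        if weak_keygen(t) == key:
            return t
    return None
```

The search space is the attacker's uncertainty about the clock, not the key length, so the 128-bit output provides no security at all.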
  • jMyles 1899 days ago
    A lot of people over the years have gotten the message that "use /dev/urandom and forget about it" is the final, end-all, be-all for secure random number generation.

    In fact, this ideology (and that's what it is - an ideology) has been trumpeted right here on HN, in some cases by people who repeatedly seem to comment on topics that they don't fully understand. Security is hard, but there's also a high reputational value on being perceived as an authority on the topic. As a result, there are some nuggets of "wisdom" that require asterisks next to them, including this one.

    Even though "just use /dev/urandom" is almost always true, it isn't always true. In fact, the universe of cases where some form of blocking entropy is needed (and again, this is a very tiny set) is growing, not shrinking.

    https://security.stackexchange.com/questions/186086/is-alway...

    • Shoop 1899 days ago
      How does a blocking CSPRNG mitigate this attack?

      Again, it makes no sense to say that a CSPRNG can start "running low" on entropy.

      Here's what djb has to say about this:

           Cryptographers are certainly not responsible for this superstitious nonsense. Think about this for a moment: whoever wrote the /dev/random manual page seems to simultaneously believe that
      
          (1) we can't figure out how to deterministically expand one 256-bit /dev/random output into an endless stream of unpredictable keys (this is what we need from urandom), but
      
          (2) we _can_ figure out how to use a single key to safely encrypt many messages (this is what we need from SSL, PGP, etc.).
      
          For a cryptographer this doesn't even pass the laugh test.
      
      (Unless you're talking about the early boot seeding problem that /dev/urandom has on linux, which is a very real problem).

      For reference, here's the classic source I think you're referring to: https://www.2uo.de/myths-about-urandom

      • FartyMcFarter 1898 days ago
        The point is not about "running low on entropy", it's about the possibility of not having enough to begin with. At boot.
    • aidenn0 1899 days ago
      Part of the problem is that neither /dev/urandom nor /dev/random on linux do what most people want. /dev/urandom never blocks even right after you've spun up a fresh machine that has not seeded the PRNG, while /dev/random blocks very conservatively, to the point where it is not useful for certain things.

      The proper approach for high-volume random numbers is probably to seed a userspace PRNG from /dev/random but that's extra work, particularly in a concurrent program.

      • tptacek 1899 days ago
        Please do not seed a userspace RNG from /dev/random. Most major crypto RNG attacks trace back to userspace RNGs. If you can trust /dev/random to provide a seed for your userspace RNG, then by definition you can also trust urandom from that point on.

        (getrandom(..., 0) is probably the right long-term solution).

        • baby 1897 days ago
          BTW, I've read that before (don't use userspace RNG). Can you point me to a few problems that arose because of userspace RNG?
          • tptacek 1897 days ago
            Yeah, next time we're drinking.
            • baby 1897 days ago
              Now I remember about forking and VM cloning issues.

              Also, you live a bit far from me, but see you at Black Hat maybe :D

        • jMyles 1898 days ago
          > (getrandom(..., 0) is probably the right long-term solution)

          Glad to see you've come around.

          • tptacek 1898 days ago
            Huh?
            • jMyles 1898 days ago
              First, man - every time I have replied to you on HN, our discussion has turned into a flame war of one variety or another. I don't have time for that at this moment (nor the inclination for it, ever). Ideally, I'd just like to get along with you. We have many common friends; I'm confident they'll tell you that I'm a pretty relaxed guy and easy to get along with. And I'm pretty careful about refraining from stepping into discussions unless I have a good enough command over the material to avoid making false statements or giving bad advice. So I'm just asking - please just be civil. Let's stop clogging HN with attacks and instead just discuss (and achieve consensus on) best practices. Agreed?

              Now: as recently as 5 months ago, when you and I discussed the matter, you replied to a post that expressly suggested `getrandom(..., flags=0)` (what you called the "blocking variant", although it's not that simple) and dismissed it in favor of /dev/urandom (and also strangely referred to security.stackexchange as "the wrong stackoverflow boards"). [0]

              This was bad advice. It's good to see that you have come around to the position taken by the Python core team and just about everybody else, that using getrandom(..., 0) is a best practice in this situation.

              0: https://news.ycombinator.com/item?id=17786496

              • aidenn0 1898 days ago
                I assumed from context that he meant "GRND_RANDOM" as the "blocking variant" in that thread, which is different from flags=0.
              • tptacek 1898 days ago
                That's not at all what that thread says, and this is an extremely weird and unwelcome addition to this thread.
                • baby 1897 days ago
                  it was hilarious though
      • jMyles 1899 days ago
        I think that the approach taken in Python as of PEP524[0] gets it about right. And it achieves a very similar level of exposure and performance across platforms as well.

        0: https://www.python.org/dev/peps/pep-0524/

      • jMyles 1898 days ago
        > Part of the problem is that neither /dev/urandom nor /dev/random on linux do what most people want.

        Agreed, but `getrandom(..., 0)` largely does. And thus, Python's `urandom` is also a good fit for very nearly every use case for randomness inside business logic.

        > The proper approach for high-volume random numbers is probably to seed a userspace PRNG from /dev/random...

        At a high-level, I don't agree. I think that for high-volume random numbers the best solution is nearly always a trustworthy hardware RNG.

        I also think that, while it's possible to properly seed a userspace PRNG from /dev/random, it's rarely a good idea because 1) it's (at least relatively) difficult to do without introducing other vulnerabilities, and 2) if you trust /dev/random as a seed, then you have other viable options in every case.

        If you have a need to use a userspace PRNG - and more importantly, the wherewithal and self-awareness to do it properly - then you are almost certainly in a position to need to seed it from an oracle other than your system's /dev/random.

    • est 1897 days ago
      It's worse now; everyone is doing that in Docker.
  • cbhl 1899 days ago
    PuTTYgen, being a Windows application, assumes that a mouse is connected and prompts the user to move the mouse to generate entropy during key creation.

    By comparison, ssh-keygen documents the SSH_USE_STRONG_RNG environment variable -- but then recommends against its use (!) since it can cause key generation "to be blocked until enough entropy is available".

    • hackcasual 1899 days ago
      > "to be blocked until enough entropy is available"

      which is fine. The idea that entropy is a consumable resource is a crypto myth that needs to die.

      • tlb 1899 days ago
        It's not consumable, but it can be in short supply right after boot. So /dev/random should block until there's sufficient entropy, but never after that.
        • cbhl 1899 days ago
          If I recall correctly, at one point, the network interface was used as a source of entropy. Then someone demonstrated that sending the right sequence of network packets to a machine would let you control the key that got generated. So they removed it.

          Then folks discovered -- in production -- that some cloud computing environments just don't get any other new entropy after boot, and so instances would hang on generating SSH host keys.

          Some folks went to /dev/urandom. Other folks decided to seed instances with entropy from another computer (with fancy names like "cloud entropy service"). And then someone had to decide how that machine gets entropy (like plugging in an FM radio into the mic jack).

        • cperciva 1899 days ago
          The problem with "block until you have enough entropy" is that the amount of entropy you have is literally unknowable.
          • tux1968 1899 days ago
            How can that be true? The kernel at least, knows how much you have and knows to block consumption until it crosses the desired threshold.
            • cperciva 1899 days ago
              The kernel knows how many bytes you've fed into it. But it has no fundamentally sound way of computing a lower bound on how much entropy those bytes contained; the best approaches involve calculating an upper bound and then saying "well, hopefully it's at least X% of this".
        • wbl 1899 days ago
          RDRAND is a thing
          • clarry 1898 days ago
            What's the alternative on ARM?
  • userbinator 1899 days ago
    IMO a bit clickbaity of a title --- I thought there was a recent breakthrough in integer factorisation, when in fact this is really attributable to insufficient randomness.
    • schoen 1899 days ago
      A bigger problem for many readers might be that this is a new post, mainly summarizing research from 2012. So it's not really appropriate to say "(2012)", but it also doesn't announce new discoveries since that time.
  • gtsteve 1899 days ago
    On this topic, I recall reading that earlier computers (of the '80s) used radio tuners tuned to nothing to generate random numbers. Of course, you need some way to randomly choose the frequency, but then what comes out should be pretty unpredictable.

    I believe Random.org uses an approach similar to this. What is so special about this approach that we couldn't install it as a card in a desktop for example?

    • amlozano 1899 days ago
      Modern desktops have had RNGs built into their chips since Ivy Bridge. Beyond that, they have plenty of decent sources for random numbers such as network traffic. Similarly, mobile devices can use sensor noise to create random numbers pretty well.

      The devices people are concerned with are things like embedded devices and sometimes virtual devices.

    • amenghra 1899 days ago
      A Zener diode costs $0.01 and gives pretty good randomness. Mix in network traffic, user input, and other sensors when available, and you are gold.
  • daedalus2027 1899 days ago
    hi, I attempted to recreate the algorithm in the post:

    https://github.com/daedalus/misc/blob/master/testQuasiLinear...

  • sempron64 1899 days ago
    This is very interesting. I wonder what key generators were used to create the insecure keys.
    • schoen 1899 days ago
      See https://factorable.net/weakkeys12.extended.pdf, which includes some identifications of some of the responsible devices. (Two groups of researchers published independently on this issue at nearly the same time.)
      • Scoundreller 1899 days ago
        The bottom left of page 9 (for RSA) and the middle left of page 10 go through which manufacturers were the most common.
      • lixtra 1899 days ago
        TLDR:

        Devices that create TLS and SSH keys just after boot when there is not enough entropy.

        • loeg 1899 days ago
          That's not a great takeaway.

          The takeaway is: Linux's /dev/random and /dev/urandom interfaces are both broken, in the sense that neither is reliable for embedded developers during certain early boot conditions. Some of that was maybe worse in 2012 than today, but the fundamental interface properties have not changed.

          Tl;dr: Use getrandom() instead of /dev/[u]random. Do not use GRND_RANDOM. Do not use GRND_NONBLOCK.
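In Python, that advice boils down to a few lines. A sketch, assuming Linux where `os.getrandom` is available, with a fallback for other platforms:

```python
import os

def secure_random_bytes(n):
    """getrandom(n, flags=0): blocks only until the kernel CSPRNG has
    been seeded once, and never again after that -- no GRND_RANDOM,
    no GRND_NONBLOCK, exactly the recommended semantics.
    """
    if hasattr(os, "getrandom"):
        return os.getrandom(n, 0)
    # Platforms without the syscall wrapper: os.urandom is the usual fallback.
    return os.urandom(n)
```

This sidesteps both failure modes: /dev/urandom happily returning bytes before the pool is seeded, and /dev/random blocking arbitrarily long after it.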

          • Scoundreller 1899 days ago
            But maybe better pre-2009:

            > Surprisingly, modern Linux systems no longer collect entropy from IRQ timings. The Linux kernel maintainers deprecated the use of this source in 2009 by removing the IRQF_SAMPLE_RANDOM flag, apparently to prevent events controlled or observable by an attacker from contributing to the entropy pools.

            > Although mailing list discussions suggest the intent to replace it with new collection mechanisms tailored to each interrupt-driven source [21], as of Linux 3.4.2 no such functions have been added to the kernel.

            > The removal of IRQs as an entropy source has likely exacerbated RNG problems in headless and embedded devices, which often lack human input devices, disks, and multiple cores. If they do, the only source of entropy—if there are any at all—may be the time of boot.

            • X6S1x6Okd1st 1899 days ago
              Removing something from an entropy pool because it could be used by an attacker seems really odd to me.
              • schoen 1899 days ago
                It may have been inspired by arguments like https://blog.cr.yp.to/20140205-entropy.html (although that one was published five years later than the change we're talking about).
                • clarry 1898 days ago
                  That would be rather misguided, though. It's not like a device is going to know the exact time the interrupt handler routine would run, hash it, and go "whoops, better not fire an interrupt yet because it won't be on the right tick!"
            • loeg 1899 days ago
              No. The fundamental problem — random blocks arbitrarily after seeding, and urandom does not block even before seeding — existed prior to the 2009 change.
          • pera 1899 days ago
            For keys that are generated during early/first boot conditions, couldn't one wait until /proc/sys/kernel/random/entropy_avail is greater than 256 and then read from urandom? IIRC getrandom still isn't universally available on SoCs kernels.
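That wait-loop can be sketched in a few lines; the path is parameterized here purely so the logic is testable, and on a real system you would use the default:

```python
import time

def wait_for_entropy(threshold=256,
                     path="/proc/sys/kernel/random/entropy_avail",
                     poll_interval=0.1):
    """Poll the kernel's entropy estimate until it reaches `threshold`;
    after that it is reasonable to read key material from /dev/urandom.
    """
    while True:
        with open(path) as f:
            if int(f.read().strip()) >= threshold:
                return
        time.sleep(poll_interval)
```

Note the caveat raised elsewhere in this thread: entropy_avail is only the kernel's estimate, and the true amount of entropy collected is not knowable from inside the machine.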
          • SEJeff 1899 days ago
            I was 1/2 way through reading your comment and was thinking, "Yeah, but they can use the getrandom() that was ported from OpenBSD", but you obviously beat me to it. Take your upvote!
            • loeg 1899 days ago
              OpenBSD's interface is getentropy(), but yeah, basically the same idea! I believe Linux was the second actor and intentionally made getrandom() a superset of the getentropy interface (i.e., no EINTR below 256 bytes).

              As a result, writing a getentropy() shim around getrandom is feasible; FreeBSD and Linux (glibc) have done so.

  • sublupo 1898 days ago
    > The authors hint in a footnote that at the heart of their computation is an asymptotically fast algorithm, allowing them to bring the running time of the computation down to nearly linear; but the actual description of the algorithm is kept a secret from the reader

    How could something like that pass peer review? Their claim is effectively unable to be reproduced.

    • andreareina 1898 days ago
      If I'm reading the numbers right, even using the slow way you'd expect to break on the order of tens of keys per day. Thus the claim that "two out of every one thousand RSA moduli [...] offer no security" is easily verified. The secret algorithm doesn't produce results that can't be independently checked.
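For the curious, the approach that later became public (Bernstein-style batch GCD, using a product tree followed by a remainder tree) can be sketched as follows; this is a reconstruction of the well-known technique, not the paper's withheld code:

```python
from math import gcd

def batch_gcd(moduli):
    """For each modulus N_i, compute gcd(N_i, product of all other N_j)
    in quasi-linear time instead of O(n^2) pairwise GCDs.
    """
    # Product tree: level 0 is the moduli; each level above holds
    # pairwise products, ending in the full product P at the root.
    tree = [list(moduli)]
    while len(tree[-1]) > 1:
        level = tree[-1]
        tree.append([level[i] * level[i + 1] if i + 1 < len(level) else level[i]
                     for i in range(0, len(level), 2)])
    # Remainder tree: push P mod N_i^2 down from the root to each leaf.
    rems = tree.pop()  # [P]
    while tree:
        level = tree.pop()
        rems = [rems[i // 2] % (n * n) for i, n in enumerate(level)]
    # (P mod N_i^2) // N_i equals (P / N_i) mod N_i, so this gcd exposes
    # any prime N_i shares with the other moduli.
    return [gcd(r // n, n) for r, n in zip(rems, moduli)]
```

A result greater than 1 for a given modulus means it shares a prime with at least one other key in the batch, at which point the modulus factors immediately.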
    • codezero 1898 days ago
      It’s not clear to me that the journal it was published in is peer reviewed. Not all are.

      https://www.springer.com/computer/theoretical+computer+scien...

      It’s useful to have such places to publish things, but unfortunate that it’s not clear whether it’s peer reviewed.

    • rincebrain 1898 days ago
      A number of CS publications (or non-CS publications with custom software) don't include the source/enough exact parameters/... to reproduce their results.
    • ric2b 1898 days ago
      Maybe the reviewers had access.
  • sliken 1899 days ago
    Anyone know of a service (https://factorable.net/keycheck.html was discontinued) or code that would help identify particularly weak keys?
    • sliken 1898 days ago
      Actually factorable.net does have code. Planning to analyze the 1200 user keys I have around to see if any are weak. From what I can tell you can just convert public keys into hex and feed them into their program.
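Pulling the modulus out of an OpenSSH public key line needs only the standard library. A sketch (the helper name is illustrative; the wire format is the length-prefixed field encoding used by SSH public keys):

```python
import base64
import struct

def ssh_rsa_modulus(pubkey_line):
    """Extract the RSA modulus n from an 'ssh-rsa AAAA...' line.

    The base64 blob is a sequence of 4-byte-length-prefixed fields:
    key type, public exponent e, modulus n.
    """
    blob = base64.b64decode(pubkey_line.split()[1])
    fields, off = [], 0
    while off < len(blob):
        (length,) = struct.unpack(">I", blob[off:off + 4])
        off += 4
        fields.append(blob[off:off + length])
        off += length
    if fields[0] != b"ssh-rsa":
        raise ValueError("not an ssh-rsa key")
    return int.from_bytes(fields[2], "big")
```

From there, `hex(ssh_rsa_modulus(line))` gives a hex modulus that batch-GCD tooling can consume.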
      • anomalroil 1898 days ago
        Yes, but it will only batch-factor the 1200 keys you'd be feeding it with. For such factorization attacks to work best, you need a dataset as large as possible.
    • spiridow 1898 days ago
      • sliken 1898 days ago
        Tried a few, doesn't seem to handle more than one key well. Worried that 1500 submissions would cause a DoS. After a few it stopped giving me results saying computations were queued.
  • incompatible 1899 days ago
    Could you use a similar idea to go after bitcoin keys? If so, you may not be able to crack any particular key, but you could steal the bitcoins from the ones you did crack.
    • schoen 1899 days ago
      As sowbug and tptacek mention, there are no common-factor attacks against Bitcoin keys. However, that doesn't mean that the PRNGs used in cryptocurrency implementations aren't a concern. The most famous incident was probably this

      https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018...

      (you can find subsequent journalism about the effects of this if you're interested).

      There have also been other cryptocurrency PRNG attacks that weren't as high-profile as this issue.

      • schoen 1899 days ago
        I wish I had remembered the "Biased Nonce Sense" paper (which I heard about last week but didn't read!), which someone else mentions downthread at

        https://news.ycombinator.com/item?id=18917385

        https://eprint.iacr.org/2019/023.pdf

        (This is a nonce reuse issue rather than a common-factor vulnerability, but one reason for nonce reuse can also be CSPRNG seeding problems.)

      • tptacek 1899 days ago
        Java SecureRandom: a userland CSPRNG!
        • pvg 1899 days ago
          This is JS. The (current-ish) Java ones use the OS-provided facility.
    • sowbug 1899 days ago
      With a tiny number of exceptions, Bitcoin private keys are just random sequences of 256 bits. They don't involve primality. So unless a shared CRNG is truly awful, there shouldn't be any reason why keys would cluster around certain values.
      • tptacek 1899 days ago
        Which is basically the thesis of the underlying "Ron Was Wrong Whit Was Right" paper --- that discrete log systems are safer than RSA. Since then, there's been more work done on the pitfalls of prime generation, perhaps most notably the "Prime And Prejudice" paper, which is really excellent and more interesting technically than this article:

        https://eprint.iacr.org/2018/749.pdf

        New systems should generally avoid RSA, for this reason among several others.

        • pbsd 1899 days ago
          Isn't the point of "Prime and Prejudice" that primality tests that are perfectly adequate for RSA key generation are inadequate to verify the primality of adversarial inputs, in protocols like negotiated DHE or whatnot?
          • tptacek 1899 days ago
            Dammit. I just wanted a cool link to Prime and Prejudice!

            You're right, of course.

        • AlexCoventry 1898 days ago
          > generally avoid RSA, for this reason among several others.

          What are the others?

    • anoncrypto 1899 days ago
  • skookumchuck 1898 days ago
    After decades of problems with seeding RNGs, why isn't there a electronic circuit that gets a seed from quantum noise or something like that? The circuit could be part of the CPU or support chips.

    After all, amplifiers are always trying to increase the signal/noise, and the basis of the reliability of digital circuits is avoiding the noise. Instead, a circuit can amplify the noise and sample it.

    • tatersolid 1898 days ago
      There is. RDRAND for x86/x64 has been in all Intel/AMD for several years.

      Most ARM SoC have some equivalent device, but they are nonstandard and require driver support.

      Even the TPM chips in basically every desktop, laptop, and server for over a decade have hardware RNG. Again driver support is needed.

      The problem is cheap “blue plastic boxes” may not have a hardware RNG, nor will Virtual machines or containers. Writing code to figure out what RNG is available and how to use it is a nightmare so few people do it.

      This is why most security people say “use the OS CSPRNG always”. That way user-space code doesn’t have to carry all the platform specifics with it. And presumably integrating the hardware RNG can be done once at the OS layer.

  • betolink 1899 days ago
    I don't know why this is still a problem these days, let's just use lava lamps for entropy: https://blog.cloudflare.com/lavarand-in-production-the-nitty...
  • zde 1896 days ago
    > But since they both used the same program to generate random prime numbers, there’s a higher-than-random chance that their public keys share a prime factor

    BS