17 comments

  • denton-scratch 30 days ago
    > network visibility

    This is an odd euphemism. A network that uses plaintext isn't "visible" - I'd use a word like "readable" or "inspectable".

    For encrypted networks, MITMing the encryption breaks the security. That's what it's for. TLS1.3 is supposed to prevent that; circumventing that (as NIST proposes) increases the attack surface. NIST's proposals seem to amount to generating and distributing ephemeral keys over the internal network; but I thought best practice was to keep keys and cryptographic operations inside an HSM.

    Isn't the proper solution to remove MITMing from the compliance rules, stop trying to detect C2 and malware at the router, and instead secure the target servers?

    • tptacek 30 days ago
      It's their own network. The argument you're making here is that the (very good) design goals of TLS 1.3 preempt people's custodial interest in their own networks. That's a weird argument.
      • sam_lowry_ 30 days ago
        Why do they do TLS1.3 inside their own network, then?

        If they work in high-castle mode, they might as well leave their network traffic unencrypted.

        Otherwise, as the parent said, they should secure the endpoints rather than rely on COTS to inspect traffic in the hope of detecting malicious patterns.

        P.S. This reminded me of an Intrusion Detection System that silently dropped anything that looked like a request to Spring Boot's /actuator* endpoints unless it contained a cookie. Any cookie would suffice; it was just a match on the string "Cookie: " in the headers. It took many hours and a dozen people across the organisation to figure this out. Anything but productive work.

        P.P.S. It's Fortiguard and they proudly advertise this feature here https://www.fortiguard.com/encyclopedia/ips/49620
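The failure mode described in the P.S. is easy to reproduce. A toy reconstruction (hypothetical; this is not the vendor's actual code, just the behavior as described, where a bare substring match on the header block is the entire "check"):

```python
# Hypothetical reconstruction of the IDS behavior described above:
# requests to /actuator* are dropped unless the raw headers contain
# the literal string "Cookie: " -- any cookie at all satisfies it.

def ids_allows(raw_request: str) -> bool:
    """Return True if this toy IDS lets the request through."""
    first_line = raw_request.split("\r\n", 1)[0]
    parts = first_line.split(" ")
    path = parts[1] if len(parts) > 1 else ""
    if path.startswith("/actuator"):
        # The "check" is a bare substring match on the whole request.
        return "Cookie: " in raw_request
    return True

blocked = "GET /actuator/health HTTP/1.1\r\nHost: svc\r\n\r\n"
allowed = "GET /actuator/health HTTP/1.1\r\nHost: svc\r\nCookie: x=1\r\n\r\n"
assert not ids_allows(blocked)   # silently dropped: hours of debugging
assert ids_allows(allowed)       # any cookie string is a free pass
```

Note how the "protection" adds nothing against an attacker (who just sends a cookie) while breaking every legitimate client that doesn't.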

        • tptacek 30 days ago
          They can absolutely just not use TLS 1.3. I don't know how that makes anything better for anyone else though.
      • denton-scratch 30 days ago
        No, that's not what I was saying. These network owners are working around TLS1.3 for compliance reasons: they're required to monitor traffic at the boundary.

        So I'm saying they should not be required to monitor at the boundary (and discard the benefits of TLS1.3); it's dumb to require diminished security. They should be required to monitor; but it's their network, they get to decide how to do it. I guess that means you need compliance rules written by serious people, rather than box-tickers.

        That would make verifying compliance harder; you can't just check that they have blackbox X at the boundary. I can see that the existing setup is cheap-and-cheerful.

      • adgjlsfhk1 30 days ago
        The way you secure a network is to not allow insecure devices to connect. If you MITM the network, all you do is increase how much an insecure device can mess with other devices.
        • ndriscoll 30 days ago
          At least for home users, this is not feasible. We're quickly developing a world where the owners of devices have no insight into what they're doing. ECH means your ISP can't monitor you, but even if you're going through cloudflare so the IP doesn't say who you're connecting to, the state can just make cloudflare tell them, so it doesn't protect against state monitoring. And ECH + DOH and cert pinning give tools for malicious devices (i.e. every modern consumer device) to exfiltrate data without the owner being able to monitor/block specific requests.

          The reality is many if not most devices are malicious now. You're protecting against one threat while enabling another.

        • illiac786 29 days ago
          Hmmm, I’m noting that the current industry trend seems to focus on the opposite strategy: assuming compromised devices, assuming breach. « Zero trust », as they like to call it.

          It’s not mutually exclusive with your approach, but it’s definitely the new industry gold standard, rather than trusting vetted devices. Seems they gave up on the vetting.

          • adgjlsfhk1 29 days ago
            I agree that a zero trust architecture makes sense. Every device should sanity-check requests made to it from every other device, but IMO that works best when you have a secure and encrypted network as a primitive. The network's job should be to deliver messages securely between endpoints.
      • josephcsible 30 days ago
        Whether it's okay to inspect traffic only depends on whether you own at least one of the endpoints. It has nothing to do with whether it's going over your network.
        • tptacek 30 days ago
          We are talking about networks where every authorized endpoint is most certainly owned by the organization doing the telemetry. I don't like it either, but I don't see how anyone's going to make a moral issue out of it --- in fact, this is exactly the kind of thing that tends to infuriate nerds like us when it cuts the other direction, like with sealed remote attestation protocols.
          • josephcsible 30 days ago
            Yes, I believe they own all the endpoints, so I'm fine with them doing the telemetry at all. But if the method they had in mind for doing the telemetry doesn't require that ownership, then I'm opposed to that method in particular.
            • tptacek 30 days ago
              You get that Intel can make the same argument about sealed remote attestation protocols embedded in their chipset, right? You don't tolerate that argument when it's your network hosting sealed protocols, but you do when it's other people's. That's a strange position.
              • josephcsible 30 days ago
                That argument isn't valid for Intel, because when they sell me their sealed chipset, it stops being theirs and becomes mine.
                • tptacek 30 days ago
                  Do you not see how that's exactly what the banks are saying about their own computing infrastructure?
    • sam_lowry_ 30 days ago
      > secure the target servers?

      That's much harder than buying off-the-shelf "security" solutions from the likes of Bluecoat.

    • 1vuio0pswjnm7 30 days ago
      "For encrypted networks, [the owner of the network] MITMing the encryption breaks the security."

      What security, specifically. Security from who/what.

      Let's say a network owned by C comprises computer A and computer B, A is connected to B and B is connected to the internet.

      Computer A runs "apps" controlled by D and not trusted by C. B runs only programs trusted by C.

      Both A and B, i.e., the programs running on them, are each capable of encrypting traffic.

      Let's say the approach C takes on C's network is to let B handle encryption. Not A.

      The apps running on Computer A want to encrypt traffic but, in C's opinion, that "security" is for the benefit of D not C.

      Computer B encrypts all traffic bound for the internet and decrypts all traffic received from the internet. C does not need D's apps to perform encryption.

      It is C's network. Is there a reason C should not control encryption on C's own network.

      Is there a reason D should be able to run its "apps" on C's network and encrypt traffic that D cannot inspect.

      Would D allow C to run programs on D's network that encrypt traffic so that D cannot inspect it. (Reciprocity.)

      One could imagine the encryption by D's apps running on Computer A is security against D, the owner of the network.

      Any other "security" provided by D's apps encrypting traffic on A is already provided by B.

      (Given the existence of B, encryption by A is unnecessary and redundant.)

  • oneplane 30 days ago
    50 pages in I still haven't found how they propose this helps with WAN traffic or with mTLS.

    All of this seems to assume you always own the server side, which you pretty much don't. Even on page 5 with the summary of the solution it doesn't touch that subject.

    You'd think that if you own the server and the client anyway, you'd just capture it right there if you need to.

    As for just the DH 'server' doing key distribution, that's something we already know how to do and doesn't require "we install nginx on a random server and call it an appliance" style vendors.

  • cipherboy 30 days ago
    https://csrc.nist.gov/pubs/sp/1800/37/2prd

    It seems to be intentional exfiltration of key material (either bounded DH keypairs rather than ephemeral or, more likely, exfil of the symmetric channel key).
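For the symmetric-channel-key variant, the endpoint-side mechanism already exists: TLS stacks can export per-session traffic secrets in the NSS key log format, the same thing browsers do with the SSLKEYLOGFILE environment variable. A minimal sketch using Python's `ssl` module, which exposes this as `keylog_filename` (Python 3.8+ with OpenSSL 1.1.1+; the path here is an arbitrary example):

```python
import os
import ssl
import tempfile

# Endpoint-side key export: each completed handshake appends its traffic
# secrets (NSS key log format) to this file. A tap that captured the
# ciphertext can then decrypt it offline, e.g. in Wireshark. The NIST
# proposal amounts to doing this kind of key escrow at enterprise scale.
keylog_path = os.path.join(tempfile.gettempdir(), "tls_keys.log")

ctx = ssl.create_default_context()
ctx.keylog_filename = keylog_path

# Any connection made with `ctx` now leaks its session secrets to the log:
#   with socket.create_connection(("example.com", 443)) as s:
#       with ctx.wrap_socket(s, server_hostname="example.com") as tls:
#           ...
```

This is exactly why the TLS 1.3 designers consider such schemes a deliberate weakening: the secrets exist outside the two endpoints, and whoever holds the log holds the traffic.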

  • mmsc 30 days ago
    The irony of it all is that those middleware solutions end up riddled with vulnerabilities. Can’t wait until some Fortune 500s get popped and all their encrypted traffic is trawled through.
  • acdha 30 days ago
    This is probably the best path forward for getting large enterprises not to block TLS 1.3 deployment but I can’t help but wonder how effective these monitoring systems actually are. There are so many ways to exfiltrate data and attackers have decades of prior art around obfuscating their activity, and it seems incredibly expensive to try to solve this problem at the network level rather than by committing that budget to better controls around sensitive data, locking down clients, etc.
    • lallysingh 30 days ago
      I assume that you do both. Layers of protection are necessary because each layer can leak.
      • acdha 30 days ago
        Possibly, but this is both very costly and imposes a non-trivial risk by creating a single point of failure for your entire network which also does binary decoding of complex data structures. Unless you have an unlimited budget, that raises the question of whether there are enough attacks simple enough for this approach to catch, yet damaging enough to matter, relative to the other things you could do with the same budget and staffing.
  • 1vuio0pswjnm7 30 days ago
    "TLS allows us to send data over the vast collection of publicly visible networks we call the internet with the confidence that no one can see our private information, such as a password or credit card number, when we provide it to a site."

    TLS also allows a so-called "tech" company to send and receive data over someone else's private, local network that is connected to the internet with the confidence that computer users and the network owner on that local network cannot see the contents of the traffic. In order to properly consent to sending data to a so-called "tech" company, it is arguable that the consent should be informed, i.e., the computer user should be able to see what data is being sent. Was the concealment of unconsented data exfiltration the intended purpose of TLS. These so-called "tech" companies have a record of breaching public trust, hiding their surreptitious data collection from public view, in order to generate revenue and profit. This is often made the subject of civil lawsuits and regulatory fines that are rarely if ever successfully challenged. The companies almost invariably give in and pay up.

  • ngrilly 30 days ago
    Those organizations are claiming they rely on zero-trust principles, and yet they fully trust a proxy placed between their users and the Internet, able to replace all the TLS certificates of the origin servers with its own certificates, in order to be able to decrypt all the traffic. How can these organizations claim with a straight face that they implement a zero-trust architecture and do the exact opposite?
    • tptacek 30 days ago
      "Zero trust" is a term of art. You can't reason about it by appeals to the dictionary. It means a very specific set of things, and it is compatible with TLS interception, gross as that may be.
      • ngrilly 30 days ago
        You're right. If we define zero trust as not trusting by default the users, their devices, and the network perimeter, then yes, it's compatible with TLS interception. But if the rationale is that vulnerabilities can happen anywhere, why not extend the principle of "never trust, always verify" to servers and network equipment as well, especially when they can intercept and decrypt everything?
        • tptacek 30 days ago
          No. We don't define "zero trust" that way. That's the opposite of what I just said. "Zero trust" is a marketing label for the ideas in Google's Beyondcorp strategy. It's not a principle that you can extrapolate from this way.
  • filleokus 30 days ago
    As someone (thankfully) not in the loop with these enterprise tools, what exactly is the issue? How did it work before (and what have changed)?

    Can't enterprises just MITM the traffic and sign it with a CA the clients trust? What's the benefit of the previous solutions?

    • tptacek 30 days ago
      By way of example: TLS 1.3 eliminated the RSA key exchange, which breaks passive decryption of TLS, which was a common enterprise security technique.
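A toy illustration of why the RSA key exchange permitted *passive* decryption, using textbook RSA with tiny insecure numbers (illustration only, not real cryptography): the client encrypts a session secret to the server's static public key, so anyone holding the static private key can recover it from a traffic capture alone, with no active interception.

```python
# Textbook RSA with toy parameters (never use numbers this small).
p, q = 61, 53
n, e = p * q, 17                     # server's static public key (n=3233)
d = pow(e, -1, (p - 1) * (q - 1))    # server's static private key

premaster = 42                       # client-chosen session secret
wire = pow(premaster, e, n)          # what a passive tap sees on the wire

# The tap was handed d (the old enterprise "visibility" model), so it can
# decrypt every session, past and future, from captures alone:
assert pow(wire, d, n) == premaster
```

Ephemeral (EC)DHE key agreement, mandatory in TLS 1.3, removes this property: there is no static decryption key to hand to the tap, which is what forces the active-MITM or key-escrow designs discussed in this thread.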
      • filleokus 29 days ago
        Hmm, thanks. But that only works when the passive eavesdropper has the server private key (right?). That seems quite limiting if you want to have "visibility" into network traffic?

        I don't really understand the full picture / use case here. Is it only for internal traffic, or is it used in combination with some other, more active MITM method to act as the server even for e.g. gmail.com?

    • ngrilly 30 days ago
      > Can't enterprises just MITM the traffic and sign it with a CA the clients trust?

      This is what Zscaler is doing. I know because my company was (unfortunately) using this.

      • Avamander 30 days ago
        > ZScaler

        Awful company with zero protections against being abused. They can't even handle stopping a DDoS originating from their own service; I can't imagine them being trustworthy for a full MiTM.

    • dilyevsky 30 days ago
      As far as I understand, the issue is encrypted client hello.

      > Can't enterprises just MITM the traffic and sign it with a CA the clients trust? What's the benefit of the previous solutions?

      This won't work with cert pinning, and it's also a lot more expensive
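Why pinning defeats the re-signing proxy, as a minimal sketch (the toy byte strings here stand in for the DER-encoded SubjectPublicKeyInfo that real pins hash):

```python
import base64
import hashlib

# A pinning client bakes in a hash of the *real* server key. A MITM proxy
# can only ever present its own key, so the pin check hard-fails.

def spki_pin(pubkey_bytes: bytes) -> str:
    """SHA-256 pin over (stand-in) public key bytes, base64 encoded."""
    return base64.b64encode(hashlib.sha256(pubkey_bytes).digest()).decode()

real_server_key = b"-----real origin public key-----"
middlebox_key = b"-----corporate proxy public key-----"

pinned = spki_pin(real_server_key)            # shipped inside the app

assert spki_pin(real_server_key) == pinned    # direct connection: accepted
assert spki_pin(middlebox_key) != pinned      # intercepted: connection fails
```

The corporate CA being in the OS trust store doesn't help here: the pin bypasses the trust store entirely, which is why pinned apps simply break behind these proxies.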

      • Avamander 30 days ago
        Boo-hoo?

        If it's not your endpoint then it's not yours to intercept and analyze?

        • dilyevsky 30 days ago
          Did you miss the part where it says “enterprise”?
          • Avamander 30 days ago
            I certainly can't see the part where saying "enterprise" should grant me the right to intercept and analyse someone's traffic.
  • lubesGordi 30 days ago
    I am curious what laws require deep packet inspection on in-transit data. Is this mostly to stop 'bad' packets before they reach an endpoint?
    • acdha 30 days ago
      Less laws than policy based on laws. If the law says, for example, that a finance company needs to keep records of its communications for a certain number of years, it might decide that's best done at the network level, to catch people using things other than the official messaging systems to discuss their work, because the alternative is unpopular moves like saying you can't check Gmail at work.
    • orev 30 days ago
      Why does there need to be a law? There are many things people do for any number of reasons that have nothing to do with it being “the law”.

      Yes, the purpose is to be able to scan incoming or outgoing packets for malware, data exfiltration, etc.

  • lxgr 30 days ago
    Am I missing something, or is this conflating TLS 1.3 with ephemeral key handshakes (which were available in earlier versions too, albeit not mandatory)?
    • tialaramex 30 days ago
      Because they were made mandatory in TLS 1.3, the organisations who want this must now do this to use TLS 1.3, whereas previously they'd just disregard all the warnings telling them they should have ephemeral keys.

      Specifically, they would use RSA kex, which makes Forward Secrecy impossible and in TLS 1.3 they can't any more.

      • lxgr 30 days ago
        Ah, so this is for organizations controlling the server side (and who were configuring non-ephemeral key exchange methods so far)?
        • tialaramex 30 days ago
          Yes, typically "visibility" is the euphemism used by these people for a technology where they get to decrypt the data in a side channel, either in real time or perhaps from archives for a period.

          e.g. You might own a fibre splitter, take the real data going back and forth between clients and your servers, and just copy it - you can't change the data, those photons left, but you get the same data, and with RSA you could just give an inspection device your private key and it can decrypt all that traffic no problem.

          But without RSA that won't work, and this NIST standard I think specifies how to do it "correctly" with ephemeral keys, which means having a system that is tracking all those keys.

          This means the NIST recommended solution costs more to do than the "old way". But, the banks and similar institutions which demand this are the ones paying for that, not you. And in exchange for that higher cost, this enables Forward Secrecy (data I stole from this system on Tuesday can't be used to decrypt a session on Thursday) and it also significantly bloats the data needed to compromise the whole system - want to read every transaction? You're going to need a lot of space for that whereas with RSA it was a single 4096-bit RSA key.
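A toy finite-field Diffie-Hellman sketch of the forward-secrecy point (tiny parameters, illustration only): each session derives its key from fresh ephemeral values, so there is no single long-term secret whose theft decrypts old captures; a tap has to be handed every per-session key, which is exactly the bookkeeping the NIST approach standardizes.

```python
import secrets

# Toy group parameters: a small prime and generator (never use for real
# crypto; real TLS uses X25519 or similar).
P, G = 4294967291, 5

def session_key() -> int:
    a = secrets.randbelow(P - 2) + 1    # client ephemeral, then discarded
    b = secrets.randbelow(P - 2) + 1    # server ephemeral, then discarded
    A, B = pow(G, a, P), pow(G, b, P)   # the only values a tap sees
    shared_client = pow(B, a, P)
    shared_server = pow(A, b, P)
    assert shared_client == shared_server
    return shared_client                # a and b never leave this function

tuesday = session_key()
thursday = session_key()
# Nothing captured on Tuesday (A and B are public anyway) helps decrypt
# Thursday: there is no static key, only per-session secrets.
```

With RSA kex the analogous "key material" was one static private key for all sessions; here the escrow system must track one secret per connection, hence the cost and bloat described above.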

          • lxgr 30 days ago
            Huh, I'd never considered that, but it makes sense that some scenarios might require "MITMing your own traffic" in production (i.e. not just developers PCAPing their own browser HTTPS traffic). Thank you for the explanation!
    • vngzs 30 days ago
      Ephemeral key handshakes are blocked by enterprises that do TLS decryption.
  • bjornsing 30 days ago
    What’s the crucial difference between TLS 1.2 and 1.3 here?

    (I tried eying through the OP but my eyes started bleeding from all the corporate IT jargon.)

    • eldridgea 30 days ago
      The biggest difference I'm aware of is TLS 1.3 encrypts the initial handshake[0] in a way that prevents eavesdropping on the hostname of the destination. Prior to that, you could get the hostname via network monitoring if you wanted. Encrypting the TLS handshake didn't make sense to prioritize, though, as DNS requests were sent in the clear.

      However with DNS increasingly being encrypted with DoH and DoT, the TLS handshake was one of the only places you could eavesdrop on the destination hostname, until it was removed in 1.3.

      Of course network monitoring will still give you the destination IP, but those are increasingly destined for a major cloud or CDN provider, which doesn't provide much context about the actual destination.

      If you'll forgive the shameless self-promo, I covered a decent amount of this in my Blackhat talk about encrypted DNS a few years back: https://www.youtube.com/watch?v=XCnE2o2pfxs

      0: https://blog.cloudflare.com/encrypted-client-hello/
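As the sibling comments note, a plain (non-ECH) ClientHello still carries the SNI hostname in the clear, and this is easy to demonstrate without any network access: start a handshake into memory BIOs and scan the bytes the client would put on the wire. A sketch with Python's `ssl` module (assumes a reasonably modern Python/OpenSSL; the hostname is an arbitrary example):

```python
import ssl

ctx = ssl.create_default_context()
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
tls = ctx.wrap_bio(incoming, outgoing, server_hostname="secret.example.com")

try:
    tls.do_handshake()            # cannot complete: no peer is attached
except ssl.SSLWantReadError:
    pass                          # but the ClientHello has been emitted

client_hello = outgoing.read()    # the exact bytes a passive tap captures
assert b"secret.example.com" in client_hello   # SNI sits there in plaintext
```

This is the gap ECH closes, and why middleboxes that rely on SNI sniffing care so much about it.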

      • dochtman 30 days ago
        I’m confused — with TLS 1.3 Server Name Indication is still usually sent in the clear, unless you’re also using Encrypted ClientHello, right?
        • gsich 30 days ago
          Correct. Not sure if ECH is still in draft state.
          • tialaramex 30 days ago
            Yes.

            https://datatracker.ietf.org/doc/draft-ietf-tls-esni/

            As you can see this ID currently has "WG state In WG Last Call" which means the Working Group were asked if they have any final stuff that needs changing. After this it could enter a state where it needs word smithing, or it could even just get sent to the IESG and then there's an opportunity for the wider community to chime in.

            [Keep in mind though, the IETF's RFCs don't dictate what gets done, we're agreeing engineering documents here, the implementations do in fact already exist and are in use for some systems, they might change to adopt any hypothetical change in the final RFC, or equally the RFC might be wrong, there's one for how HTTP Cookies should work and it describes how a working group decided they should work - but they just kept working the way they had before anyway]

      • tptacek 30 days ago
        I don't think this is the big issue banks have with TLS 1.3. Nick Lamb's sibling comment is, I think, the crux of this issue.
    • tialaramex 30 days ago
      TLS 1.3 deliberately doesn't have RSA kex (key exchange using the RSA public key encryption system) which was obsolete.

      All TLS 1.3 key agreements will thus be based on information chosen by both parties, which enables Forward Secrecy.

      The people this work is addressing (like big banks) are often part of EDCO (Enterprise Data Center Operators) who tried to get RSA kex put back into TLS 1.3. They failed largely because the IETF isn't a democracy. We aren't a government; we do not believe in Kings, or Presidents, or Voting. We do engineering here; we believe in rough consensus and running code. They also failed because of IETF Best Common Practice #188, "RFC 7258: Pervasive Monitoring Is an Attack", which says the IETF should design protocols to resist this stuff.

      • nurple 30 days ago
        Appreciate this overview, and absolutely adore the mentality of "rough consensus and running code", BAMFs. It's a shame that NIST, which is supposed to use its data to establish security best practices, is expending resources to defeat them (though I guess this isn't our first rodeo in this vein); thank you, IETF engineers, for being sane custodians of actual security.
        • tialaramex 30 days ago
          People are going to do this, so for an outfit like NIST the question is about harm minimisation.

          Harm minimisation is why you'd have a place where junkies can get clean needles rather than sharing used needles. If you don't like the junk example, how about booze? Americans tried making it outright illegal, didn't go well, but now they have a lot of rules. The rules make the booze not safe - booze is poisonous, but at least less dangerous than it might have been, with fewer inadvertent casualties, less associated crimes, and so on.

          • nurple 30 days ago
            Sure, I can see the harm minimization angle, I guess I just view it like a lot of other gov-promoted weakening of security under a purported harm minimization.

            Reading through the RFC you referenced ends with this interesting conclusion:

               Making networks unmanageable to mitigate PM is not an acceptable outcome,
               but ignoring PM would go against the consensus documented here.  An
               appropriate balance will emerge over time as real instances of this
               tension are considered.
            
            Perhaps this is one of the first instances of this tension, at least re: tls1.3? It also doesn't seem that the IETF is quite as uninterested in considering provisions around network management...

            I find that a bit unfortunate, as network management has a long history of think-of-the-children-style abuses dressed up as harm management. Hopefully this mainly means that they're willing to design protocols in a way that facilitates management without weakening them to PM attacks.

            • tptacek 30 days ago
              The idea that NIST is trying to weaken cryptography by regularizing TLS intercept at banks is tinfoil hat stuff. Not only were banks already doing this, but they literally tried to halt TLS 1.3 and re-add RSA key agreement to keep doing it. NIST is trying to minimize harm here.
              • nurple 29 days ago
                I mean, it's not that tinfoil hat as the DRBG debacle showed.

                But I'm not talking conspiracy here, I just feel like providing a 5 volume tech manual on how to do pervasive monitoring under TLS1.3, no matter their stated justification, is antithetical to their purported mission.

                • tptacek 29 days ago
                  It is tinfoil hat, and you mean "Dual EC", not "DRBG" --- "DRBG" is just the NIST term of art for "random number generator".
  • robertlagrant 30 days ago
    I'm really hoping that by the time the Enterprise is out there exploring, we won't still be on TLS 1.3 at all.
  • nonrandomstring 30 days ago
    This is pure tragedy right here.

    "Addressing Visibility Challenges" is a masterpiece of bloviating Orwellian sophistry to describe the bare, brute contradiction now at the heart of "Enterprise": that we want confidentiality but we don't want confidentiality.

    It's encrypted so you don't get visibility. End of. Sort out your trust models. Sort out your endpoints. Sort out the principles behind your endpoints (viz. loyalty, training, etc.).

    The solution: a "five volume" guide to how you can have something and not have it at the same time. This is an industry openly at war with itself.

  • betaby 30 days ago
    My ${DAY_JOB} simply MITMs all traffic from the laptop through Netskope. At this point I don't even search work-related topics on the web from the corporate laptop. Self-inflicted, self-invented compliance that goes way beyond laws and regulations has gone too far in the enterprise world.
    • unethical_ban 30 days ago
      Inspecting network traffic is not a self invented regulation.

      MITM of network traffic historically was the easiest way to monitor what goes in and out of ones network. It's still pretty easy. It's a corporate resource, the ethics aren't that bad.

      People say to inspect the endpoint. I'm simply not sure the technology is there to inspect data destined to leave an endpoint in clear text. The next step would be for apps to encrypt data before they let the operating system know they want to send data outbound.

      Then the next step is to only allow applications that comply with some sort of framework for content inspection prior to sending stuff over the network. I don't know if there's anything like that currently.

      • betaby 30 days ago
        > Inspecting network traffic is not a self invented regulation.

        I work for a telecom company registered in NJ. What law says the traffic of web-developer employees should be intercepted?

        • unethical_ban 30 days ago
          Ah, in this instance, I was thinking of finance, and other industries, where network inspection is required by regulators.

          Perhaps if a network were highly segmented, one could find a way to get away from intercepting all employees. Anyone with access to business data, though? That's the way it is.

      • NicolaiS 30 days ago
        Corporate MITM'ing is always a bad practice; it breaks a lot of TLS (e.g. mTLS) and can't be implemented in a way that will not break legitimate workflows (e.g. cert pinning rejecting an untrusted leaf vs the middlebox trusting everything and re-signing with a 'real' cert)
      • betaby 30 days ago
        All of those are the case today. `curl` on a corporate laptop is intercepted and blocked by CrowdStrike, for example.
  • beeboobaa3 30 days ago
    > Industries such as finance and health care need to monitor incoming internet data for evidence of malware and insider cyberattacks.

    Citation needed. Why do they need to compromise network security instead of doing it on the endpoints? Because they paid for expensive, crappy, software that can't do that? Sounds like their problem.

    > The latest internet security protocol, known as TLS 1.3, makes it more challenging to comply with these requirements while maintaining web traffic security.

    Good! This is important. Far more important than some orgs needing to buy new software licenses.

    • ngrilly 30 days ago
      > Citation needed.

      I'm wondering as well. My guess: these practices are not mandated by law, but by industry "standards" that are part of contractual agreements, required by a customer or an insurance company, for example. Some of those standards are good; some are a pure product of bad bureaucracy...

    • Avamander 30 days ago
      Neither industry being known for their upstanding or basic security.
  • josephcsible 30 days ago
    IMO such "required audits" should be repealed if they require MITM or persisting ephemeral keys. We shouldn't be mandating lessened security of our most important and sensitive systems.
    • unethical_ban 30 days ago
      How do you propose businesses keep malware and C2 off their network? You'd have corporate endpoints secretly communicating with any old DNS or SSH or web upload endpoint?
      • acdha 30 days ago
        Egress controls and other network boundaries are doing the work there, not MITM. If I can connect to a remote server, I can encrypt my payload before sending it, too. This is a really hard battle to win - you need to store tons of data, have robust analysis systems, rooms full of analysts, etc. before you’re going to be able to tell that, say, the random looking cookie sent to an ad server-like hostname is actually encrypted data, that my Zoom video stream wasn’t company data, or that the “ad” was a control message.

        That last is one of the reasons why I think enterprise ad blocking is an important security measure, and a likely outcome for sensitive jobs will be separating sessions - e.g. if you have general purpose browsing happening on a separate computer, some kind of remote session, etc. you will have a much easier time being able to restrict the network connectivity of the system with more sensitive data.
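A minimal sketch of the pre-encryption point above: if an attacker can make any outbound connection at all, they can encrypt first, so a decrypting middlebox only ever inspects random-looking bytes. (Toy SHA-256 counter-mode keystream, illustration only; a real attacker would use any off-the-shelf cipher.)

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a SHA-256 counter-mode keystream (toy cipher)."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

attacker_key = os.urandom(32)          # agreed out of band, never on the wire
secret = b"customer database dump..."
on_the_wire = keystream_xor(attacker_key, secret)

# The TLS-terminating proxy strips its layer and inspects only this blob:
assert secret not in on_the_wire
# ...while the receiving end recovers the payload trivially:
assert keystream_xor(attacker_key, on_the_wire) == secret
```

This is why MITM inspection catches commodity malware that doesn't bother with an inner layer, but not a motivated exfiltrator, and why egress restriction and endpoint controls carry the real weight.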

      • mavhc 30 days ago
        How will intercepting traffic stop that? They can just use another layer of encryption.

        Better to monitor all devices for unusual network behaviour, and monitor the endpoints themselves with antivirus.

        • unethical_ban 30 days ago
          I think it's unacceptable for a business to be told "It's literally impossible to know what is being communicated outbound from your endpoint. We can only do heuristics."
          • Avamander 30 days ago
            If you don't control the endpoint then you can cry about it.
            • unethical_ban 30 days ago
              Hot take. I mean, with TLS decryption, the company does control the endpoint, or at least what the endpoint trusts on a network layer. But people here are crying about that.
  • throwaway458864 30 days ago
    It's unfortunate that the development of the web has had an adversarial nature. There's been a war between those individuals who prize privacy, and organizations that want functionality.

    The law requires certain things. If your protocol doesn't account for those things, then your protocol will be broken to bend to the law's will. It would often be much better to have some small compromise in privacy, rather than lose it all. "All or nothing" has some extreme outcomes.

    Yes, some people do want privacy at all costs. But what about the rest of us? We send postal mail in envelopes and leave them sitting in boxes open to the street. Our phone calls traverse networks unencrypted and can be overheard nearby. Our credit cards and secret PINs can be entered at public terminals that enable stealing. Our laptops sit at home or work and can be broken into and memory-dumped for encryption keys. In practice, 99% of us are completely fine with an acceptable risk of a possible loss of privacy. We help bolster this with laws and punishments should someone violate our privacy. But what we don't do is engineer our lives as if we're all spies hiding from an execution.

    There are practical changes that could be made to allow for better functionality, whilst not having absolute privacy at every conceivable technical level, but still more than enough privacy that what we care about most is still reasonably private. Then there might be enough mild privacy lost to enable organizations the functionality they need, and we would lose less to the "all or nothing" consequences.

    The thing is, there is an extremely small number of people who have the privilege and power to change things, because they're in the room and we're not. Like the generals carving up Africa because they happen to be in the right room. Personally, I think these decisions have fallen to a few people in a room for far too long. I think we should have public, wide ranging discussions about the nature of how we build the underpinnings of our world. If we don't, the consequences could be more "all-or-nothing" that ends up harming more than otherwise.

    • tialaramex 30 days ago
      > The thing is, there is an extremely small number of people who have the privilege and power to change things, because they're in the room and we're not.

      Which rooms? In a lot of cases the situation is that you didn't bother to show up. Not always, but probably more often than you realise.

      The IETF Working Group where TLS 1.3 was designed for example is just an IETF activity. You can literally just do that, it's actually probably harder to participate in Hacker News.

      The "Root Trust Stores" are notionally controlled by a handful of tech businesses: Google, Apple, Microsoft. But wait, Mozilla also controls one of these "Root Trust Stores", for Firefox and in practice for the Free Unix systems and most Free Software. And since the others decide behind closed doors, we don't know how Google, Apple and Microsoft decide what to do (maybe they each have a thousand smart people deciding), but it sure does seem like they watch what Mozilla does and largely do the same thing. And how does Mozilla decide? An entirely public discussion, m.d.s.policy. You could participate in that discussion today.