Internet protocols are changing

(blog.apnic.net)

458 points | by nmjohn 2320 days ago

19 comments

  • jandrese 2320 days ago
    > When a protocol can’t evolve because deployments ‘freeze’ its extensibility points, we say it has ossified. TCP itself is a severe example of ossification; so many middleboxes do so many things to TCP — whether it’s blocking packets with TCP options that aren’t recognized, or ‘optimizing’ congestion control.

    > It’s necessary to prevent ossification, to ensure that protocols can evolve to meet the needs of the Internet in the future; otherwise, it would be a ‘tragedy of the commons’ where the actions of some individual networks — although well-intended — would affect the health of the Internet overall.

    On the other hand, I've done a fair bit of work getting TCP-based applications to behave properly over high-latency, high-congestion links (usually satellite or radio), and QUIC makes me nervous. In the old days you could put a TCP proxy like SCPS in there and most apps would get an acceptable level of performance, but now I'm not so sure. It seems like everybody assumes you're on a big fat broadband pipe now and nobody else matters.
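
    For what it's worth, here's a minimal sketch of the kind of endpoint-side tuning involved, with purely illustrative numbers; the bigger wins from an SCPS-style proxy happen in the middle of the path and aren't captured here:

        import socket

        # Illustrative only: on a high-BDP satellite link, one endpoint-side
        # knob is growing the socket buffers so the TCP window can cover the
        # bandwidth-delay product. The figure is made up (~40 Mbit/s * ~1 s RTT).
        BDP_BYTES = 5_000_000

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP_BYTES)
        sock.connect(("198.51.100.20", 443))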

    • cjhanks 2320 days ago
      I significantly benefit from QUIC. My home network is exceptionally lossy... and has exceptionally high latency. ICMP pings range from 500ms (at best) to 10 seconds (at worst), with an average somewhere in the 1-2 second range. Additionally, I am QoS-ed by some intermediary routers which appear to have a really bad (or busy) packet scheduler.

      Often, Google sites serving via QUIC are the only sites I can load. I can load HTML5 YouTube videos despite not being able to open the linkedin.com homepage. Stability for loading HTTP over QUIC in my experience is very comparable to loading HTTP over OpenVPN (using UDP) with a larger buffer.

    • peterwwillis 2320 days ago
      > It seems like everybody assumes you're on a big fat broadband pipe now and nobody else matters.

      This is intentional. The powers that be have an interest in moving everyone to faster networks, and they effectively control all new web standards, and so build their protocols to force the apps to require faster, bigger pipes. This way they are never to blame for the new requirements, yet they get the intended benefits of fatter pipes: the ability to shove more crap down your throat.

      It's possible to cache static binary assets using encrypted connections, but I am not aware of a single RFC that seriously suggests its adoption. It is also to the advantage of the powers that be (who have the financial means to do this) to simply move content distribution closer to the users. As the powers that be don't provide internet services over satellite or microwave, they do not consider them when defining the standards.

      • sarah180 2320 days ago
        There's a technical reason for this. The Internet2 project spent a lot of time and effort working on things like prioritized traffic to deal with congested links. They found that it was easier and more cost effective to just add more bandwidth than it was to design and roll out protocols that would deal with a lack of bandwidth.

        For more info, read this: https://www.webcitation.org/5shCiXna8

        A noteworthy quote:

        > In those few places where network upgrades are not practical, QoS deployment is usually even harder (due to the high cost of QoS-capable routers and clueful network engineers). In the very few cases where the demand, the money, and the clue are present, but the bandwidth is lacking, ad hoc approaches that prioritize one or two important applications over a single congested link often work well. In this climate, the case for a global, interdomain Premium service is dubious.

        "Premium service" in this document refers, basically, to an upgraded Internet with additional rules to provide quality of service for congested links.

        (I'm not personally claiming that all these conclusions are correct and that they still apply today, just that there's some backstory here.)

        • wmf 2320 days ago
          I think more bandwidth is better in every case except geostationary satellites due to their unavoidable latency. And in theory those satellites are going to be obsoleted by LEO ISPs.
      • jjrh 2320 days ago
        Not sure who "The powers that be" are, but anyone can propose and contribute to IETF standards. They are called "Request for Comment" for a reason.
        • ocdtrekkie 2320 days ago
          Almost certainly that answer is Google. Notice they are behind several of the new protocols here (HTTP/2 and QUIC), and are used as an example of how bundling DOH with an existing major player can prevent DNS blocking.

          Google is effectively the actual determiner of Internet standards. As the article notes, Google implemented QUIC on their servers and their browsers, and therefore, 7% of Internet traffic is already QUIC-based, despite it not being officially accepted at this point. This is essentially the same as what happened with SPDY at the time.

          Since Google controls both the primary source of Internet traffic (up to 35-40% of all Internet traffic, depending on who you ask) and a browser share somewhere around 65%, it can implement any new protocol it wants, and everyone else needs to support it or be left out.

          Arguably, the IETF is no longer the controlling organization here: Google is. Should the IETF not approve Google's proposals, Google will continue to use them, and everyone else will continue to need to support them.

          As the parent notes, Google both essentially determines these standards and has an interest in faster networks so they can shove fatter payloads down the pipe. This is in part due to the ability to implement pervasive tracking, and of course they operate a lot of content distribution products like YouTube.

          Note that every method here that makes it harder for governments to censor and ISPs to prioritize also makes it harder for people to detect, inspect, and filter out Google's pervasive surveillance.

          • rtpg 2320 days ago
            For sure, the "we've implemented this in Chrome already" aspect is a huge factor in the standards process.

            I do think it's important to say that this isn't a sure-fire deal. We had stuff like NaCl (native code in the browser) that basically died off, and other options as well.

            Conversely, SPDY is not what the standard became; Google instead pulled what it learned into HTTP/2. This seems like a positive aspect to me.

            It's important to be wary of things that Google won't bring up, but overall I feel like we're getting a lot of benefit from having an implementer "beta test" stuff in this way.

          • orf 2320 days ago
            On the upside they provide real world prototyping and testing of protocols at a scale that's never before been available. Standards they submit for consideration will have had a lot of real world usage to iron out kinks and be battle tested. This is a good thing, IMO.
          • jacksmith21006 2319 days ago
            I don't think it's Google's fault that they run so much of the Internet and also have most of the browser and mobile phone share.

            Google using the standards process to release new things is exactly what we should want to see. You make it sound like some nefarious behavior. Geez.

            Google has offered things for standardization that were changed by the standards group, and Google adopted the changes.

            Google could just keep it all closed if they wanted. Honestly, with posts like yours, I wonder if they won't in the future. Why bother?

          • kelnos 2319 days ago
            I'm very much torn on this phenomenon, and it depends on an org like Google being more or less benevolent when pushing new protocols and tech.

            Designing a new protocol to be used at internet scale is really hard. Having the ability to test that on a significant amount of traffic before refining and standardizing is a huge advantage that few groups on the planet are able to make use of. From the IETF's perspective, I would look skeptically upon a new standard that someone dreamed up but had little to no practical real-world data to show how it behaves in practice.

            But I also want them to avoid ramming standards down the IETF's throat. If, after due consideration, the IETF says "no" to a new standard, I want Google to stand down and abandon it. If the IETF thinks it's a good idea and is willing to work with Google to iron out issues and standardize in a public manner, then that's the ideal outcome.

          • ohazi 2320 days ago
            While everything you state here is correct, and is clearly of serious concern, my view is that the general public doesn't really have the capacity to fight both of these battles at once.

            Right now, I'm convinced we need Google's help to make it harder for governments and ISPs to censor and prioritize.

            After that, we'll deal with Google.

            • taavi013 2320 days ago
              As of today, Google with its BBR + QUIC + Chrome combo, and Facebook with their Zero protocol, combined with their extensive data center networks, can deliver a much better end-user experience than any other service provider, regardless of any government.

              We can talk about network neutrality all we want - but these companies have built the technological capability to take a "little bit bigger share".

              So after some time it will probably be too late to deal with Google :)

              • discoursism 2320 days ago
                > As of today, Google with its BBR + QUIC + Chrome combo, and Facebook with their Zero protocol, combined with their extensive data center networks, can deliver a much better end-user experience than any other service provider, regardless of any government.

                What is your preferred course here? Google has provided free as in freedom source code for all of the above, and in some cases pushed the code upstream. Granted, it will take small players longer to see the benefits of these techs, since they can't afford to integrate it themselves and must wait for vendors to include it. But small players usually have worse user experience regardless of the mechanism. Proposed standards are just one way that happens.

            • ocdtrekkie 2320 days ago
              I don't understand why people are more afraid of regional monopolies which have only a portion of one country as their scope, than of a global operation like Google.

              I think it's far more important to deal with Google than hand them the keys to the kingdom while fighting smaller fish.

            • marcosdumay 2320 days ago
              Corporations and governments are not two entities you can divide in order to conquer.
              • ohazi 2320 days ago
                I agree with you, and it feels dirty, but we're going to end up with neither if we don't start with one. If you have a better idea that can even plausibly attempt to get both, I'm all ears.
                • marcosdumay 2319 days ago
                  I have no better idea, but it's a problematic plan. If you go with this, you will be surprised by underground deals that will stop that division, and will very likely mislead you.

                  The good part is that it focuses on fixing the government. That's the correct entity to fix.

      • cmurf 2320 days ago
        Content creators make this assumption far more than any other party. Webpage bloat passed "obscene" several years ago; new words are needed to describe it today.
      • hellbanner 2320 days ago
        Who are the powers that be that benefit from fatter pipes + content distribution?
      • srj 2320 days ago
        What about these new protocols is antagonistic towards satellite/microwave endpoints? If anything I'd think the 0-rtt support would help.
      • benjaminl 2320 days ago
        Then you will be greatly relieved to hear that as of 2010 all major browsers cache content delivered over HTTPS.

        https://stackoverflow.com/a/174485/262789
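
        For the client-side case, cacheability over HTTPS is driven by the same response headers as plain HTTP; a quick sketch of checking what a server allows (standard library only, example host):

            import urllib.request

            # Browsers decide whether to cache an HTTPS response from headers
            # like Cache-Control and Expires, exactly as for plain HTTP.
            resp = urllib.request.urlopen("https://example.com/")
            print(resp.headers.get("Cache-Control"), resp.headers.get("Expires"))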

        • XR0CSWV3h3kZWg 2320 days ago
          I assume they were referring to intermediate caching, not client side caching.
          • benjaminl 2320 days ago
            If that is the case, then the GGP is asking for something that would be a huge privacy leak.
      • osrec 2320 days ago
        I don't understand the down votes on your comment. I think a lot of what you're saying makes sense.
    • sliverstorm 2320 days ago
      You see the same thing in simple data consumption patterns. It's become normal for your average app to suck down tens or even hundreds of megabytes a month, even if it barely does anything.

      It's so normalized I figured there wasn't a whole lot I could do about it, until I noticed Nine syncing my Exchange inbox from scratch for something like 3MB. Then I noticed Telegram had used 3MB for a month's regular use, while Hangouts had used 10MB for five minutes of use.

      Despite living in the first world, I'm kind of excited for Android's Building for Billions. There's so much flagrant waste of traffic today, assuming as you say that you have a big fat broadband pipe, with no thought to tightly metered, high latency, or degraded connections.

      (I switched to an inexpensive mobile plan with 500MB, you see)

    • himom 2320 days ago
      There’s little real impetus to change widely-used protocols. Job security, product/support sales and developer “consumerism” novelty aren’t valid reasons, but often get pushed with spurious “reasons” to fulfill agendas, despite their cost.

      The only demonstrable needs are bug fixes and significant advances, because inventing and imposing complexity on all implementations, or damaging backwards compatibility, is insanely costly in terms of retooling, redeployment, customer interruptions, advertising, learning curve, interop bugs and attack surface.

    • Jedd 2320 days ago
      I work in the app / WAN optimisation space, and periodically do some work with SCPS -- I'm guessing you're referring to the tweaks around buffers / BDP, aggressiveness around error recovery and congestion response?

      I think there'll be a few answers to the problem: first, it'll be slow to impact many of the kinds of users currently using satellite; second, for web apps it's either internal (so they can define the transport) or external (in which case they are, or can be, using a proxy and continuing TCP/HTTP over the sat links).

      Later on I expect we'll get gateways (in the traditional sense) to sit near the dish ... though I also would expect that on that timeframe you'll be seeing improvements in application delivery.

      Ultimately I hope - hubris, perhaps - that the underlying problem (most of our current application performance issues exist because apps have been written by people who either don't understand networks, or have a very specific, narrow set of expectations) will be resolved. (Wrangling CIFS over satellite, for example.)

    • adrianmonk 2320 days ago
      To what extent are the TCP problems you can solve by tweaking manually with a proxy the same problems that QUIC solves automatically? If there's a big overlap, it may not be a real problem.

      For example, you mention latency, but QUIC is supposed to remove unnecessary ordering constraints, which could eliminate round trips and help with latency.

  • shalabhc 2320 days ago
    Another interesting protocol, perhaps underused, is SCTP. It fixes many issues with TCP; in particular, it has reliable datagrams with multiplexing while avoiding head-of-line blocking. I believe QUIC is supposed to be faster at connection (re)establishment.

    https://en.wikipedia.org/wiki/Stream_Control_Transmission_Pr...
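
    For the curious, a minimal sketch of what kernel SCTP looks like on Linux; socket.IPPROTO_SCTP is only exposed where the OS supports it, and multi-streaming needs extra setup not shown here:

        import socket

        # One-to-one style SCTP association, the closest analogue to a TCP
        # socket. Host and port are placeholders.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             socket.IPPROTO_SCTP)
        sock.connect(("192.0.2.10", 5000))
        sock.sendall(b"hello over SCTP")
        sock.close()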

    • dragontamer 2320 days ago
      SCTP is a superior protocol, but it isn't implemented in many routers or firewalls. As long as Comcast / Verizon routers don't support it, no one will use it.

      It may be built on top of IP, but the TCP / UDP layers are important for NAT and such. Too few people use DMZ and other features of routers / firewalls. It's way easier to just put up with TCP / UDP issues to stay compatible with most home setups.

      • nfoz 2320 days ago
        Why do the routers involve themselves at the transport layer? Can't they just route IP packets and leave the transport alone?

        Firewalls -- whose firewalls are we talking about here? If a client (say, home user) tries to initiate an SCTP connection to a server somewhere, what step will fail?

        • zAy0LfpBZLC8mAC 2319 days ago
          > Why do the routers involve themselves at the transport layer? Can't they just route IP packets and leave the transport alone?

          Because they have to do NAT, at least on IPv4.

          > Firewalls -- whose firewalls are we talking about here? If a client (say, home user) tries to initiate an SCTP connection to a server somewhere, what step will fail?

          Because the connection tracking that's needed to even recognize whether a packet belongs to an outbound or an inbound connection needs to understand the protocol.

        • 0xcde4c3db 2320 days ago
          I haven't tried it and am not terribly familiar with SCTP, but from skimming a few references I suspect it would fail when the NAT logic decides it's never heard of protocol number 132 and drops the incoming INIT_ACK on the floor.
      • xxpor 2320 days ago
        You can tunnel SCTP on top of UDP. Port 9899.

        https://www.ietf.org/proceedings/48/I-D/sigtran-sctptunnel-0...

        • tptacek 2320 days ago
          This is an improvement --- it was dumb of SCTP to try to claim a top-level IP protocol for this --- but only marginally, since lots of firewalls won't pass traffic on random UDP ports either.
          • nfoz 2320 days ago
            Why was that dumb of SCTP? What should it have done instead?
            • tptacek 2319 days ago
              Used a UDP port. Making new IP protocols is literally what UDP is for.
              • JoshTriplett 2319 days ago
                Other than getting through broken routers, what's the advantage of building a protocol on top of UDP rather than IP?

                ("Getting through broken routers" is certainly a significant advantage; I'm just asking if that's the only good reason to not build directly on IP.)

                • tptacek 2319 days ago
                  The only "advantage" to declaring an IP protocol is that you might save 8 bytes and a trivial checksum. Mostly, though, declaring an IP protocol is a vanity decision.
                  • JoshTriplett 2319 days ago
                    I'm wondering the reverse: what's the advantage to building on UDP, other than passing routers that for some reason are inclined to pass UDP but reject IP protocols they don't know? You said that this was "what UDP was for", and I was hoping for some more detail there on why UDP helps. As you said, it's just 8 bytes and a trivial checksum.
                    • tptacek 2319 days ago
                      The advantage to building on UDP is that it gets through (some) middleboxes; that's what the 4 bytes of port numbers buys you. That, and the fact that UDP is designed to be the TCP/IP escape hatch for things like SCTP that don't want TCP's stream and congestion control semantics.
                      • catern 2319 days ago
                        So there's no advantage other than getting through broken middleboxes?
                        • tptacek 2319 days ago
                          No, another advantage is that you can do UDP entirely in userland without privilege.

                          What allocating a new IP protocol says is pretty close to "all bets are off, and we're carefully taking responsibility for how every system that interacts with TCP/IP headers will handle these packets". Since SCTP doesn't need that, there's no upside. It's vanity and bloody-mindedness.
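
                          A rough illustration of that difference (addresses and the protocol number are just examples):

                              import socket

                              # Any unprivileged user can speak a custom protocol over UDP.
                              udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                              udp.sendto(b"custom payload", ("198.51.100.7", 9899))

                              # A new top-level IP protocol (e.g. SCTP's 132) needs a raw
                              # socket instead, which requires root / CAP_NET_RAW.
                              try:
                                  raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, 132)
                              except PermissionError:
                                  print("raw IP protocol sockets need elevated privileges")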

          • xxpor 2320 days ago
            Very true. I'd of course run tests, but I would guess port 80 would work these days because of QUIC. Even port 53 is probably locked down to whitelisted hosts.
          • gsich 2320 days ago
            Private ones usually do. (outgoing)
            • tptacek 2320 days ago
              Corporate ones sure don't. Even outgoing HTTP is passed through proxies on most bigco networks I've worked at.
              • gsich 2320 days ago
                Yes. But I don't think that corporate firewalls matter when it comes to user adoption. If you need to work with SCTP in your company, you'll get your firewall rule; if not, you don't.
              • vbezhenar 2320 days ago
                If they want to only allow proxied HTTP, that's their decision, and developers should respect that and not mask everything under HTTP. Their administrators made a substantial effort to forbid everything; why would an honest person try to overcome their effort?

                Home routers are another matter; home users don't make a conscious decision about it, but in my experience UDP works just fine (because a lot of games use it) and it should be good enough for any protocol.

                • slrz 2319 days ago
                  Because in many cases, those administrators don't even realize they're breaking the network. They just run a botched config that they inherited from their predecessors or something like that.

                  You might say that they're clueless and shouldn't be allowed near networking equipment (and you might even be right) but it's not going to change a thing. For the foreseeable future, working around broken environments is all we can do.

                  • gsich 2319 days ago
                    It's not. Talk to them. If you really need port XYZ open because your task requires that, you'll get the port open.
      • pfranz 2320 days ago
        To your point, IPv6 has been around for 20 years, we've known the whole time that we're running out of IPv4 addresses, and adoption is still around 20%.

        However, the high turnover for mobile phones has allowed more aggressive changes to the networking stack. Perhaps this, in addition to IPv6, would make something like SCTP easier to adopt widely?

        • 24gttghh 2320 days ago
          "Still" seems a bit disingenuous when considering the current trajectory IPv6 adoption is on. [0] Yah it's happening slowly, but it does seem to be pushing ahead.

          [0] https://www.google.com/intl/en/ipv6/statistics.html

          • hinkley 2320 days ago
            That graph has a couple of doglegs that make it look exponential.

            The last dogleg was January 2015. And since then it’s been linear (with a little stall this month) at about 5% of the Internet converting per year. That’s another 15 years to convert the rest, unless there’s a new dogleg up.

            Also, percentages don’t work the way humans think they do, especially when the number of devices is constantly climbing. That may just indicate that some fraction of new hardware is IPv6-capable but little old hardware is being updated.

            We may well have ~2 billion machines on IPv4 pretty much indefinitely, slowly being diluted by the addition of new hardware.

            • pfranz 2320 days ago
              Agreed. I say IPv6 is inevitable, but it's not like the IPv4 addresses will expire.

              The economics going forward will be interesting. I've seen some low-end VPSes charge a lower rate for machines that are IPv6-only.

              • mavendependency 2320 days ago
                Cloudflare-enabled domains can tunnel IPv6-only hosts for IPv4 clients.
          • JBiserkov 2320 days ago
            Offtopic, but did you notice the 3-4% spikes on Saturdays? And the dips on workdays.

            I assume workplaces have a lower adoption rate due to enterprise inertia. Or is there another explanation?

            • pfranz 2320 days ago
              You're absolutely right. I've read in other articles that IPv6 jumps up on weekends. IPv6 adoption is incredibly skewed. It's much higher in things like cellular networks and in developing countries. I think the weekend jump is attributed to cellphones (not sure if I'm connecting the dots or if that's what the consensus is).
          • pfranz 2320 days ago
            How is that disingenuous? I don't think it matters what the curve looks like when it spans 10 years and ends up at 20%. There were implementations released 10 years previous to the graph's start. I wasn't implying it would never happen, just that even for something that's inevitable like IPv6, adoption is glacial.
            • morsch 2320 days ago
              I think disingenuous is the wrong word; but the curve does matter: 20% in 10 years indicates 100% in 50 years if the curve's a line, but it's not.

              I dove into the world of curve fitting (wee!) and my prediction[1] for 95% IPv6 adoption is around the year 2025: https://imgur.com/a/LyBJn (fitted to the logistic curve[2], x=0 is basically 2010, y is percent adoption)

              [1] Which you should completely trust because I've been doing this for all of 20 minutes!

              [2] https://en.wikipedia.org/wiki/Logistic_function#In_economics...
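
              Roughly what that kind of fit looks like with scipy, using made-up yearly adoption percentages as stand-ins for the Google statistics:

                  import numpy as np
                  from scipy.optimize import curve_fit

                  def logistic(t, k, t0):
                      # Logistic curve with the ceiling pinned at 100% adoption.
                      return 100.0 / (1.0 + np.exp(-k * (t - t0)))

                  t = np.arange(8, dtype=float)  # x=0 is roughly 2010
                  y = np.array([0.2, 0.4, 0.9, 2.0, 4.5, 9.0, 15.0, 20.0])  # made-up %

                  (k, t0), _ = curve_fit(logistic, t, y, p0=[0.5, 8.0])

                  # Invert the curve to find when it crosses 95%.
                  t95 = t0 - np.log(100.0 / 95.0 - 1.0) / k
                  print(f"95% adoption around year {2010 + t95:.0f}")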

              • pfranz 2320 days ago
                Thanks for taking the time to do that!

                Let's say 20% adoption means we're 40% of the way through the transition. The slope for IPv6 looks better and better every day, but overall it's not a great adoption story for tech that inevitably has to happen. (Not to discourage or minimize all the hard work done in getting IPv6 this far.)

      • X86BSD 2320 days ago
        Which is frankly horrifying considering the reference implementation was released in FreeBSD 7. That really ought to scare people away from purchasing any of those routers/firewalls that don’t support it.
        • kelnos 2319 days ago
          It's not a matter of time in the wild, it's a matter of adoption and cost priorities. Until the past several years, most SoHo routers were super constrained wrt CPU, memory, and ROM, so adding support for a new transport layer protocol would involve an unacceptable cost increase in what has rapidly turned into a race-to-the-bottom commodity industry.

          So you end up with a chicken-and-egg problem: router manufacturers aren't going to add support for it unless there's sufficient demand, and there can't be sufficient demand because very few people can use and rely on it.

    • IshKebab 2320 days ago
      Interestingly SCTP is used for WebRTC's data channel. Emscripten uses it to emulate UDP.
  • ilaksh 2320 days ago
    It seems that widely deploying TLS 1.3 and DOH can provide an effective technical end-around the dismantling of net neutrality. So we should be promoting and trying to deploy them as widely as possible.

    Of course, they can still block or throttle by IP, so the next step is to increase deployment of content-centric networking systems.
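
    As a concrete illustration of the DOH side, here's a minimal lookup sketch against Google's JSON resolver endpoint (the endpoint and parameters are assumed from their public DNS-over-HTTPS service):

        import json, urllib.parse, urllib.request

        def doh_lookup(name, rrtype="A"):
            # Resolve over HTTPS (port 443) instead of classic UDP/53 DNS,
            # so the query looks like ordinary web traffic to middleboxes.
            qs = urllib.parse.urlencode({"name": name, "type": rrtype})
            with urllib.request.urlopen("https://dns.google.com/resolve?" + qs) as r:
                return json.load(r)

        print(doh_lookup("example.com"))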

    • topspin 2320 days ago
      It seems to me that all of the changes described in this story will contribute to thwarting intermediaries and their agendas. HTTP/2 and its "effective" encryption requirement are proof against things like Comcast's nasty JavaScript injection[1]. QUIC has mandatory encryption all the way down; even ACKs are encrypted, obviating some of the traditional throttling techniques. And as you say TLS 1.3 and DOH further protect traffic from analysis and manipulation by middlemen.

      Perhaps our best weapon against Internet rent seekers and spooks is technical innovation.

      It is astonishing to me that Google can invent QUIC, deploy it on their network+Chrome and boom! 7% of all Internet traffic is QUIC.

      Traditional HTTP over TCP and traditional DNS are becoming a ghetto protocol stack; analysis of such traffic is sold to who knows whom, the content is subject to manipulation by your ISP, throttling is trivial and likely to become commonplace with Ajit Pai et al. Time to pull the plug on these grifters and protect all the traffic.

      [1] https://news.ycombinator.com/item?id=15890551

      • zb3 2320 days ago
        But I, as a user, want to be able to block domains, inject scripts and see what Chrome is sending to Google on my own devices (which is what Google doesn't want me to do). That's why I can't support these protocols...
        • JoshTriplett 2320 days ago
          You, as a user, absolutely can. An ISP or network administrator who does not control the endpoints, on the other hand, cannot, by design. That's a feature.
          • fiddlerwoaroof 2320 days ago
            What if I want to use my router to block telemetry domains? Or other malware sites? It’s looking like the only way forward is running my own CA to mitm all encrypted traffic.
            • JoshTriplett 2320 days ago
              > It’s looking like the only way forward is running my own CA to mitm all encrypted traffic.

              Correct. Middleboxes should be presumed hostile; if you control the endpoints you can install a MITM CA, but it's safer to put what you want directly on the endpoint.

            • zb3 2319 days ago
              Which will fail if apps check public keys manually, and is also not very efficient. I think we'll need to patch applications directly, but the good news is that since many people will need this, those patches will probably be developed.
            • slrz 2319 days ago
              What? You control the endpoint. No need to MITM when you can just make your browser do what you want.
              • fiddlerwoaroof 2319 days ago
                Then I only solve the issue on my browser: what about my phone, and all the random other programs that phone home?
            • aoeusnth1 2320 days ago
              That seems superior anyway - you could keep blocking domains even when you're on the go.
              • kuschku 2320 days ago
                Can I? On Android, apps now can decide if they want to accept user-installed CAs, or not.

                So if an app is hostile (say, all the Google apps), then I have no way to intercept their traffic anymore.

                • pas 2320 days ago
                  You can decide to install the app or not.

                  And you can put your CA into the system CA store if you have root. (You can build your own Android image, so technically the requirement is an unlocked - or unlockable - bootloader.)

                  • kuschku 2320 days ago
                    Unlocking the bootloader makes the device permanently fail the strictest SafetyNet category.

                    Apps can and will refuse to run in that situation.

                    Modifying /system will make every SafetyNet check fail; as a result, Netflix, Snapchat, Google Play Movies, and most banking apps will refuse to run.

                    I can decide to install the app or not? How do I go about replacing Google's system apps with my own, without preventing the above-mentioned apps from running? I can't. And I can't buy reasonable devices without Google Android, due to the non-compete clause in the OEM contracts.

                    • pas 2319 days ago
                      Then don't buy it. Or support efforts like Librem (and LineageOS and Magisk).

                      http://www.androidpolice.com/2017/07/16/safetynet-can-detect...

                      You can walk into your bank and access the services. Or call them. Or use their browser based service, right?

                      Google and a lot of developers made the choice to restrict user freedom for more security.

                      I don't agree with it, but it is what it is. A trade-off.

                      Of course, you can sign your own images and put the CA into the recovery DB and relock the bootloader on reasonable devices. ( https://mjg59.dreamwidth.org/31765.html )

                      Or at least you used to be able to.

                • zb3 2319 days ago
                  You can patch the app using apktool/smali or even use JTM[0], but I prefer just blocking their traffic using iptables.

                  [0] - https://github.com/Fuzion24/JustTrustMe

                  • kuschku 2319 days ago
                    That also invalidates some SafetyNet verification, unless I abuse the new dex/apk signature verification vulnerability.

                    Either the user controls the program, or the program controls the user.

              • vbezhenar 2320 days ago
                Easier approach is to use your own DNS server and blacklist those domains.
        • jacksmith21006 2319 days ago
          Incorrect. You can on your end, but Google stops the people in the middle, which is what we want.
          • zb3 2319 days ago
            How can I decrypt QUIC traffic then? Do Wireshark and mitmproxy support QUIC? Will DOH respect /etc/hosts files?
            • slrz 2319 days ago
              Patch your program to dump the keys or traffic?
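
              For the keys specifically, no patch is even needed: Chrome and Firefox honour the SSLKEYLOGFILE environment variable and write TLS session secrets there, which Wireshark can load to decrypt captures (tooling support for QUIC itself lags behind plain TLS). A sketch, with the path and browser binary name assumed:

                  import os, subprocess

                  # Launch the browser with TLS key logging enabled, then point
                  # Wireshark's "(Pre)-Master-Secret log filename" preference at
                  # the same file.
                  env = dict(os.environ, SSLKEYLOGFILE="/tmp/tls-keys.log")
                  subprocess.run(["google-chrome", "https://www.google.com/"], env=env)
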
              • zb3 2319 days ago
                I know that I could edit the source code and recompile the program. I know I could disassemble the binary, find the addresses of functions and then use things like uprobes to dump/modify registers/memory. I know that in theory I could write my own version of mitmproxy that supports QUIC. But I don't have time to do all that, and that's why I speak against those protocols (which changes nothing anyway).
    • ori_b 2320 days ago
      > It seems that widely deploying TLS 1.3 and DOH can provide an effective technical end-around the dismantling of net neutrality.

      If you don't think about it, it may seem that way. But until everyone sends all their data over tor, or some other system that obscures which IP you're trying to get to, it's still easy to filter.

      There's (within epsilon of) zero motion I've seen towards obscuring IP addresses, for good reason.

    • jstanley 2320 days ago
      By content-centric networking systems do you mean IPFS et al?
      • ilaksh 2320 days ago
        Yes, IPFS, but actually there are a huge number of perhaps lesser-known projects that do all sorts of things with content-oriented networking. It's been a pretty big research topic.
    • lathiat 2320 days ago
      Unfortunately, not really; net neutrality mostly centers on the somewhat bigger services, which in most cases will have at least one of a dedicated AS number, dedicated IP ranges, or dedicated physical network links whose capacity can be limited. That is traditionally how the game has been played.

      Think Netflix/Comcast... no hiding what that traffic is.

  • gumby 2320 days ago
    Let's just hope that future innovations (and, more perniciously, "innovations") reinforce the end-to-end principle. A major weakness of the 2017 Internet is its centralization.

    The DNS-over-HTTP discussion in this post mentions that in passing, though I wonder if this treatment might not be worse than the disease.

    • ocdtrekkie 2320 days ago
      The DOH example, in particular, only delivers its benefits if centralized to something governments are hesitant to block. This is an example of "innovation" specifically designed to centralize. There are maybe a handful of companies with enough influence that countries would hesitate to block them just to block DOH.
  • feelin_googley 2320 days ago
    "DOH" is not going to work very well as an anti-censorship protection unless they also fix the SNI problem in TLS.
  • frut 2320 days ago
    This is just depressing. Sure, sell us out to big corporations by not implementing proper features in protocols like HTTP/2, so we can get tracked for decades to come. Yet, represent freedom with yet another cool way to "fool" governments. When historians look back at what happened to the Internet, or even society, they are going to find that organizations like the IETF were too busy with romantic dreams of their own greatness to serve the public. It's like people learned nothing from Snowden.
    • pjc50 2320 days ago
      > sell us out to big corporations by not implementing proper features in protocols like HTTP/2 so we can get tracked

      What are you referring to here?

      • frut 2320 days ago
        Authentication, mostly. The lack of which is the major reason why the majority of us are still typing passwords into boxes in the browser and sending them over the Internet, in contradiction to best practices. Doing away with that would potentially solve a lot of problems, like phishing, but also replace cookies. Meaning it would be much harder to track users across the Internet, threatening not only the revenue of the major players but also their dominance, since being able to handle security issues is a major advantage for them. So instead of fixing the problem at the source, we have security people recommending password managers and the EFF making cookie blockers.

        Essentially every geek I have ever talked to support standards, decentralization, community efforts etc. Yet, here we have the company that has more influence than anyone else over the Internet almost single-handedly designing the protocol.

        • wmf 2320 days ago
          Google gave us HTTP/2 but they also gave us U2F. But they didn't give us soft U2F so everyone still uses passwords instead.
        • ryukafalz 2320 days ago
          There's already a protocol for that[0], just almost nobody's using it. Which is a real shame, because with a cleaner UX and more adoption it could be a serious win.

          [0] http://webid.info/

          • ubernostrum 2319 days ago
            Mozilla tried with Persona (née "BrowserID"), which had similar goals. It didn't go anywhere, even with Mozilla's support behind it.
    • xingped 2320 days ago
      What features are missing that should be implemented?
      • teddyh 2320 days ago
        Not the OP, but omitting support for SRV records in HTTP/2 was a terrible missed opportunity, as I’ve written about here before:

        https://news.ycombinator.com/item?id=8404788

        https://news.ycombinator.com/item?id=8550133

        I quote myself: “It really is no surprise that Google is not interested in this, since Google does not suffer from any of those problems which using SRV records for HTTP would solve. It’s only users who could more easily run their own web servers closer to the edges of the network who would benefit, not the large companies which have CDNs and BGP AS numbers to fix any shortcomings the hard way. Google has already done the hard work of solving this problem for themselves – of course they want to keep the problem for everybody else.”
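
        For context, a sketch of what SRV-based dispatch for HTTP could have looked like, using the third-party dnspython library; the _http._tcp name is hypothetical, since browsers never adopted SRV lookup for HTTP:

            import dns.resolver  # third-party: dnspython

            # Clients would pick an endpoint by priority/weight from DNS
            # instead of hard-coding "port 80/443 on the A record".
            answers = dns.resolver.query("_http._tcp.example.com", "SRV")
            for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
                print("connect to", rr.target.to_text(), "port", rr.port)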

        • Shoothe 2319 days ago
          I would also like to see SRV record support in HTTP/2, but IIRC Mozilla did some telemetry tests and found that a significant number of DNS requests for SRV records failed for no apparent reason (or probably for the reasons mentioned in this submission). Unfortunately I can't find a source link for that claim right now.
          • teddyh 2319 days ago
            I know of two rather large users of SRV records already: Minecraft servers and (the big one) Microsoft Office 365. I’m less than convinced that resolution of SRV records is that broken.
            • Shoothe 2319 days ago
              Do you mean accessing Office 365 via browser uses SRV records or something different?
              • dylz 2319 days ago
                o365 general services (Lync/Skype, Outlook, ... / Exchange autodiscover) use SRV a fair bit.

                365 is not just the browser suite.

                • Shoothe 2319 days ago
                  Yeah, but the services that you mentioned are used mostly by enterprises. It's still possible that SRV lookups are broken for a large number of consumers who are not enterprises.
                  • teddyh 2318 days ago
                    And I wonder who that could be, if that is even true.
        • stock_toaster 2320 days ago
          Yeah, I agree that this was a really unfortunate omission.
      • Shoothe 2320 days ago
        Support for Client Certificates.
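
        For reference, a sketch of what certificate-based client auth looks like at the TLS layer today (file paths and host are placeholders):

            import socket, ssl

            # The client proves its identity with a certificate during the TLS
            # handshake instead of typing a password into a form.
            ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
            ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")

            with socket.create_connection(("example.com", 443)) as raw:
                with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
                    tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
                    print(tls.recv(200))
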
  • collinmanderson 2319 days ago
    > Finally, we are in the midst of a shift towards more use of encryption on the Internet, first spurred by Edward Snowden’s revelations in 2015.

    Personally, I'd say it was first spurred by Firesheep back in 2010, but the idea of encrypting all websites, even content-only websites, may have been Snowden-related.

  • signa11 2320 days ago
    The author is responsible for the 418 teapot incident in the early 2000s, though I am sure he is a swell guy :)
    • stmw 2320 days ago
      Indeed, he was great in web services standards back in the day, and still a good writer. ;-)
  • mikevm 2320 days ago
    Regarding throughput, see UDT (http://udt.sourceforge.net/) which does reliable data transfer over UDP.
  • g-clef 2319 days ago
    I'm really struck by how hostile to enterprise security these proposals are. Yes, I know that the security folks will adapt (they'll have to), but it still feels like there's a lot of baby+bathwater throwing going on.

    DNS over HTTP is a prime example: blocking outbound DNS for all but a few resolvers, and monitoring the hell out of the traffic on those resolvers, is a big win for enterprise networks. What the RFC calls hostile "spoofing" of DNS responses, enterprise defenders call "sinkholing" of malicious domains. Rather than trying to add a layer of validation to DNS to give the end user assurance that the DNS answer they got really is for the name they asked for (and, in theory, allow the enterprise to add their own key to sign sinkhole answers), DOH just throws the whole thing out... basically telling enterprise defenders "fuck your security controls, we hate Comcast too much to allow anyone to rewrite DNS answers."

    "Fuck your security controls, we hate Comcast" is, I think, a bad philosophy for internet-wide protocols. (That's basically what the TLS 1.3 argument boils down to also...and that's a shame.)

    • slrz 2319 days ago
      As implemented, all these "enterprise security" things are mostly indistinguishable from malicious attacks. Of course they break when you start tightening security.

      Forging DNS responses is a horrible idea (and already breaks with DNSSEC). I have a hard time comprehending how this can be considered a reasonable security measure.

      • g-clef 2318 days ago
        > I have a hard time to comprehend how this can be considered a reasonable security measure.

        OK, let's walk it through.

        Task: block access to "attacker.com" and all its subdomains. Reason: maybe it's a malware command and control, maybe it's being used for DNS tunneling, whatever. Blocking a domain that's being used for malicious behavior is a reasonable thing for an enterprise to want to accomplish.

        Option 1: Block by IP at the firewall. Problems: Attackers can simply point the domain to another IP, so you're constantly playing whack-a-mole and constantly behind the attacker. Also, if it's a DNS tunnel the DNS answer is what's interesting, not the traffic to the actual IP. Result: Fail, doesn't solve the problem.

        Option 2: Block by DNS name at the firewall. Problems: Requires the firewall to understand the protocols involved, which they have shown themselves to be inconsistent at, at the best of times. Also, doing regex on every DNS query packet (in order to find all subdomains) doesn't scale. Result: Fail, doesn't scale.

        Option 3: Block with local agent. Problems: Tablets, phones, appliances, printers can't run a local agent. Result: Fail. Not complete coverage

        Option 4: Block outbound DNS except for approved resolvers, and give those resolvers an RPZ feed of malicious domains. Problem: Clients have to be configured to use those resolvers, but otherwise none. Result: Pass. It's standards-compliant, and DNSSEC isn't an issue since the resolver never asks for the attacker's DNS answer, so they never get the chance to offer DNSSEC.

        That's why option 4 (or some variant of it) is popular in enterprises. It accomplishes the task in a standards-compliant way, and covers the entire enterprise in a way that scales well.

        DOH blows this up. So, the question becomes: in a world with DOH, how is an enterprise supposed to completely and scalably block access to "attacker.com" and all its subdomains? So far, the answer has been "you don't." I think that is a really shitty answer to someone who's trying to accomplish something reasonable.
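
        To make option 4 concrete, here's a toy sketch of the sinkholing idea; a real deployment would use RPZ on BIND/Unbound rather than hand-rolled parsing, and the upstream address and blocklist are placeholders:

            import socket

            UPSTREAM = ("192.0.2.53", 53)   # placeholder upstream resolver
            BLOCKED = {"attacker.com"}      # sinkholed zones

            def qname(packet):
                # Pull the query name out of a raw DNS query: 12-byte header,
                # then length-prefixed labels (no compression in the question).
                labels, i = [], 12
                while packet[i] != 0:
                    n = packet[i]
                    labels.append(packet[i + 1:i + 1 + n].decode("ascii", "replace"))
                    i += n + 1
                return ".".join(labels).lower()

            def is_blocked(name):
                return any(name == d or name.endswith("." + d) for d in BLOCKED)

            srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            srv.bind(("0.0.0.0", 5353))     # port 53 in a real deployment

            while True:
                query, client = srv.recvfrom(4096)
                if is_blocked(qname(query)):
                    # Answer NXDOMAIN ourselves: QR=1, RA=1, RCODE=3.
                    reply = bytearray(query)
                    reply[2] |= 0x80
                    reply[3] = 0x83
                    srv.sendto(bytes(reply), client)
                else:
                    # Relay everything else to the real resolver.
                    up = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                    up.settimeout(2.0)
                    up.sendto(query, UPSTREAM)
                    try:
                        srv.sendto(up.recvfrom(4096)[0], client)
                    except socket.timeout:
                        pass
                    finally:
                        up.close()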

        • Dylan16807 2316 days ago
          If the attacker can get new IPs, they can get new domains. Why is pure domain-blocking a goal in the first place?

          The one-size-fits-all answer with DOH is the same as without it: Tell your devices to use/trust the MitM.

  • wojcikstefan 2320 days ago
    This reads less like “Internet protocols are changing” and more like “Google is changing the Internet to their own benefit”.
    • kaplun 2320 days ago
      Do these specific changes from Google negatively impact the community? Otherwise, IMHO, good ideas are good ideas regardless of where they come from.
      • ori_b 2320 days ago
        Yes; generally they're some combination of technically overcomplicated, difficult to use without layers and layers of heavy dependencies, poorly thought out, or aimed at Google-specific use cases.
        • zAy0LfpBZLC8mAC 2319 days ago
          Well, the complexity is a problem, but I don't really see that as Google's fault. The only chance to evolve the network is by building on stuff that works despite all the hostile middle boxes, and that necessarily requires quite a bit of complexity, unfortunately. In the long term, it seems to me like QUIC is a better idea than everyone individually having to work around idiocies all over the internet, as that is not exactly a zero-complexity game either.
  • jedisct1 2320 days ago
    I'm pretty excited about DNS over TLS. Ahaha no, that's so tacky, I meant DNS over QUIC of course. Sorry, I meant iQUIC. Ah no, it's not even there, but it will suck compared to DOH, DNS over HTTPS.
  • provost 2319 days ago
    No mention of BGP in the article?
  • shawndrost 2319 days ago
    ELI5: Does DOH threaten the great firewall?
  • adictator 2320 days ago
    Isn't ws:// also a new-ish protocol, one that is not yet natively supported by many browsers, at least?
  • sarmad123 2319 days ago
    good to see

  • peterwwillis 2320 days ago
    > For example, if Google was to deploy its public DNS service over DOH on www.google.com and a user configures their browser to use it, a network that wants (or is required) to stop it would have to effectively block all of Google (thanks to how they host their services).

    Which will result in all of Google being blocked by schools, businesses, and entire nations. Which, as Google is relied upon more and more, means less access to things like mail, documents, news, messaging, video content, the Android platform, etc.

    Thanks.

    • jlgaddis 2320 days ago
      Nah, many of them can't -- won't -- block Google over this.

      A huge number of them are absolutely reliant on Google, for things like (org-wide) Google Mail, Google Docs, ChromeBook deployments, and so on -- not to mention basic Google search.

      • norin 2320 days ago
        What about China or the EU? They can surely block Google?
        • inimino 2320 days ago
          China has, for many years. The EU is unlikely to.