DNSFS – Store files in others' DNS resolver caches

(benjojo.co.uk)

243 points | by benjojo12 2267 days ago

15 comments

  • tomcallahan 2267 days ago
    Awesome article! Interesting related work is in [1], where we used DNS TTLs as a covert channel for passing data, without needing to control the domain(s) being used. Through the development of that covert channel, we found a variety of idiosyncrasies in the client-side DNS infrastructure and discussed them in [2]. Some devices will report an erroneously high TTL, some will unnecessarily shorten the TTL, some represent entire clusters of DNS resolvers with interesting properties, and so on. Based on your work, it appears that over the past five years, the number of open resolvers has dropped dramatically, from ~30M to ~3M.

    Your email response really is indicative of some of the folks that get cranky when you send them packets :)

    [1]: http://research.tom.callahan.us/pubs/icsi-tr-12-002.pdf

    [2]: http://research.tom.callahan.us/pubs/imc029-schompAemb.pdf

  • moduspwnens14 2267 days ago
    I wish I had a more insightful comment, but I'll just say this:

    I love posts like this where someone applies a theoretical concept in a fun and interesting (even if not practical) way.

  • aplorbust 2267 days ago
    Reminds me of this, one of my all-time favorites:

    http://lcamtuf.coredump.cx/juggling_with_packets.txt

    Guessing date as circa 2003. Could be wrong.

    As for DNS, djbdns can store arbitrary bytes in an RR (e.g., TXT), escaped as octal. For example, a modified dnstxt can print formatted text stored in TXT records, linefeeds and all.
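    A minimal sketch of what that looks like on the publishing side, assuming the tinydns-data file format (the "'" prefix introduces a TXT record, and non-printable bytes use djbdns's \NNN octal escaping; the domain name and TTL are placeholders):

```python
# Sketch: emit a tinydns-data TXT line with arbitrary bytes escaped
# as three-digit octal, per the djbdns convention. The fqdn and TTL
# below are illustrative placeholders.

def octal_escape(data: bytes) -> str:
    out = []
    for b in data:
        # plain printable ASCII passes through; escape ':', '\',
        # and anything non-printable as \NNN octal
        if 0x20 <= b <= 0x7E and b not in (0x3A, 0x5C):
            out.append(chr(b))
        else:
            out.append("\\%03o" % b)
    return "".join(out)

def tinydns_txt_line(fqdn: str, data: bytes, ttl: int = 300) -> str:
    # "'" introduces a TXT record in a tinydns-data file
    return "'%s:%s:%d" % (fqdn, octal_escape(data), ttl)

print(tinydns_txt_line("blob.example.com", b"line one\nline two\n"))
```

    A modified dnstxt client would then reverse the escaping when printing the record.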

  • shredwheat 2267 days ago
    Super fun article. I also like to see a "real" implementation of crazy ideas like this.

    Can anyone confirm if the Microsoft DNS servers default to caching an unlimited amount of data? The article claims "Unlimited??" as the default for these systems. Eyeballing the pie chart looks like 20% of the servers are running Microsoft, which could provide quite a lot of storage.

    • otakucode 2266 days ago
      Please don't. There's a perfectly lovely naturally emerged digital life form living in the spaces in between on the Internet, and this would threaten their habitat. Sure they haven't figured out we exist yet, and are certainly a long way from being able to communicate with us, but they seem kind to one another and I'd hate to see their evolution displaced.
    • voidlogic 2267 days ago
      Even unlimited is bound by memory/storage with probably an LRU eviction scheme. So unless your stored data is hot, or their storage is very large, it might not stay around long.
      • AdamJacobMuller 2266 days ago
        You'd need a background worker that periodically reads all the data to keep it in cache (like a RAID scrub or an SSD's background media check).
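        A sketch of that scrubber idea, where `refresh` stands in for whatever re-reads one stored chunk (e.g. issues its DNS query); the names and interval are illustrative assumptions:

```python
# Sketch of the suggested background "scrubber": periodically
# re-read every stored chunk so the resolver's LRU cache keeps
# seeing the records as hot. `refresh` is an assumed callable
# that re-queries a single record.

import threading
import time

def refresh_all(refresh, names):
    # one pass over every stored record
    for name in names:
        try:
            refresh(name)
        except OSError:
            pass  # the resolver may already have evicted this record

def keep_warm(refresh, names, interval=300.0):
    # run refresh_all forever on a daemon thread, every `interval` seconds
    def loop():
        while True:
            refresh_all(refresh, names)
            time.sleep(interval)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

        Whether this actually defeats eviction depends on the resolver's cache policy, of course.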
  • Annatar 2266 days ago
    Ingenious!

    An enhancement of this technique could be used on one’s own private network of DNS resolvers for the specific purpose of acting like a highly available directory of private cloud nodes, storing the following information:

      host:service:port:protocol
    
    encoded in one DNS TXT record per service.

    This would kind of be like a mashup of Apple Bonjour and this technique.

    The big question is, how long to cache the information for in such a setup, assuming the cloud itself is highly unreliable, so as to make the entire thing extremely fault tolerant?
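    A sketch of the proposed record format, round-tripping one "host:service:port:protocol" string per TXT record (all names and values here are illustrative, not from the article):

```python
# Sketch: encode/decode the proposed service-directory format,
# one "host:service:port:protocol" string per DNS TXT record.
# The example host and service are placeholders.

from typing import NamedTuple

class ServiceRecord(NamedTuple):
    host: str
    service: str
    port: int
    protocol: str

def encode(rec: ServiceRecord) -> str:
    return f"{rec.host}:{rec.service}:{rec.port}:{rec.protocol}"

def decode(txt: str) -> ServiceRecord:
    host, service, port, protocol = txt.split(":")
    return ServiceRecord(host, service, int(port), protocol)

print(decode("node1.internal:postgres:5432:tcp"))
```

    The TTL question then becomes a tuning knob on these records: short TTLs track churn faster, long TTLs survive longer outages of the authoritative source.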

  • ape4 2267 days ago
    Too bad he couldn't use FUSE. Would be nice to do `ls` and other commands with this.
  • IncRnd 2267 days ago
    While an interesting use, abusing DNS in a similar way has been a long-known (15-year) security vulnerability. For example, OzymanDNS. Even then, that was just one of the first published exploits. People had been performing DNS tunneling for some time.

    There are detectors of DNS abuse that I imagine the people who actually would store files in DNS would not want pointed at their files.

    • twic 2267 days ago
      Yes! Reading the description of DNSFS, I was sure Dan Kaminsky had done something like this years ago, but I couldn't track it down - Dan Kaminsky has done a lot of things with DNS.
      • IncRnd 2267 days ago
        Indeed! :)
  • jradd 2267 days ago
    Wish I had more to add than: "This is so neat!"

    Seems like this would be a good way to circumvent web filters that block remote file services (but allow DNS over TCP or UDP).

    How would one restrict this capability from an administrative perspective?

    • vthriller 2267 days ago
      People are already sneaking data through DNS in both directions. Here's a quick example from a year ago that popped in my head first: http://4lemon.ru/2017-01-17_facebook_imagetragick_remote_cod...
      • jradd 2267 days ago
        This is very neat as well. Still trying to understand it.

        I have tested various use cases for Iodine, which works great, unless you are blocking all outbound dns traffic.

      • aplorbust 2267 days ago
        FYI re: PoC

        NS for hacker.toys not responding

    • cmonfeat 2267 days ago
      I've typically blocked outgoing DNS requests to arbitrary resolvers on every network I've managed, which disables the use of this FS.

      Reason being, if users on my network are using resolvers other than my own, they can resolve all sorts of domains I would have otherwise blackholed.

      • Ambroos 2267 days ago
        Controlling network access on DNS level seems pretty ineffective to me.

        Especially with things like Google DNS over HTTPS and https://github.com/pforemski/dingo ...

        • cmonfeat 2267 days ago
          Oh, I'm with you, you've gotta put other controls in place. It's still in my basic ACL for every network, because it's one of the first things users will do to circumvent controls.
          • yetanother1980 2266 days ago
            It's also not uncommon to not use the default DNS settings of a network.

            Doing this sounds like a good way to increase the noise-to-signal ratio in your support calls...

        • yetanother1980 2267 days ago
          Pretty much 100% waste of time I think. Users can easily just use raw IP addresses right?
          • 13of40 2267 days ago
            HTTP 1.1 servers need the host name in the request, so that a single IP can host multiple domains that resolve to it. If you just go to the IP address, you get an error or a default host. It should work fine with most other protocols, though.
            • e12e 2266 days ago
              Adding to what others say here: if you have/know the ip address, you probably also know the host name. There's nothing magical about:

                # request line first, then the Host header, then a blank line
                telnet 1.2.3.4 80
                GET / HTTP/1.1
                Host: example.com
              
              Which is indeed why you can put the ip and host name(s) in /etc/hosts - and without other network level blocks - browsers etc will just work.

              With HTTP/1.0, blocking/filtering IPs was enough; with 1.1 you need a proxy. With TLS/SSL you have the choice between (having the capability to) decrypt everything or filter nothing. (Obviously IP-level filtering works, but it's a little crude in an HTTP/1.1 world. Ditto for HTTP/2, etc.)

            • toomuchtodo 2267 days ago
              Add entry to /etc/hosts (or the windows equivalent), navigate in browser.

              Too high of a hurdle for your average user though, in which case blocking sites at the DNS resolver works.

            • yetanother1980 2266 days ago
              I'm pretty sure you can send a request to an IP address with the host name in the request.
  • mino 2266 days ago
    Fun article, kudos!

    Just a tiny correction: RIPE Atlas' reliability tags (e.g., "-stable-Xd") have nothing to do with the probe "changing the public IP address once a day". Those filters simply measure the probe's uptime over different time windows.

    In fact, the "-stable-1d" tag you mentioned would be true even for probes that have been down "up to 2h" over the last day.

  • w8rbt 2267 days ago
    You can use the dig utility to see if a DNS server is recursive. Just do the scan in two steps. One major port scan using masscan, netscan, etc., then a smaller scan of the IPs with port 53 open to see if they are recursive or not. You'll see this in dig's output if the server is not recursive:

        ;; WARNING: recursion requested but not available
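    The same check can be done without dig: send a query with the RD (recursion desired) bit set and inspect the RA (recursion available) bit in the reply, which is what dig's warning reflects. A sketch following the RFC 1035 wire format; the probed IP would come from the earlier port-53 scan:

```python
# Sketch: hand-rolled recursion check. Build a DNS query with
# RD=1, send it over UDP to port 53, and read the RA flag in
# the response header (RFC 1035 wire format).

import socket
import struct

def build_query(name: str, txid: int = 0x1234) -> bytes:
    # header: id, flags (RD=1), 1 question, 0 other sections
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def recursion_available(reply: bytes) -> bool:
    flags = struct.unpack(">H", reply[2:4])[0]
    return bool(flags & 0x0080)  # RA bit

def probe(ip: str, timeout: float = 2.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_query("example.com"), (ip, 53))
        reply, _ = s.recvfrom(512)
    return recursion_available(reply)
```

    Servers that answer but leave RA clear are the ones dig flags with that warning.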
  • mellamoyo 2267 days ago
    I'm surprised at the marketshare dnsmasq has, I would've thought BIND and dnsmasq numbers to be flipped.
    • ape4 2267 days ago
      I'm surprised too, since I'm running it and it dies every few days.
    • krylon 2267 days ago
      dnsmasq is very popular with SOHO routers.
      • LeonM 2266 days ago
        And just about every mobile device (hotspot mode)
  • mrb 2266 days ago
    Ha! Combine this idea with my proof-of-concept CDN53 Chrome extension and it would be serving websites directly from others' DNS resolvers =:)
  • dh-g 2267 days ago
    Great article. I've noticed a trend: anything that requires masscan is probably going to be fun/interesting.