Awesome article! Interesting related work is in , where we used DNS TTLs as a covert channel for passing data without needing to control the domain(s) being used. Through the development of that covert channel, we found a variety of idiosyncrasies in the client-side DNS infrastructure and discussed them in . Some devices report an erroneously high TTL, some unnecessarily shorten the TTL, some represent entire clusters of DNS resolvers with interesting properties, and so on. Based on your work, it appears that over the past five years the number of open resolvers has dropped dramatically, from ~30M to ~3M.
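A minimal sketch of the mechanism, assuming a shared recursive resolver at 203.0.113.7 (a placeholder IP) and a popular name neither party controls:

# sender: prime the resolver's cache to signal a 1-bit for this name
dig @203.0.113.7 example.com A > /dev/null

# receiver: a returned TTL below the zone's configured maximum means the
# name was already cached, i.e. the sender set the bit
dig @203.0.113.7 example.com A +noall +answer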
Your email response really is indicative of some of the folks that get cranky when you send them packets :)
Super fun article. I also like to see a "real" implementation of crazy ideas like this.
Can anyone confirm whether Microsoft DNS servers default to caching an unlimited amount of data? The article claims "Unlimited??" as the default for these systems. Eyeballing the pie chart, it looks like ~20% of the servers are running Microsoft, which could provide quite a lot of storage.
Please don't. There's a perfectly lovely naturally emerged digital life form living in the spaces in between on the Internet, and this would threaten their habitat. Sure they haven't figured out we exist yet, and are certainly a long way from being able to communicate with us, but they seem kind to one another and I'd hate to see their evolution displaced.
An enhancement of this technique could be used on one’s own private network of DNS resolvers for the specific purpose of acting like a highly available directory of private cloud nodes, with each node's service information encoded in one DNS TXT record per service.
This would kind of be like a mashup of Apple Bonjour and this technique.
The big question is: how long should the information be cached in such a setup, assuming the cloud itself is highly unreliable, so as to make the entire thing extremely fault tolerant?
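A rough sketch of what one such record might look like (the zone name, service label, and key=value fields are all hypothetical):

; one TXT record per service, served by the private resolvers;
; the TTL (60 here) is exactly the open question above
_api._tcp.cloud.internal. 60 IN TXT "host=10.0.1.5 port=8443 ver=2"

Clients could then discover a service with something like: dig @10.0.0.2 _api._tcp.cloud.internal TXT +short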
While an interesting use, abusing DNS in a similar way has been a long-known (15-year-old) security vulnerability. For example, OzymanDNS. Even then, that was just one of the first published exploits; people had been performing DNS tunneling for some time.
There are detectors of DNS abuse that, I imagine, the people who would actually store files in DNS would not want pointed at their files.
HTTP/1.1 servers need the host name in the request, so that a single IP can host multiple domains that resolve to it. If you just go to the IP address, you get an error or a default host. It should work fine with most other protocols, though.
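You can see this with a raw request, assuming a server at 184.108.40.206 (an illustrative IP) serving a site whose name you know:

# same IP, but the Host header selects which virtual host answers
printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc 184.108.40.206 80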
Adding to what others say here: if you have/know the IP address, you probably also know the host name. There's nothing magical about:
# from memory, syntax might not quite work
telnet 184.108.40.206 80
Which is indeed why you can put the IP and host name(s) in /etc/hosts - and, absent other network-level blocks, browsers etc. will just work.
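A minimal /etc/hosts sketch (the host names are hypothetical; the IP is the one from the telnet example above):

# /etc/hosts
184.108.40.206  example.com www.example.com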
With HTTP/1.0, blocking/filtering IPs was enough; with 1.1 you need a proxy. With TLS/SSL you have the choice between (having the capability to) decrypt everything or filtering nothing. (Obviously IP-level filtering works, but it's a little crude in an HTTP/1.1 world. Ditto for HTTP/2 etc.)
Just a tiny correction: RIPE Atlas' reliability tags (e.g., "-stable-Xd") have nothing to do with the probe "changing the public IP address once a day". Those filters simply measure the probe's uptime over different time windows.
In fact, the "-stable-1d" tag you mentioned would still apply to probes that have been down "up to 2h" over the last day.
You can use the dig utility to see if a DNS server is recursive. Just do the scan in two steps: one broad port scan using masscan, netscan, etc., then a smaller scan of the IPs with port 53 open to check whether they are recursive. You'll see this in dig's output if the server is not recursive:
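For example, with 203.0.113.7 standing in for one of the port-53 hits and an arbitrary query name:

dig @203.0.113.7 example.com A

;; WARNING: recursion requested but not available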