DNS cache poisoning part 2

April 11, 2019  |  Jeff Thompson

My last blog on DNS cache poisoning covered only the superficial aspects of this long-standing issue. This installment aims to give a bit more technical detail, and to expose some of the tactics used by bad actors looking to leverage a poisoned DNS cache against you and your network. In a worst-case scenario, a poisoned DNS cache can lead to far more than a headache: civil liability, phishing, increased DNS overhead, and other kinds of nightmares are all too easy to overlook with this type of attack.

So, you may be wondering, "What exactly makes a DNS cache poisoning attack so dangerous, and what can we do to prevent it?" Well, as outlined in my first article, not answering DNS requests from the public internet is a great place to start. If you're only running an internal DNS infrastructure, your attack surface is much smaller. However, this comes with a caveat: "internal-only" DNS attacks are much harder to detect, and can often go weeks or months before even the keenest of sysops recognize them. This has to do with the fundamental structure of DNS. Let me explain.

Fundamental structure of DNS

In a typical DNS server (e.g. Windows DNS, or BIND), there is little built-in mechanism (out of the box, effectively none) to sanity-check the answers it receives. In its simplest form, a DNS lookup checks the local database (the 'cache') first; upon finding no answer for the request, it then sends a lookup request to its configured DNS server (the one you hopefully manage) to see if that server can find an answer.

If this lookup fails a second time, the 'forwarder' configuration kicks in: the request goes out to a list of pre-specified DNS hosts, each asked in turn for a resolution to the name. If the forwarders also come up empty, the final lookup happens out on the internet, starting at one of the 'root' nameservers, which hold the delegations for the top-level domains that make up the IPv4 internet. If even that fails, the original requesting client gets back a 'DNS name not found' answer, and the name will not resolve. At any point during this journey, a "faked" response can be issued, and the initiator will accept it. No questions asked.
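The lookup chain above can be sketched in a few lines of Python. This is a toy model, not a real resolver: the names, addresses, and dictionary-based "servers" are all made up for illustration, but the fall-through logic (cache, configured server, forwarders, root) and the blind trust in whatever answers first are the point.

```python
def resolve(name, cache, configured, forwarders, root):
    """Return the first answer found along the chain, else None."""
    if name in cache:                                   # 1. local cache
        return cache[name]
    for source in [configured] + forwarders + [root]:   # 2-4. upstream hops
        answer = source.get(name)
        if answer is not None:
            cache[name] = answer    # the answer is cached, unverified
            return answer
    return None                     # 'DNS name not found'

cache = {}
configured = {"intranet.example": "10.0.0.5"}
forwarders = [{"partner.example": "192.0.2.10"}]
root = {"example.com": "93.184.216.34"}

print(resolve("example.com", cache, configured, forwarders, root))
# A forged answer injected at ANY hop is simply cached and returned:
forwarders[0]["bank.example"] = "203.0.113.66"   # attacker-controlled entry
print(resolve("bank.example", cache, configured, forwarders, root))
```

Note there is no verification step anywhere in that loop; that absence is exactly what a cache poisoning attack exploits.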

Problems with the model

This model is fine when we can trust every segment in the process. However, even during the early days of the web, some issues became apparent with the way DNS works. For example, what if the root servers are unavailable? Unless your local DNS server has a record of ALL of the domains on the web, or one of your 'forwarders' does, the DNS name will not resolve. Even if it is a valid domain, DNS will simply not be able to look up your host.

There was a major DDoS attack against the DNS root servers in October 2002 that knocked several of them offline; had it lasted longer, it could have effectively taken down the internet for a large portion of users. It was during incidents like this (and the high-profile attack that took Yahoo offline in early 2000) that many network operators recognized a large oversight in how centralized these systems were, and a push was made to distribute control of them across a variety of trustworthy and capable internet entities. Sure, we could still get to our desired hosts via IP address, but e-mail, for example, was not as resilient. It was a great lesson for the web community at large.

That was just a denial-of-service at the highest level of the infrastructure. What would happen if the localized DNS cache on every computer in your organization had different "answers" for the same lookups? Instead of consistent DNS timeouts, 'spoofed' web portals, fake e-mail NDRs, or outright phishing campaigns become incredibly simple to pull off. It's a tad scary to think that something we all depend on so greatly is (in theory at least) so easy to compromise. Enter the RNDC key.

What is an RNDC key?

Despite the name, the RNDC key has nothing to do with ordinary DNS queries. RNDC (remote name daemon control) is BIND's remote administration channel, and the RNDC key is a shared HMAC secret used to authenticate the commands sent over it. It allows many (if not all) administrative tasks to be handled remotely, and is incredibly useful when managing a large DNS ecosystem. Depending on the security climate of your organization, there may be a desire to disable remote control completely. This causes some superficial challenges, the main one being that each DNS server then has to be managed individually, but it also removes one avenue an attacker could use to tamper with a server's cache (rndc can flush and reload it, after all). It would greatly reduce the attack surface for a DNS cache poisoning attack, and probably make your DNS admin a) happy with the job security, b) upset at the task of managing individual DNS entries across multiple servers.
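For reference, here is roughly what the control channel and its key look like in BIND's named.conf. The secret below is a placeholder (in practice you'd generate one with `rndc-confgen -a`), and locking the `allow` list to 127.0.0.1 keeps the control channel off the network entirely:

```conf
// Sketch of a BIND control channel locked to localhost.
// The secret is a placeholder; generate a real one with `rndc-confgen -a`.
key "rndc-key" {
    algorithm hmac-sha256;
    secret "PLACEHOLDER-BASE64-SECRET==";
};

controls {
    // Only accept rndc commands from the local machine itself.
    inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
};
```

Removing the `controls` block (or pointing `allow` at an empty list) is effectively the "disable RNDC completely" option described above.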

The largest attack vector for a DNS cache poisoning attack is the recursive query. Although it merits an extremely technical overview, that is somewhat outside the scope of this blog. Essentially, a recursive query starts from a known suffix (.COM) and works its way down through the hierarchy to find where the actual domain resides:

  1. COM -> Find the .COM resolver (root server)
  2. Locate the authoritative nameserver for MYCOMPANY.COM (the one listed in the domain's registry/WHOIS data)
  3. Match MYCOMPANY.COM to an IP address
  4. Pass the "answer" back to the requesting client
  5. Store both the root-server referral (for the .COM answer) and the authoritative nameserver (for the MYCOMPANY.COM answer) in the local cache
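The five steps above can be sketched as a short simulation. Every server, name, and address here is invented for illustration; the thing to notice is step 5, where two separate entries (the referral and the final answer) land in the cache, giving an attacker two distinct "answers" to tamper with.

```python
# Toy versions of the three servers involved; nothing here is a real API.
root_server = {"com.": "ns.tld-com.example"}          # step 1: .COM referral
tld_server = {"mycompany.com.": "ns1.mycompany.com"}  # step 2: authoritative NS
auth_server = {"mycompany.com.": "198.51.100.7"}      # step 3: the A record

def recursive_lookup(name, cache):
    tld = name.split(".")[-2] + "."    # step 1: start from the known suffix
    referral = root_server[tld]        # root hands back the .COM server
    ns = tld_server[name]              # step 2: find the authoritative server
    address = auth_server[name]        # step 3: match the name to an IP
    cache[tld] = referral              # step 5: cache the referral...
    cache[name] = address              # ...and the final answer
    return address                     # step 4: pass the answer to the client

cache = {}
print(recursive_lookup("mycompany.com.", cache))  # 198.51.100.7
print(sorted(cache))                              # both entries now cached
```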

If a bad actor can adjust either one of these cached "answers", they have successfully pulled off a DNS cache poisoning attack. Prior to BIND9, these types of attacks were incredibly easy to initiate. Combine that with, say, a DNS admin who kept their TTLs in the 24-48 hour range, and a poisoned entry could go undetected for days at a time. So what changed with BIND9? A few things. The first was the adoption of RNDC for remote management, which was especially useful for DNS servers that *had* to be public facing, or accessible via the internet.
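The TTL point deserves a quick illustration. A cached entry is served without any re-checking until it expires, so a poisoned record with a long TTL simply sits there. The sketch below uses made-up timestamps and a fake poisoned address to show the window:

```python
TTL = 48 * 3600  # a long, 48-hour TTL, as described above

def cache_entry(value, now, ttl=TTL):
    return {"value": value, "expires": now + ttl}

def lookup(cache, name, now):
    entry = cache.get(name)
    if entry and entry["expires"] > now:
        return entry["value"]   # served straight from cache, never re-verified
    return None                 # expired: only now does a fresh lookup happen

now = 0
cache = {"mycompany.com": cache_entry("203.0.113.66", now)}  # poisoned answer
print(lookup(cache, "mycompany.com", now + 47 * 3600))  # 47h later: still bad
print(lookup(cache, "mycompany.com", now + 49 * 3600))  # 49h later: expired
```

Shorter TTLs don't prevent poisoning, but they shrink how long a successful poisoning lasts before the entry is naturally refreshed.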

The second layer of protection is something we will get into in the next blog: DNSSEC. DNSSEC is a set of extensions that cryptographically signs DNS records, giving a resolver a way to verify that an answer is not only authoritative but also has not been modified in transit. Without it, one of the easiest ways an attacker can wreak havoc on a DNS infrastructure is by guessing (or intercepting) the 16-bit transaction ID (TXID) of an outstanding query and racing in what APPEARS to be an authoritative response. DNSSEC eliminates this by leveraging cryptography: a forged answer without a valid signature is rejected, no matter how fast the attacker is.
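To see why blind TXID guessing was ever viable, here is the back-of-the-envelope math. The scenario is assumed for illustration: an off-path attacker who can race some number of forged responses at the resolver before the real answer arrives, with each guess independent over the 65,536 possible TXIDs.

```python
TXID_SPACE = 2 ** 16  # 65,536 possible 16-bit transaction IDs

def spoof_success_probability(n_forged):
    """Chance that at least one of n independent forged packets
    matches the TXID of the outstanding query."""
    return 1 - (1 - 1 / TXID_SPACE) ** n_forged

print(spoof_success_probability(1))      # a single blind guess: ~1 in 65,536
print(spoof_success_probability(100))    # a small burst of packets
print(spoof_success_probability(65536))  # a full-space flood: roughly 63%
```

And repeated attempts compound: an attacker who can trigger the race over and over (the insight behind the 2008 Kaminsky attack) only needs to win once. DNSSEC makes winning the race worthless, since the forged answer still fails signature validation.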

Conclusion

It should be noted that the scope of this blog is more to educate than to correct. Since its inception, the DNS protocol has undergone MANY improvements that make attacks like this much less common than they once were. However, with unfettered access to a private LAN, where DNS is essential, these attacks can still very much happen.

The more users, sysops, and the overall tech community know about HOW attacks like this work, the better prepared we are to stop them when they happen, and to begin thinking about new ways to access information. IPv6 has long been touted as a fix for much of this, but who has time to count in HEX? I love my Nana, but she struggled with getting to 'yahoo.com'. Mix in some HEXADECIMAL counting, and she would have been even more averse to a keyboard than she already was (R.I.P., Nana!).

Additionally, widespread IPv6 adoption is still quite a ways off, especially where backward compatibility is concerned. DNS cache poisoning is something we are still left with, and mitigating its effects takes a fairly strong understanding of how DNS works, and of how we got to this point in the first place, in any environment where the DNS infrastructure is susceptible.

With YOUR help. Si se puede.
