Find this story and other updates on our Let's Talk Security Podcast Episode here:
This vulnerability was first found and fixed in 2008. It essentially allowed attackers to redirect users' traffic, originally destined for a specific domain, to the attacker's servers. We'll talk about how this was resolved in a bit.
So, now a team of researchers from the University of California at Riverside and Tsinghua University in Beijing presented their paper at the ACM Conference on Computer and Communications Security, that's CCS '20, which was held last week. And it won the Distinguished Paper Award.
The paper was titled: "DNS Cache Poisoning Attack Reloaded: Revolutions with Side Channels." The paper's introduction explains their accomplishment.
They wrote: "In this paper, we report a series of flaws in the software stack that leads to a strong revival of DNS cache poisoning," they said, "a classic attack which is mitigated in practice with simple and effective randomization-based defenses such as randomized source port. To successfully poison a DNS cache on a typical server, an off-path adversary" - "off-path" meaning an adversary that's not participating in the local network traffic - "would need to send an impractical number of" - and they say 2^32. Actually on average half that, right, but still, that scale - "2^32 spoofed responses, simultaneously guessing the correct source port," which is 16 bits, "and the correct transaction ID," also 16 bits.
Surprisingly, they discovered weaknesses that allow an adversary to "divide and conquer" that space by guessing the source port first, followed by the transaction ID. This leads to only 2^16 plus 2^16 spoofed responses. Otherwise it would have been 2^16 times 2^16; you'd much rather have a plus there than a multiplication. Even worse, they demonstrated "a number of ways an adversary can extend the attack window" - which is to say, the time available to make all these guesses - "which drastically improves the odds of success."
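To make that plus-versus-times difference concrete, here is a quick back-of-the-envelope calculation (the numbers follow directly from the 16-bit port and 16-bit transaction ID described above):

```python
# Brute force vs. divide-and-conquer guess counts for DNS cache poisoning.
# The source port and the transaction ID are each 16-bit values.

port_space = 2 ** 16      # 65,536 possible source ports
txid_space = 2 ** 16      # 65,536 possible transaction IDs

# Guessing both at once: every (port, txid) combination must be covered.
combined = port_space * txid_space             # 2^32 = 4,294,967,296

# Guessing the port first (via the side channel), then the txid.
divide_and_conquer = port_space + txid_space   # 2^17 = 131,072

print(combined)                        # 4294967296
print(divide_and_conquer)              # 131072
print(combined // divide_and_conquer)  # 32768 -> ~32,768x fewer spoofed packets
```

That roughly 32,768-fold reduction is what turns the attack from impractical back into practical.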
To explain this:
Back in 2008, the port from which the query was made was fixed; the only random part of the query was the 16-bit transaction ID. So an attacker only had to guess those 16 bits. The solution was to randomize the 16-bit source port of the DNS queries as well, so now it's 16 + 16, 32 bits of entropy, and hence much harder to guess.
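As a rough sketch of where those two 16-bit values live: the transaction ID sits in the first two bytes of the DNS message header, while the source port belongs to the UDP layer and is picked by the kernel. The helper name below is hypothetical and the header is hand-packed purely for illustration; a real resolver would use a DNS library.

```python
import random
import struct

def make_dns_query_header():
    """Build a 12-byte DNS header with a random 16-bit transaction ID.

    Hypothetical illustration helper: real resolvers use a proper DNS
    library and a cryptographically strong random source for the ID.
    """
    txid = random.getrandbits(16)   # 16 bits of entropy in the transaction ID
    flags = 0x0100                  # standard query, recursion desired
    # ID, flags, QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=0
    header = struct.pack("!HHHHHH", txid, flags, 1, 0, 0, 0)
    return txid, header

# The other 16 bits of entropy come from the UDP source port; binding a
# socket to port 0 asks the kernel for a random ephemeral port, e.g.:
#   sock.bind(("0.0.0.0", 0))
txid, header = make_dns_query_header()
```

A spoofed response is only accepted if it arrives at the right port *and* echoes the right transaction ID, which is where the 32 bits of combined entropy comes from.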
What the latest vulnerability exposes is that the researchers figured out a way to use ICMP as side-channel feedback to derandomize and guess the query port. So now the effective entropy is back down to 16 bits, the same as it was in 2008.
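The side channel works roughly like this: Linux limits *outgoing* ICMP error messages with a global token bucket (50 per second by default). An attacker spoofs a batch of 50 UDP probes at candidate ports; each closed port triggers an ICMP "port unreachable" and burns a token, while an open port (the one the resolver's query went out from) burns nothing. A final probe from the attacker's own address to a known-closed port then reveals whether a token was left over. This toy simulation captures that logic; the bucket size matches the Linux default, but the port numbers and function names are illustrative, not from the paper:

```python
# Toy simulation of the global ICMP rate-limit side channel.
# Assumption: a Linux-like global token bucket of 50 outgoing ICMP
# errors per second; ports and names here are illustrative only.

ICMP_TOKENS_PER_SECOND = 50

def scan_batch(open_ports, candidate_ports):
    """Spoof one UDP probe to each candidate port, then verify.

    Closed ports each trigger an ICMP 'port unreachable', draining the
    global token bucket; an open port triggers nothing. A last probe
    from the attacker's own address to a known-closed port tells us
    whether a token survived -> whether the batch contained an open port.
    """
    tokens = ICMP_TOKENS_PER_SECOND
    for port in candidate_ports:        # 50 spoofed probes in one second
        if port not in open_ports:
            tokens -= 1                 # closed port: ICMP sent, token used
    # Verification probe from the attacker's own IP to a closed port:
    icmp_seen = tokens > 0              # leftover token => ICMP comes back
    return icmp_seen

open_ports = {34567}                         # the resolver's hidden query port
batch_with_open = list(range(34550, 34600))  # 50 ports, includes the open one
batch_all_closed = list(range(1000, 1050))   # 50 ports, all closed

print(scan_batch(open_ports, batch_with_open))   # True  -> open port in batch
print(scan_batch(open_ports, batch_all_closed))  # False -> all closed
```

By scanning 50 ports per second this way and then binary-searching within a positive batch, the whole 16-bit port space can be narrowed down quickly, with no reply traffic ever needing to reach the attacker from the spoofed probes themselves.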
"To summarize, source port randomization becomes the most important hurdle to overcome in launching a successful DNS cache poisoning attack. Indeed, in the past there have been prior attacks that attempt to derandomize the source port of DNS requests. As of now, they are only considered nice conceptual attacks, but not very practical. Specifically, one requires an attacker to bombard the source port and overload the socket receive buffer, which is not only slow and impractical, unlikely to succeed in time, but can also be achieved only in a local environment with stringent RTT [round trip time] requirements. It is assumed that a resolver sits behind a NAT which allows its external source port to be derandomized, but such a scenario is not applicable to resolvers that own public IPs."
What’s the scope of this vulnerability?
They said: "The vulnerabilities we find are both much more serious and generally applicable to a wide range of scenarios and conditions. Specifically, we're able to launch attacks against all layers of caches which are prevalent in the modern DNS infrastructure, including application-layer DNS caches, for example, in browsers; OS-wide caches; DNS forwarder caches, for example, in home routers; and the most widely targeted DNS resolver caches.
"The attack affects all layers of caches within the DNS infrastructure, such as DNS forwarders and resolver caches, and a wide range of DNS software stacks, including the most popular BIND, Unbound, and dnsmasq, running on top of Linux, and potentially other operating systems. The major condition for a victim being vulnerable is that an OS is allowed to generate outgoing ICMP error messages."
They said: "Interestingly, these vulnerabilities result from either design flaws in UDP standards or subtle implementation details that lead to side channels based on a global rate limit of ICMP error messages, allowing derandomization of source port with great certainty." Usually there are ICMP rate limits based on IP, but they overcame that with three probing techniques. One is to own multiple IPs, or use a machine that supports IPv6, which gives a LAN network 2^64 IP addresses.
The second is to ask for multiple IP addresses using DHCP, even if you own only a single IPv4 machine.
They said: "From our measurement, we find over 34% of the open resolver population on the Internet are vulnerable, and in particular 85% of the popular DNS services, including Google's 8.8.8.8" and also, elsewhere, Cloudflare's 1.1.1.1, both vulnerable. "Furthermore, we comprehensively validate the proposed attack with positive results against a variety of server configurations and network conditions that can affect the success of the attack, in both controlled experiments and with a production DNS resolver."
Most of the services have mitigated this. Linux, for example, has made the ICMP rate limit variable, so you can't rely on fixed batches of requests to probe the port. And it's not going to be a devastating deal like it was in 2008, especially now that we are moving to HTTPS and a certificate-based system.