What is a Pseudo Random Subdomain (PRSD) Attack?

Overview

A Pseudo Random Subdomain (PRSD) attack is a Distributed Denial of Service (DDoS) attack that floods a nameserver with hundreds of thousands of well-formed but malicious Domain Name System (DNS) requests. It’s also referred to as a DNS water torture attack; the two names describe the same technique.

Unlike many other DDoS attacks that simply flood the server with junk packets, every packet sent is a valid DNS request, and the pseudo-random subdomains are chosen to make the queries look as close to genuine traffic as possible.

This makes the attacks quite powerful: because the queries look legitimate, they can slip past most firewalls and DDoS scrubbers (automated filters for large or malicious traffic flows) and overwhelm most nameservers.

While these attacks aren’t new, they have evolved rapidly over the past 12 months, exploiting new and different techniques to ensure the attack traffic makes it through.

What does the attack look like?

A normal DNS query may look like this:

[Diagram: a normal DNS query, where the browser asks a recursive resolver for www.test.com and receives the website’s IP address]

When your web browser goes to load a webpage (in this example, www.test.com), it needs to know which server hosts that website. A DNS request converts the name into an IP address, which then allows your browser to talk to the right server.

For most environments, your Internet Service Provider (ISP) runs local DNS recursors that make the request on your behalf. This allows them to cache (keep a copy of) the record, speeding up all future requests as well as other customers’ requests for the same website.
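To make this concrete, here’s a minimal Python sketch of what a lookup does from the client’s point of view: it asks the configured resolver (typically your ISP’s recursor) to turn a name into IP addresses. It uses only the standard library, and www.test.com is simply the example name from the diagram.

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Ask the system's configured resolver for the IPv4 addresses of a name."""
    results = socket.getaddrinfo(hostname, None, socket.AF_INET)
    # Each result ends with an (ip_address, port) tuple; collect the unique IPs.
    return sorted({result[4][0] for result in results})

print(resolve("www.test.com"))  # the IP(s) your browser would then connect to
```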

A pseudo-random DNS query will then look like this:

[Diagram: a pseudo-random DNS query, identical in structure but for a randomised subdomain such as fast123.test.com]

As you can see, there’s very little difference between the two requests. In many of these PRSD attacks, the attackers use legitimate DNS resolvers, such as those run by ISPs and larger providers like Google, to make the requests.

However, a single request of course isn’t enough, and as mentioned above, a recursor caches a copy of the record. If it receives a second request within the record’s time to live (TTL for short), it doesn’t have to ask the authoritative server again.
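As a rough illustration, you can think of a recursor’s cache as a dictionary keyed by query name, where each entry expires after its TTL. This is a hypothetical, greatly simplified sketch (real resolvers also cache per record type, track per-record TTLs and much more), but it shows why a repeated query never reaches the authoritative server:

```python
import time

class RecursorCache:
    """Greatly simplified sketch of a recursive resolver's cache."""

    def __init__(self):
        self._cache = {}  # query name -> (answer, expiry time)

    def lookup(self, qname, ask_authoritative, ttl=300):
        entry = self._cache.get(qname)
        if entry and entry[1] > time.monotonic():
            return entry[0]  # cache hit: answered without any upstream query
        # Cache miss (or TTL expired): the authoritative server must answer.
        answer = ask_authoritative(qname)
        self._cache[qname] = (answer, time.monotonic() + ttl)
        return answer
```

A second lookup of the same name inside the TTL is served entirely from the cache, so the authoritative server only ever sees the first one.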

Instead, PRSD attacks ensure that every request is unique by generating subdomain requests (eg www and fast123 in the example) based on dictionary words. These dictionaries can even be built from legitimate records used elsewhere (eg mail.test.com, blog.test.com, shop.test.com and so forth) so that the requests look as genuine as possible.

This means every request has to reach the authoritative server (eg your hosting provider), which then has to process it. Because the requests are unique, caching at the authoritative end won’t work either, so the attack can be hugely intensive on that infrastructure. These attacks can run into the millions of queries and, being well-formed requests, will simply overwhelm most nameservers.
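To illustrate why the caching above stops helping, here’s a sketch of how unique but legitimate-looking query names can be generated from a small dictionary. The word list is made up for the example (real attack dictionaries are far larger):

```python
import random

# Hypothetical dictionary of plausible-looking labels, as described above.
WORDS = ["mail", "blog", "shop", "www", "fast", "app"]

def pseudo_random_names(domain, count):
    """Yield query names that look genuine but never repeat."""
    for i in range(count):
        label = f"{random.choice(WORDS)}{random.randint(100, 999)}-{i}"
        yield f"{label}.{domain}"

names = list(pseudo_random_names("test.com", 5))
assert len(set(names)) == len(names)  # every name is unique: no cache ever hits
print(names)
```

Because no name ever repeats, both the recursor’s cache and any authoritative-side cache miss on every single query.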

What can be done to prevent the attacks?

This is the million dollar question. In the past, PRSD attacks typically came directly from compromised systems, so the attacking IP could simply be dropped if it sent too many DNS queries per second. Many systems had per-IP limits in place, or firewalls and upstream DDoS scrubbers could detect the abnormal packet rates and mitigate the attack.
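That traditional per-source defence can be sketched as a simple counter per IP within a one-second window. This is a toy version of the rate limits a firewall or nameserver might apply, and the threshold is arbitrary:

```python
import time
from collections import defaultdict

class PerSourceLimiter:
    """Toy per-source-IP rate limit: refuse an IP exceeding max_qps."""

    def __init__(self, max_qps=100):
        self.max_qps = max_qps
        self.counts = defaultdict(int)  # source IP -> queries this second
        self.window_start = time.monotonic()

    def allow(self, source_ip):
        now = time.monotonic()
        if now - self.window_start >= 1.0:  # start a fresh one-second window
            self.counts.clear()
            self.window_start = now
        self.counts[source_ip] += 1
        return self.counts[source_ip] <= self.max_qps
```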

However, modern attacks are exploiting the fact that some of the largest DNS resolvers have highly distributed infrastructure. For example, Google’s public resolver (8.8.8.8) isn’t just one server but a very large fleet of servers and services. If you make a query against their recursive server, this request to the authoritative server could come from one of hundreds of thousands of servers behind the scenes.

Because of the distributed nature of this infrastructure (required to make it so resilient), the requests can come from hundreds of thousands of different IPs, in any of the countries Google has infrastructure in. Even worse, if the system attempts to rate limit or block one of these IPs, it also affects all legitimate requests made through Google’s servers for every other domain.

This leaves three key options. 

More servers

The first is to increase the size of your authoritative DNS infrastructure. If the attacks are five hundred (500) times larger than your normal traffic flow, then you’ll need to scale your infrastructure out to be five hundred times larger, and ensure that your firewalls and upstream network infrastructure scale to the same point.

As you can expect, this is a very costly approach to solving the problem and therefore the least likely to be implemented. It’s only really viable if you already have the infrastructure spare (ie racked and paid for).

Per domain limits

Instead of limiting DNS queries per IP, you would need to limit the overall DNS queries per domain. This is possible with some authoritative DNS servers, or it may require a proxy service such as dnsdist to implement.

This can require some complicated scripting to achieve, with a high likelihood of still negatively affecting the domain being attacked (or used in the attack).
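Conceptually, the change is only in what you count: queries are tallied against the queried zone rather than the source IP. This hypothetical Python sketch shows the counting logic only (dnsdist itself is configured through Lua rules, not Python):

```python
import time
from collections import defaultdict

def zone_of(qname, depth=2):
    """Reduce a query name to its zone, eg fast123.test.com -> test.com."""
    return ".".join(qname.rstrip(".").split(".")[-depth:])

class PerDomainLimiter:
    """Toy per-domain limit: cap the total queries per zone per second."""

    def __init__(self, max_qps=1000):
        self.max_qps = max_qps
        self.counts = defaultdict(int)  # zone -> queries this second
        self.window_start = time.monotonic()

    def allow(self, qname):
        now = time.monotonic()
        if now - self.window_start >= 1.0:  # start a fresh one-second window
            self.counts.clear()
            self.window_start = now
        zone = zone_of(qname)
        self.counts[zone] += 1
        # All subdomains of a zone share one counter, so the limit holds no
        # matter how many resolver IPs the queries arrive from.
        return self.counts[zone] <= self.max_qps
```

The obvious downside, as noted above, is that once the limit trips, legitimate queries for the attacked domain get refused too.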

Dedicated DNS DDoS protection

While this sounds like the easiest option, it can also be costly depending on your scenario. As an individual or company, you could use one of the large providers such as Akamai or Cloudflare to protect your domain. This can become expensive or convoluted if you have multiple domains to manage, and it may break integrations with your existing infrastructure, so there are a number of factors to consider.

If you’re a service provider and have thousands of domains to protect, there are options available as third party services or as dedicated on-premises hardware or VM based systems. They all of course vary in price and complexity to manage, and they can be difficult to evaluate, as simulated attacks are never as accurate as the real thing.

What’s the best solution?

There is no magic answer here. The nature of these attacks means they’ll find any weakness possible to exploit. Many of the systems require constant fine tuning, as it’s a continual cat and mouse game: the moment you put effective mitigation in place, the attack can (and will!) change to get around it.

It may also be that the solution you choose today won’t be the right solution in 1-2 years’ time. The key is that as the threat evolves, your mitigation strategies need to evolve with it.


Tim Butler

With over 20 years’ experience in IT, I have worked with systems scaling to tens of thousands of simultaneous users. My current role involves providing highly available, high performance web and infrastructure solutions for small businesses through to government departments. NGINX Cookbook author.
