Recursive Client Rate limiting - FAQs

Recursive Client Rate limiting provides new tuning controls intended to optimize recursive server behavior in favor of good client queries, whilst at the same time limiting the impact of bad client queries (e.g. queries which cannot be resolved, or which take too long to resolve) on local recursive server resource use.

This article was created to answer some FAQs about this feature, derived from real questions that have been posed to ISC.

The main resources on Recursive Client Rate limiting are the BIND Administrator Reference Manual (ARM) and the KB article Recursive Client Rate Limiting, both referenced below.

How do I enable Recursive Client Rate limiting?

Recursive Client Rate limiting is not enabled by default, but can be configured by adding new options to named.conf.  You also need to be running a named binary that supports this functionality.

In BIND 9.9.8 and 9.10.3, named must be built specifically to add Recursive Client Rate limiting support by using the --enable-fetchlimit configure option.

From BIND 9.11 and BIND 9.9.8-S1 onwards, Recursive Client Rate limiting is included by default.

Guidance on the configuration options for Recursive Client Rate limiting can be found in the Administrator Reference Manual (9.9.8 and 9.10.3 and newer). The ARM is available in HTML and PDF form in the distribution tarball, and can also be accessed directly from our downloads server at https://downloads.isc.org/isc/bind9/cur/ by choosing the /arm subdirectory for the BIND version you are running. The ARM for current versions of BIND is also available at https://bind9.readthedocs.io/en/latest/.

There is also advice in the KB article Recursive Client Rate Limiting.

When are these features useful?

These options are particularly useful when a large number of queries is being received:

  • fetches-per-zone: for different names in the same zone
  • fetches-per-server: for different names in different domains for the same server

... when the authoritative servers for those names are slow to respond or are failing to respond.  These options should not impact popular domains whose servers are responding promptly to each query received.
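
As a hedged illustration, both options go in the options block of named.conf; the numeric limits below are arbitrary examples, and appropriate values depend on your environment:

    options {
        // Limit concurrent fetches for names within any single zone.
        // Over-quota client queries are silently dropped ('drop' is the default).
        fetches-per-zone 40 drop;

        // Limit concurrent fetches directed at any single authoritative server.
        // Over-quota client queries receive SERVFAIL ('fail' is the default).
        fetches-per-server 100 fail;
    };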

When are these features unlikely to be helpful?

1. Authoritative servers are still responsive despite being under attack via many unique queries

If authoritative servers are responding very quickly, then it's possible that the number of outstanding queries for that server or zone will never reach the limit, rendering this mechanism ineffectual.  Consider setting a very low max-ncache-ttl value instead. Care should also be taken not to configure too low a value for these:

  • fetches-per-server: might negatively impact servers which host many popular zones.
  • fetches-per-zone: might negatively impact some popular social media and other sites.

2. Tiered Resolvers

In a situation where client-facing resolvers globally forward to one or more Internet-facing resolvers:

  • any client-facing (front end) or middle-tier resolvers cannot usefully deploy fetches-per-server; this is because they send all queries to a small number of forwarders, and fetches-per-server, if triggered, would apply to all client queries, both those for 'good' domains, and those for names that are hard to resolve.
  • any Internet-facing (back-end) or middle-tier resolvers whose clients are other recursive servers must be configured to 'fail' fetch-limited client queries (respond with SERVFAIL) rather than 'drop' them; note that the default for fetches-per-zone is 'drop', whereas the default for fetches-per-server is 'fail' (see the configuration sketch after this list).
  • all tiers of resolver should be able to use fetches-per-zone, but administrators may need to monitor and apply different limits to each.
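
As a minimal sketch of the back-end case described above (the limit value is illustrative, not a recommendation):

    options {
        // On a back-end resolver whose clients are other resolvers, answer
        // over-quota queries with SERVFAIL instead of the fetches-per-zone
        // default of silently dropping them.
        fetches-per-zone 40 fail;
    };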

3. clients-per-query is disabled or is too large

If you have disabled clients-per-query entirely, or have configured large limits, then fetch-limits will be less effective when a popular service's nameservers are unreachable and where the majority of client queries are for a small number of unique names. See How does clients-per-query work?

The effectiveness of fetch-limits is reduced in this situation because the clients-per-query configuration controls how many clients are allowed to wait for a single Internet fetch (or series of fetches) to complete. All of the waiting clients will be added to the recursive clients list, whereas between them they have only generated a single fetch, making it significantly less likely that fetches-per-zone or fetches-per-server will be triggered before the recursive-clients limit is reached.
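
For reference, a sketch showing these related options together; the values are the documented defaults in recent BIND versions, but do verify them against the ARM for your release:

    options {
        // Initial number of clients allowed to wait on the same fetch;
        // named raises this limit dynamically, up to max-clients-per-query.
        clients-per-query 10;
        max-clients-per-query 100;

        // Overall ceiling on simultaneous recursive client contexts.
        recursive-clients 1000;
    };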

Are there any edge cases where odd behavior might be observed?

When restarting a server, or if the cache has just been cleared via the rndc utility, then there may be some temporary spikes in traffic that trigger these limits unexpectedly, but the effect should be temporary.

Depending on the rate and profile of client queries, a misconfigured authoritative server that fails to respond to queries for one or more of its zones, or that fails to respond only to queries for a specific record type (for example, AAAA), could find itself being limited by fetches-per-server, even though it can respond normally for all other zones.

What happens when a client query is dropped as a result of fetches-per-server/zone rate-limiting?

Client queries that exceed the rate-limiting quotas are either sent a SERVFAIL response or are silently dropped.  The choice to SERVFAIL or to drop is configurable, but the default is not the same for the two rate limiters: when fetches-per-zone is enabled, the default behaviour when rate-limiting is active is to drop queries that exceed the limit, whereas for fetches-per-server, the default is to SERVFAIL.

How can I find out how this configuration option is impacting my server?

rndc recursing now reports the list of current fetches, with statistics on how many are active, how many have been allowed and how many have been dropped due to exceeding the fetches-per-server and fetches-per-zone quotas.

You can also monitor the BIND statistics - two new counters have been added:

  • ZoneQuota counts the number of client queries that are dropped or sent SERVFAIL due to the fetches-per-zone limit being reached.
  • ServerQuota counts the number of client queries that are dropped or sent SERVFAIL due to the fetches-per-server limit.

Relying on logging for statistical purposes will produce inconsistent results
When applying Recursive Client Rate limiting, logging is emitted at intervals, but the logging of per-zone statistics may sporadically reset back to the original value (when the structure that was capturing the values is released).  The logging is useful as an indication that Recursive Client Rate limiting is active during a time period, and to what extent client queries are being dropped, but BIND's statistics provide a much more accurate set of counters for graphing and statistics.
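
One way to collect these counters reliably is from BIND's statistics channel rather than from the logs; a minimal sketch, with the address and port as examples only:

    statistics-channels {
        // Serve XML/JSON statistics on the loopback interface only;
        // point a graphing or monitoring tool at http://127.0.0.1:8053/.
        inet 127.0.0.1 port 8053 allow { 127.0.0.1; };
    };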

On a normal, busy DNS server, that is not under attack, should there ever be a backlog of more than a few clients?

Client details are stored as 'Recursive Clients' when the query they have made is a cache miss.  That is going to happen more often with domains that deploy content management solutions that intentionally serve answers with short TTLs.  You're also going to see a spike in the backlog when you first start named (with an empty cache) and if you issue rndc flush to clear the entire cache.

Sometimes there's a short-lived increase in the backlog of recursive clients when the NS records for a popular domain have expired from cache and are being refreshed.

As for 'not under attack' - actually, with the traffic patterns we covered, it is the authoritative domains that are under attack, not the recursive servers.  The point here is that if there is a situation where you're receiving a lot of unique queries for domain(s) whose authoritative servers are not responding (perhaps they have an unfortunate outage, or perhaps they are under attack, but not via your server), then there could still be impact on your recursive server in the form of a backlog of recursive clients (though likely not as many as if your server is actively involved in the attack).

The 'normal' level of backlog for your server is going to depend on your query rate, your cache hit rate (so the types of queries that your clients are making), and also, to some extent, your Internet connectivity.  The longer it takes for a 'normal' client query to be resolved, the larger the backlog will be.

We recommend monitoring to determine what is 'normal' for your server, and configuring limits appropriate to your own environment.

When looking at outbound open sockets and named.recursing - if a query was forwarded to my server via another recursive/caching server - would I see the IP address of the query originator or the IP address of the caching server?

When you use the rndc recursing command to dump the current recursive clients backlog, what you will see in that file is:

  • The source IP address (and port) of the client
  • The query the client has made

In the situation that the client is a forwarding server itself, then on your server you will see the IP address of the forwarding server.  (You would have to do the same investigation on the forwarding server to see the originating clients.)

Have you seen the clients issuing these queries to be real clients, or simply botnets sending spoofed addresses? If the latter, does sending responses cause issues?

The clients are 'real'.  Sometimes they are relaying the traffic (e.g. in the broken CPE device scenario).  Sometimes they are compromised.  Sending query responses shouldn't make a significant difference to the outcome:

  • Most answers are going to be ignored by the sender anyway (the point of the attack is to send a high volume of unique queries, not to process the answers).
  • If the attacking client has used standard DNS resolver libraries (as opposed to constructing the DNS query packets itself), then an NXDOMAIN response may be better than SERVFAIL or no response, as this is a proper and final answer.
  • A SERVFAIL response may cause the client to re-send the same query again (or to try another resolver).
  • No response may also cause the client to retry, but it will not do so until it reaches its local timeout.

In the case of spoofed source addresses, a client that did not initiate the query will not have a socket open awaiting a response, so it should just ignore the query response and drop it (or it might send back an ICMP error).

We have not seen any significant reports of problems caused by this.  (It is no different a scenario than that of a late SERVFAIL response to a genuine client, where the client had stopped waiting for the response before the SERVFAIL was received from the server.)

Suppose the attack is on microsoft.com; do you think it would be correct to make the resolver locally authoritative for this domain? That would be a disaster.

Yes, this would be a disaster.  The BIND tuning options fetches-per-server and (with a suitably large value) fetches-per-zone will be the better choice of mitigation strategy for this case.

Note, though, that unless the authoritative servers for microsoft.com are failing to respond, neither of those options will make a difference, because there will not be a backlog of client queries.

In this case, you may want to make sure that you have specified a low value (e.g. 10 seconds) for max-ncache-ttl so that the received NXDOMAINs don't occupy your cache for too long.
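
For example, using the 10-second figure suggested above:

    options {
        // Cap the lifetime of negative (e.g. NXDOMAIN) cache entries.
        max-ncache-ttl 10;    // seconds
    };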

Microsoft has indicated that the random queries for SOAs are expected behavior for DirectAccess. Have you encountered this?

There are a number of applications that make limited use of random name queries to establish 'status' - for example, Chrome does this too. Providing that the servers authoritative for the domain respond promptly - and we would hope that they do, if the designers of these probes have provisioned them adequately - then there should not be any detrimental impact on a recursive server handling those queries.

On my server, I see attacks from many countries - Canada, Singapore, USA, China - with random IPs and random subdomains.

We would guess that you are running authoritative servers - in which case we would recommend that you look instead at the Response Rate Limiting (RRL) options for authoritative servers.  (Recursive Client Rate Limiting is intended for recursive servers.)  Please see: Using the Response Rate Limiting Feature in BIND 9.10

If you are seeing queries from many countries on your recursive servers, then ISC would recommend that you check your ACLs and make sure that you're not accidentally running an open resolver.
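
A minimal sketch of a closed resolver configuration; the network 192.0.2.0/24 is a placeholder for your own client address ranges:

    acl "trusted-clients" {
        localhost;
        192.0.2.0/24;    // replace with your client networks
    };

    options {
        recursion yes;
        // Only answer recursive queries and cache lookups from known clients.
        allow-recursion { trusted-clients; };
        allow-query-cache { trusted-clients; };
    };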

Even if the CPE devices of the customers of an ISP are bad, it is generally not possible for the ISP to block them from the ISP's services.

In the case of the CPE devices that are accepting and proxying queries from the Internet to the ISP resolvers, yes, it would be bad to block the CPE devices entirely.  But it would not be unreasonable to block queries to port 53 on the Internet interface of those devices.  This query traffic will be reaching them through the ISP's network and passing their perimeter routers and firewalls in order to reach the CPE devices.

Is there any information about an increase in CPU/memory that could be attributed to the use of this automated mitigation method?

Neither our own tests nor reports from our testing partners have shown any increase in CPU or memory consumption that can be attributed to the automated mitigation (fetches-per-server/zone).

There will, however, be a significant impact on the memory consumption of BIND during an unmitigated attack.  This is due to both the increase of cached NXDOMAIN responses received from the authoritative servers (when they are able to respond), and the additional overhead of maintaining an increased backlog of recursive client contexts.

Don't forget that, without the mitigation techniques in place, your servers could be completely overwhelmed under an attack.

Isn't a SERVFAIL to the client the same outcome anyway, without Recursive Client Rate limiting?

There are two types of SERVFAILs that occur during an unmitigated attack.  The first type are the SERVFAILs sent back in response to the 'bad' client queries when those queries time out.  (When you are using fetches-per-server/zone, those are still sent back, but immediately instead of after the timeout).

The second type of SERVFAILs are those being sent back to 'good' client queries that cannot be handled because the server is overwhelmed, but which otherwise could be answered promptly.  We implement the mitigation techniques to eliminate this second set, so that legitimate clients are unaffected (as much as possible) by the attack.

Has there been any discussion on implementing a new RCODE that could indicate to the client that the domain is blocked or temporarily unavailable?

There has been some recent discussion on the IETF's dnsop mailing list about adding new RCODEs for various purposes, but nothing that addresses this particular case. You can follow and participate in the IETF working groups here:

https://www.ietf.org/

An immediate SERVFAIL would allow the client to query a different server much sooner, avoiding delays; that is much better than waiting for a timeout if the query is dropped.

Please see the earlier FAQ on different types of answers and the impact of each.  But yes, that is one possible outcome.

What limitations do you see in the solution as released now?

We have not yet implemented any per-server or per-zone overrides, which is something we would have liked to do; these are likely to follow in a subsequent release, unless we come up with even better ideas!

One clear limitation of the fetches-per-zone/server settings is that they only trigger and start rate-limiting if the authoritative servers fail to respond - it is the failure to respond that is driving the recursive client backlog (and similarly the outstanding 'fetches').

When the authoritative servers do respond, the recursive server will still be handling higher query rates than usual (but without a backlog).  It will also still be sending the DDoS queries to the authoritative servers, as well as building up a much larger proportion of cached NXDOMAIN entries.

In this one situation the technique of generating a local NXDOMAIN might be more effective than fetches-per-server/zone, although as noted above, you cannot reasonably respond NXDOMAIN for all queries for, e.g., microsoft.com.
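
In BIND, one way to generate local NXDOMAIN answers for a known-bad domain is a Response Policy Zone (RPZ); this is a hedged sketch in which the zone name rpz.local, its file, and the example.net target are all placeholders:

    options {
        response-policy { zone "rpz.local"; };
    };

    zone "rpz.local" {
        type master;
        file "rpz.local.db";
        // An entry in the zone file such as:
        //   *.example.net  CNAME  .
        // rewrites answers for anything under example.net to NXDOMAIN.
    };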

Really this needs a more sophisticated automated filtering approach than we currently have available - if you have any good ideas (we have some that we're discussing internally), please do submit them to: bind-suggest@isc.org.

What if you are running authoritative servers?

We have a good presentation on using RRL on the authoritative server here: https://www.isc.org/presentations/; scroll down and look for RRL.

Is anything being done to look at the authoritative server responses during an attack?  In other words, to recognize that there is an unusual number of NXDOMAIN responses?

This is another potential approach that we have not yet fully explored.  If you have any good ideas on how to distinguish (reliably) between DDoS queries that would most likely receive an NXDOMAIN response, and genuine client queries for the same domain that would not, we'd love to hear them.  Please submit your ideas to bind-suggest@isc.org.

Is there a way to specify IPs in the configuration that are not subject to any rate-limiting, for administrative and/or diagnostic purposes?

Not yet - this is something we are considering for a future release.

RRL affects all query traffic, doesn't it?

By default, RRL only affects non-recursive queries - those sent to authoritative servers.  It is possible though to make it apply to recursive queries too, although this is not usually recommended. Note that RRL is intended to protect authoritative servers and their clients.

Rate limiting with RRL (as opposed to Recursive Client Rate Limiting) is applied to the query responses, that is, after recursion has taken place. There are some unusual corner case scenarios however where a resolver's cache could be used in a reflection attack; if you believe this is happening on your servers, then RRL could help you reduce the effects.
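
For an authoritative server, a hedged starting point for RRL might look like the following; the numbers are illustrative and should be tuned to your traffic:

    options {
        rate-limit {
            // Identical responses allowed per second, per client netblock.
            responses-per-second 5;
            // Apply a similar limit to error responses such as NXDOMAIN.
            errors-per-second 5;
            // Let every second over-limit response go out truncated (TC=1),
            // so legitimate clients can retry over TCP.
            slip 2;
        };
    };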

Last year, we saw that during a random-subdomain attack on a domain, a resolver that did not receive any answer from the authoritative servers was looping for several seconds without getting an answer. Did you improve the timeouts in BIND?

We did make some careful tuning changes between BIND 9.9 and 9.10, but reducing the timeouts in BIND (when querying authoritative servers) can also produce unexpected and unwanted side-effects, such as names that no longer resolve for clients.

So even with the updated algorithm the same principles apply: BIND will retry all of the servers authoritative for a domain several times, adjusting the EDNS buffer size (the maximum UDP packet size) and eventually disabling EDNS altogether.

There is a need for BIND to 'try hard' because the Internet is an imperfect place, abounding in broken DNS implementations and middleware (routers, load balancers, firewalls, etc.) that behave incorrectly.  If we configure BIND to give up more easily, then we see resolution failure rates that are unacceptable to most ISPs.

Can I be confident to put this into production at a very large ISP now?

Yes!  That is why it is now available in open source BIND.

We have already completed extensive testing in partnership with a number of our customers (so that we could control the distribution, collect feedback, and let early adopters/testers know of any problems and updates as soon as possible).

We're now very confident that Recursive Client Rate limiting addresses many of the problems being encountered, and that it shouldn't introduce new issues (except when inappropriately configured).

Is it possible to limit this type of DDoS traffic (pseudo-random subdomain queries) before it reaches the DNS server?

This depends on the source of the queries.  If you are not yourself running an open resolver, then you do have some control over your legitimate clients, and the DDoS traffic will be originating with those clients, either directly or indirectly.

If the DDoS traffic is being proxied by broken (open) CPE devices, then one very simple technique will be to block, at your network boundary, any query traffic to port 53 destined for your client IP addresses.  There may be some exceptions (that you can deal with individually), but generally most home users are not running DNS servers, and therefore do not need to be accepting DNS queries to port 53 on their Internet-facing address.

In the other scenario, where the traffic originates from compromised clients and devices, it's usually clear (from the volumes involved) that there is only a small number of them; these would be easier to deal with individually.  If they're sending DNS DDoS traffic, then it's quite likely that this is not their only point of compromise; blocking their access to your network could be a good response to the situation (but do check for address spoofing first).

If you are running an open resolver, then you have much less control over the content of queries that you're receiving, and the problem then becomes one of where to install the filters for best effect.  A well-designed flood of inbound query packets (or of any kind of network traffic) could easily overwhelm your inbound network bandwidth, before reaching the DNS server.  Assuming that it is only DNS traffic that you want to rate-limit (all other traffic being blocked), then we'd suggest a hash-bucket based approach, where you configure rate-limiting on a per-pool basis, the source addresses being randomly assigned to a pool based on a hash algorithm.  This will probably have mixed success, dropping some good client queries along with those that you are attempting to block.