Recursive Client Rate Limiting in the (superseded) BIND 9.9 Subscription Version and the BIND 9.9 and 9.10 Experimental Versions
Several new tuning options for recursive server behaviour made their debut in BIND 9.9.6-S1 and in the newer BIND 9.9 and 9.10 experimental versions (available on request). These features are intended to optimize recursive server behaviour in favor of good client queries, while limiting the impact that bad client queries (those that cannot be resolved, or that take too long to resolve) have on local recursive server resource use.
Early-testing Experimental Features Removed
The 'hold-down' timer introduced in 9.9.6-S1b1 has been removed in favor of rate-limiting fetches per server (described below). The associated options, holddown-threshold and holddown-time, have also been removed.
Another option introduced in 9.9.6-S1b1, client-soft-quota, was removed in favor of named calculating its own soft quota based on the recursive-clients (or hard quota) setting. For changes in the Client Soft Quota, see below.
If any of those settings from 9.9.6-S1b1 are still in your named.conf file, you will get an error when starting named (or when running named-checkconf).
Rate-limiting Fetches Per Server
Replacing the hold-down timer feature is a dynamic limit to the number of fetches allowed per server (IP).
The fetches-per-server option sets a hard upper limit on the number of outstanding fetches allowed for a single server. The lower limit is 2% of fetches-per-server, but never below 1.
Based on a moving average of the timeout ratio for each server, the server's individual quota will be periodically adjusted up or down. The adjustments up and down are not linear; instead they follow a curve that is initially aggressive but which has a long tail.
The fetch-quota-params option specifies four parameters that control how the per-server fetch limit is calculated.
fetches-per-server 200;
fetch-quota-params 100 0.1 0.3 0.7;
The default value for fetches-per-server is 0, which disables this feature.
The first number in fetch-quota-params specifies how often, in number of queries sent to the server, to recalculate its fetch quota. The default is to recalculate every 100 queries sent.
The second number specifies the threshold timeout ratio below which the server will be considered to be "good" and will have its fetch quota raised if it is below the maximum. The default is 0.1, or 10%.
The third number specifies the threshold timeout ratio above which the server will be considered to be "bad" and will have its fetch quota lowered if it is above the minimum. The default is 0.3, or 30%.
The fourth number specifies the weight given to the most recent counting period when averaging it with the previously held timeout ratio. The default is 0.7, or 70%.
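Putting the four parameters together, the recalculation described above can be sketched as follows. This is an illustrative model only, not BIND's actual implementation: the function names and the fixed step size of 1 are assumptions, since the real adjustments follow a non-linear curve that is initially aggressive but has a long tail.

```python
# Illustrative model of the fetch-quota-params logic; names and the
# step size are hypothetical -- BIND's real adjustment is non-linear.

def update_timeout_ratio(previous, recent, weight=0.7):
    # Fourth parameter: weight of the most recent counting period
    # when averaging it with the previously held timeout ratio.
    return weight * recent + (1 - weight) * previous

def adjust_quota(quota, ratio, maximum, minimum, low=0.1, high=0.3):
    # Second parameter (low): below this ratio the server is "good",
    # so its quota is raised if it is below the maximum.
    if ratio < low and quota < maximum:
        return quota + 1  # placeholder step; the real curve is not linear
    # Third parameter (high): above this ratio the server is "bad",
    # so its quota is lowered if it is above the minimum.
    if ratio > high and quota > minimum:
        return quota - 1
    return quota
```

With the defaults, a server whose most recent counting period had a 20% timeout ratio and whose previous ratio was 0% ends the period with a blended ratio of 14%, which falls between the two thresholds and leaves its quota unchanged.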
By design, this per-server quota should have little impact on lightly-used servers no matter how responsive (or not) they are, whilst heavily-used servers will have enough traffic to keep the moving average of their timeout ratio "fresh" even when they are deeply penalized for not responding.
Rate-limiting Fetches Per Zone
BIND already has an option that limits how many identical client queries (those that cannot be answered directly from cache or authoritative zone data) it will accept. When many clients simultaneously query for the same name and type, the clients will all be attached to the same fetch, up to the max-clients-per-query limit, and only one iterative query will be sent. This doesn't help, however, when the client queries are all for the same domain but the hostname portion of each query is unique.
To help with this, we're introducing logic to rate-limit by zone instead. This is configured using a new option, fetches-per-zone, which defines the maximum number of simultaneous iterative queries to any one domain that the server will permit before blocking new queries for data in or beneath that zone. If fetches-per-zone is set to zero, there is no limit on the number of fetches per zone and no queries will be dropped.
The default is 0, which disables this feature. (In earlier versions it was 200.)
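For example, to allow at most 100 simultaneous iterative queries into any one zone (the value 100 here is purely illustrative, not a recommendation):

fetches-per-zone 100;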
When a fetch context is created to carry out an iterative query, it is initialized with the closest known zone cut, and a cap is placed on the number of fetches that are allowed to be querying for that same zone cut at any one time.
FAQs on Rate-limiting Fetches Per Zone/Server
What happens when a client query is dropped as a result of fetches-per-server/zone rate-limiting?
Clients whose queries are dropped due to client rate-limiting quotas are sent a SERVFAIL response.
When are these features useful?
These options are particularly good when a large number of queries are being received:
- fetches-per-zone: for different names in the same zone
- fetches-per-server: for different names in different domains for the same server
... when these authoritative servers are slow at responding or are failing to respond. They should not impact popular domains whose servers are responding promptly to each query received.
When are these features unlikely to be helpful?
If authoritative servers are responding very quickly, then it's possible that the number of outstanding queries for that server or zone will never reach the limit, rendering this mechanism ineffectual. Care should also be taken not to configure too low a value for these:
- fetches-per-server: too low a value might negatively impact servers which host many popular zones.
- fetches-per-zone: too low a value might negatively impact some popular social media and other sites.
Are there any edge cases where odd behavior might be observed?
When restarting a server, or if the cache has just been cleared via the rndc utility, there may be some temporary spikes in traffic that trigger these limits unexpectedly, but the effect should be short-lived.
How can I find out how this configuration option is impacting my server?
rndc recursing now reports the list of current fetches, with statistics on how many are active, how many have been allowed and how many have been dropped due to exceeding the fetches-per-server and fetches-per-zone quotas.
Client Drop Policy
This feature was introduced following the observation that the build-up of recursive clients is very similar in behavior to a TCP SYN storm. Researchers have determined that, when the pool of connections becomes full, dropping a connection other than the oldest (in our case, a recursive client other than the oldest) is a more effective strategy than always dropping the oldest. This is because a random drop has a good chance of discarding one of the 'bad' connections rather than one of the 'OK' ones, and of doing so sooner rather than later, which works out better overall. This code also works best in combination with tuning of the recursive clients soft limit, so that the recursive server is never in the position of hitting the hard limit: we always want to accept the new inbound query.
The client-drop-policy option lets you set the probabilities of dropping the oldest, the newest, or a random existing recursive query (the three values must sum to 100) when the recursive-clients quota is reached. (Note: whether the soft or the hard limit is reached, an existing recursive client is dropped; when the hard limit is reached, the new inbound query is dropped as well.)
client-drop-policy has three arguments that define percentage probabilities for "drop newest", "drop random" and "drop oldest" in that order. All three values must be set, and they must sum to exactly 100. By default, the probabilities of these are 0% for drop newest, 50% for drop random, and 50% for drop oldest.
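For example, to disable "drop newest" entirely and bias the remainder towards dropping the oldest client (these particular values are illustrative, not a recommendation):

client-drop-policy 0 25 75;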
Recursive Client Contexts Soft Quota
In the traditional recursive clients context model, we have both a soft and a hard limit to the number of recursive clients. When reached, the soft limit acts by dropping a pending request for each new incoming request. When named reaches the hard limit, it drops both a pending request, and the new inbound client query. So ideally we want named to be managing its backlog of recursive clients before reaching the hard limit.
There is no soft limit at all in the traditional model when recursive-clients <= 1000. For recursive-clients > 1000, the soft quota defaults to the hard quota minus 100.
In 9.9.6-S1b1 we introduced the client-soft-quota option to give the operator precise control over how the soft quota was configured. In testing since the introduction of this option we have determined that tuning this is not very useful, but that better defaults were needed than we had before.
Now, when recursive-clients <= 1000, the soft quota is 90% of recursive-clients. When recursive-clients > 1000, the soft quota will be equal to the hard quota minus either 100 or the number of worker threads, whichever is greater.
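The new defaults can be expressed as a small calculation. This sketch is illustrative only; the function name and the use of integer arithmetic are assumptions, not BIND source:

```python
def soft_quota(recursive_clients, worker_threads):
    # Illustrative model of the default soft quota described above.
    if recursive_clients <= 1000:
        # Soft quota is 90% of recursive-clients.
        return recursive_clients * 90 // 100
    # Above 1000: hard quota minus max(100, number of worker threads).
    return recursive_clients - max(100, worker_threads)
```

So, for example, a server configured with recursive-clients 3000 and running 8 worker threads would get a soft quota of 2900.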
Caching of SERVFAIL responses
Introduced in 9.9.6-S1 is a new feature that caches a SERVFAIL response resulting from a DNSSEC validation failure or other general server failure. This feature is controlled by the servfail-ttl option, in global or per-view options.
The SERVFAIL cache is not consulted if a query has the CD (Checking Disabled) bit set; this allows a query that failed due to DNSSEC validation to be retried without waiting for the SERVFAIL TTL to expire.
The default value for servfail-ttl is 10, which causes any SERVFAIL results to be cached for 10 seconds. The maximum value is 300 (five minutes); a higher value will be silently reduced to 300. A value of 0 disables this feature.
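For example, to shorten the caching of SERVFAIL results from the default 10 seconds to two seconds:

servfail-ttl 2;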
SERVFAIL caching addresses the same problems as fetches-per-zone and fetches-per-server: it limits the impact of repeated queries (due to client retries) for a name whose resolution has already failed. Note, however, that this caching can have unexpected consequences, as previously all SERVFAIL responses were retried immediately when re-queried.
Production environments continuing to use the older versions of BIND that include this feature are recommended to disable it by setting servfail-ttl 0; or, if they are deriving clear benefit from it, to consider setting it lower than the default of 10 seconds; 1 or 2 seconds should be sufficient.