UDP Listeners - choosing the right value for -U when starting named

  • Updated on 20 Sep 2018

BIND 9.9.0 introduced a new feature to improve performance in multi-threaded environments, particularly those with a large number of processors.  The reasons for this are documented here:

Performance: Multi-threaded I/O (https://kb.isc.org/docs/aa-00629)

In later versions (9.9.6 and 9.10.0), we reduced the default number of UDP listeners per interface from the number of worker threads to half that value:

The default setting for the -U option (setting the number of UDP listeners per interface) has been adjusted to improve performance. [RT #35417]

Then, in 9.9.9 and 9.10.4, the default was updated again:

On machines with 2 or more processors (CPU), the default value for the number of UDP listeners has been changed to the number of detected processors minus one. [RT #40761]

Essentially, there's no one setting that's correct for every system; the best we can do is pick a value that works well in the largest number of circumstances.

On many platforms, we found that when the default number of UDP listeners was the same as the number of CPUs, lock contention between the listeners slowed overall system performance.  So we tested several different -U settings on several different operating systems, and found that on most of the n-processor systems we tried, performance increased until the number of UDP listeners reached n/2, then flattened out, and finally dropped significantly as it approached n.  Therefore n/2 seems to be a better place to start than n.

There are many factors that influence the 'right' choice of -U, including the type of OS, CPU and machine architecture, and also how many interfaces you have configured to listen on.  If you have a large number of interfaces (virtual or otherwise), setting -U to an even smaller value than n/2 may be best.  In other environments, setting -U to the number of CPUs less one may provide the optimum throughput for your machine.
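As a rough, Linux-specific sketch (an assumption about the host, not part of named itself), you can get a quick count of the network interfaces on the machine as one input into that decision:

```shell
# Linux-specific sketch: count network interfaces on this host as one
# input into choosing -U. named opens listeners per interface/address it
# binds to, so a large interface count multiplies the total listeners.
iface_count=$(ls /sys/class/net | wc -l)
echo "Network interfaces on this host: $iface_count"
```

On other operating systems, substitute the equivalent command (for example `ifconfig -a`).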

Our advice is to pick a starting point, and then evaluate (through benchmarking in a test environment, or running for a period with different values in a production environment) what is best for your specific traffic and configuration.  For most systems, n/2 is a good place to start.
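A minimal shell sketch of that starting point, assuming a system with getconf available (the final named invocation is illustrative; substitute your own binary path and options):

```shell
# Sketch: derive an n/2 starting value for -U from the detected CPU count.
cpus=$(getconf _NPROCESSORS_ONLN)   # logical processors detected
udp_listeners=$(( cpus / 2 ))
[ "$udp_listeners" -lt 1 ] && udp_listeners=1   # never go below one listener

echo "Suggested starting point: named -U $udp_listeners"
# e.g.:  /usr/sbin/named -c /etc/named.conf -U "$udp_listeners"
```

From there, benchmark with values above and below n/2 and keep whichever gives the best throughput for your traffic.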

While you're benchmarking, also take a look at the default value of -n (the number of worker threads).  This defaults to the number of CPUs detected but, on systems with very large numbers of CPUs, may not be the best choice.  In particular, when the number of logical CPUs exceeds the number of physical CPUs, setting -n to the number of physical CPUs may improve throughput.
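One hedged way to compare the two counts on Linux (the /proc/cpuinfo parsing here is an assumption about typical Linux layouts; other operating systems need different commands):

```shell
# Linux-specific sketch: compare logical vs physical CPU counts to decide
# whether benchmarking named with -n set to the physical core count is
# worth trying.
logical=$(getconf _NPROCESSORS_ONLN)
# Count unique (physical id, core id) pairs; fall back to the logical
# count when /proc/cpuinfo lacks topology fields (common in VMs/containers).
physical=$(awk -F: '/^physical id/{p=$2} /^core id/{print p ":" $2}' \
           /proc/cpuinfo | sort -u | wc -l)
[ "$physical" -lt 1 ] && physical=$logical

echo "Logical CPUs:   $logical"
echo "Physical cores: $physical"
if [ "$logical" -gt "$physical" ]; then
    echo "SMT/hyper-threading likely enabled; consider testing named -n $physical"
fi
```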

Related articles:

How to determine if you are using a threaded build

How to determine BIND query rates (qps)
