--with-tuning=large - about using this build-time option
  • 04 Sep 2018

In BIND 9.10 (and earlier, in the stable preview edition) we added a build-time option, --with-tuning=large.

This option allows operators to tune BIND for better performance on high-memory machines by setting various constants and defaults to values more appropriate for such environments.

Note that except for the MAXSOCKETS control (which can be set with "named -S"), these settings are only available at compile time.

Note also that running a binary built with --with-tuning=large may not help the performance of a smaller, low-end BIND server: it will cause named to consume more resources than it needs, which can itself cause problems rather than improving throughput.
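For reference, enabling the option when building from a BIND 9.10+ source tree might look like the following build fragment (the make step and any other configure arguments are illustrative):

```
# Build sketch: enable the large-system tuning profile at configure time.
./configure --with-tuning=large
make
```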

Here's a short summary of the individual internal changes - and what impact they might be expected to have on a server:

1.  ISC_SOCKET_MAXEVENTS changed from 64 to 1024

This is the maximum number of events communicated with the kernel; see lib/isc/unix/socket.c in the source tree.

2.  ISC_SOCKET_MAXSOCKETS changed from 4096 to 21000

"named -S $number" can set this value higher, but not lower.

This is the maximum number of sockets named can use. Again see lib/isc/unix/socket.c in the source tree; see also the documentation of "-S" in the named(8) man pages.

Both of the above changes require slightly more memory, but not much; they should not unduly impact smaller systems that do not need those resource levels.

3.  RCVBUFSIZE changed from 32K to 16M

Increasing RCVBUFSIZE (the receive buffer size) will reduce dropped packets, but it may also hurt socket performance on some platforms; the Linux kernel allocates the receive buffer space when creating a socket, and an increase from 32K to 16M allocated per socket is potentially significant.

4.  RESOLVER_NTASKS changed from 31 to 523

Increasing the number of resolver tasks from 31 to 523 reduces lock contention and increases throughput, but it also greatly increases physical memory consumed by named, and so would not be advisable on small systems.

5.  UDPBUFFERS increased from 1000 to 32K; EXCLBUFFERS increased from 4096 to 32K

These last settings are quota changes, so they should only impact memory use if named needs resources that it was previously prevented from using.

If you are concerned about BIND performance, particularly recursive performance, you should also consider:

  • Use the named.conf option "minimal-responses yes;" to reduce the amount of work that named needs to do to assemble each query response
  • Disable (or, in some cases, completely remove in order to prevent ongoing interference) outbound firewalls/packet filters, particularly those that maintain state on connections
  • Increase outbound buffers
  • Ensure that your network infrastructure supports EDNS and large UDP responses up to 4096 bytes
  • Ensure that your network infrastructure allows transit for and reassembly of fragmented UDP packets (these will be large query responses)
  • Ensure that your network infrastructure allows DNS over TCP
  • Check for, and eliminate, any incomplete IPv6 interface set-up (what can go wrong here is that BIND thinks it can use IPv6 authoritative servers, but the sends silently fail, leaving named waiting unnecessarily for responses)
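For the first and fourth checklist items, the relevant named.conf settings might look like the following fragment (edns-udp-size sets the EDNS buffer size named advertises in the queries it sends; shown here as an assumption about your configuration, not a complete options block):

```
options {
    // Return only the answer section where possible, reducing the
    // work needed to assemble each response.
    minimal-responses yes;

    // Advertise support for UDP responses up to 4096 bytes via EDNS.
    edns-udp-size 4096;
};
```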

If you are suffering from performance problems on a Recursive Server, then you might also be interested in our Recursive Client Rate Limiting features.