Background info about 'maximum number of FD events' log messages
Log messages reading "maximum number of FD events received" mean that when named checked which sockets were ready to be read from, it found more than 64 of them.
These log messages are not related to the maximum number of open file descriptors. They indicate that when the I/O watcher polled the open sockets, more socket events were found than the implementation normally expects to see (a value defaulting to 64). This is not an error in and of itself, since the remaining events will simply be returned by the next poll, but it can indicate high socket traffic or activity. If the message is logged frequently and persists for a long period of time, then recompiling with a higher value of ISC_SOCKET_MAXEVENTS may make the message disappear. However, if there is an underlying problem that is not diagnosed and addressed, you may still reach the maximum number of events, no matter how high the limit is set.
On larger machines with high query rates, you can increase this setting when building the named binary by defining ISC_SOCKET_MAXEVENTS when invoking
configure. For example:
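The exact invocation depends on your build environment; the sketch below assumes a source build of BIND 9 in which extra preprocessor defines are passed via the STD_CDEFINES variable, and uses 512 purely as an illustrative value, not a recommendation:

```shell
# Pass a larger ISC_SOCKET_MAXEVENTS to the compiler at configure time.
# 512 is an illustrative value; choose one appropriate for your workload.
STD_CDEFINES="-DISC_SOCKET_MAXEVENTS=512" ./configure
make
```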
There is also a build-time option available for those running large recursive servers that increases this setting along with several others: --with-tuning=large. See the separate article about using this build-time option.
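As a sketch of a typical source-build invocation using that option:

```shell
# Build named with the large-system tuning defaults, which raise
# ISC_SOCKET_MAXEVENTS among other internal settings.
./configure --with-tuning=large
make
```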
Some busy servers may need to run with a higher limit. However, if the message is still being logged constantly after increasing ISC_SOCKET_MAXEVENTS, then something else is most likely happening that you need to investigate: for example, unusual traffic loads, or specific queries or patterns of queries that are overloading your name server so that it cannot service inbound queries, or replies from other authoritative servers, quickly enough.
You may want to check overall stability of the server. For example:
- The ratio of server failures (SERVFAIL) that your server returns to the clients
- The ratio of query replies to queries (compare both of these with what's 'normal' for your server)
- Cache memory footprint
- Cache hit ratio
- UDP packet drops reported by the OS
- Query response times
- Recursive clients (recursive queries currently 'in progress')
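As a rough illustration of the first two checks: query and SERVFAIL counters can be read from the statistics file produced by `rndc stats` (or from the statistics channel, if enabled in named.conf), and the ratio computed from them. The counter values below are invented placeholders:

```shell
# Hypothetical counter values, as might be read from the statistics
# file written by "rndc stats".
QUERIES=100000
SERVFAILS=1500

# Ratio of SERVFAIL responses to total queries, as a percentage.
awk -v q="$QUERIES" -v s="$SERVFAILS" \
    'BEGIN { printf "SERVFAIL ratio: %.2f%%\n", 100 * s / q }'
```

Tracking this ratio over time, and comparing it with what is "normal" for your server, makes sudden changes in overall health easier to spot.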
We also recommend upgrading to one of the latest supported production versions of BIND. Current versions of BIND include changes to cache management that provide greater performance and resilience under some types of client query loads.
For further troubleshooting advice, also see: What to do with a misbehaving BIND server.