Purpose of this test
Kea is designed to connect and use database backends for storage of leases, host reservations, and even most of the configuration. Using clustering technology to provide a single source of backend data enables the operator to quickly spin up new virtual machines to provide the database component of this system. Any new VM can simply become a node in the existing cluster, where it will quickly acquire all the data. This facilitates moving the database and Kea virtual machines around on the network, by minimizing the configuration overhead for each new VM. Kea supports both MySQL and PostgreSQL database backends. Previous testing has confirmed that Galera clusters can function as Kea backends; this test was to determine if the equivalent functionality for PostgreSQL would also work with Kea.
The Postgres project calls its replication feature "High Availability" and uses the terms "pools" and "load-balancing," because the feature is designed to support database high availability. Kea uses the same "High Availability" term for its own DHCP failover feature, but the Postgres High Availability system does not, by itself, provide high availability for Kea DHCP services.
Summary of Results
Postgres is capable of several High Availability configurations, both with and without PGPool-II. Both configurations have been confirmed to work in all three Kea backend roles (lease database, host reservations database, and configuration database).
Only connection and usage testing were performed. This was not a load or stress test, and no performance data was gathered.
This was an experiment to confirm the basic functionality of using a PostgreSQL cluster as a Kea database backend. In general, ISC does not prescribe how to configure your chosen database software, as we are not experts in database software.
Security was not a concern in this testing as all of the tests were performed with virtual systems that were only accessible to each other on the local host machine. Operators should consider their own network configuration and security requirements; adjustments may be required for a secure configuration.
Test Design
The purpose of the test was to confirm that data provided by Kea to one database node was properly propagated and available to Kea from another node.
In all of the tests, these general parameters were used:
- All test virtual machines were Debian GNU/Linux 11 (bullseye).
- PostgreSQL 15 and PGPool-II 4.3.5 were the latest available versions from the official Postgres Debian repository at the time of testing.
- There were two PostgreSQL virtual machines (db01 and db02).
- There was one Kea 2.3.6 virtual machine used.
- There was one additional virtual machine running perfdhcp, used to simulate DHCP clients.
- The Kea server instance and the perfdhcp server instance both had an additional interface that is part of the 10.1.2.0/24 subnet.
- Only DHCPv4 was tested but results should be similar for DHCPv6.
The virtual machines called db01 and db02 had entries in the /etc/hosts files on all relevant virtual machines so that the names could be used:
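For illustration, the entries looked something like the fragment below. The addresses shown here are placeholders; the actual lab addresses for db01 and db02 are not recorded in this document.

```
192.168.115.130    db01
192.168.115.131    db02
```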
Unless otherwise noted, db01 was the primary and db02 was the secondary for the purposes of PostgreSQL High Availability.
Configuration Steps
The steps and subsequent tests shown were performed to verify the functionality; you can follow along and repeat the testing if you wish. It is possible that you could then use the resulting configuration in production, with some modification.
Install the PostgreSQL software
There are Debian-maintained versions of PostgreSQL; however, for these tests, the Postgres-maintained version was used.
Add the repository
The repository and verification key for the Postgres packages was added to db01 and db02 as shown below:
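The exact commands used in the lab were not preserved; at the time of testing, the approach documented by the Postgres project was roughly the following (run on both db01 and db02):

```
# Add the official PostgreSQL apt repository for this Debian release
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'

# Import the repository signing key
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
```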
Install the software
PostgreSQL was then installed on db01 and db02 as shown below:
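A sketch of the installation commands, assuming the versioned Debian package name used by the official repository:

```
sudo apt update
sudo apt -y install postgresql-15
```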
At this point, PostgreSQL could be used as a single database store with no HA after starting the server (if everything went OK, it should already be running). There is more involved with setting up HA, however.
Timezone
It is very important that the timezones match across all of the servers running Kea, PostgreSQL, or PGPool-II when setting up the database, particularly when the Kea server and the PostgreSQL server are not the same machine. This is detailed in the ARM. If the timezones are not the same, very strange errors will occur during simple operations (such as lease renewal). The only supported timezone is 'UTC'; it may be possible to use other timezones, but it is not recommended.
First, go through each server and confirm that it is set to UTC as follows (Debian 11 commands shown; other systems may use some other method):
As can be seen above, this server's local time is set to EDT. Modify it to UTC as follows:
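Both the check and the change use timedatectl on Debian 11:

```
timedatectl                         # inspect the "Local time" and "Time zone" lines
sudo timedatectl set-timezone UTC   # switch the system timezone to UTC
```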
Then check again using the first command. It should now show UTC on both the Local time and Time zone lines.
On the PostgreSQL servers (db01 and db02), it is necessary to alter settings in /etc/postgresql/15/main/postgresql.conf, specifically changing the timezone = setting to timezone = 'UTC'. It may also help to change log_timezone = to log_timezone = 'UTC' as well.
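The relevant lines in postgresql.conf should end up as:

```
timezone = 'UTC'
log_timezone = 'UTC'
```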
Test One: PostgreSQL HA
In this mode, there are two PostgreSQL servers configured with built-in streaming replication. Only one of the servers is writeable; the other server is a replica only. The secondary (replica) server can be promoted to primary (writeable) in the event of a failure on the existing primary. According to the documentation, connecting to the writeable server is meant to be accomplished by using DNS records. The documentation doesn't mention it, but it might be possible to use VRRP to control the access as well.
The Kea Configuration
For this test, a DNS name was used as shown in the partial config below:
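A sketch of what that partial configuration looks like; the database name, user, and password shown here are placeholders, while the host value is the db01 DNS name described above:

```
"lease-database": {
    "type": "postgresql",
    "name": "kea",
    "host": "db01",
    "user": "kea",
    "password": "secret"
}
```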
One can imagine that if the current db01 failed, the config above could be changed to connect to db02. Alternatively, a db03 entry could be added to /etc/hosts, pointing to a floating IP address, or simply edited to point to either db01's or db02's IP address, depending on which was the current primary (writeable) server. This hypothetical db03 could then be used as the host value in the partial configuration above instead.
Configuring PostgreSQL HA
First, some changes need to be made to /etc/postgresql/15/main/postgresql.conf on both servers to enable the replication. The file should be owned by postgres, so you can edit it as the user postgres. All of these lines should already exist in the aforementioned configuration file. Alter them to the values shown below on both servers:
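The exact values used in the lab were not preserved; a typical minimal set of postgresql.conf settings for streaming replication looks like this:

```
listen_addresses = '*'        # accept connections from the other node and from Kea
wal_level = replica           # write enough WAL detail for a standby
max_wal_senders = 10          # allow replication connections
max_replication_slots = 10    # allow replication slots to be created
hot_standby = on              # permit read-only queries on the standby
```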
Note that other values for some of the above fields may yield different performance results.
Next, the replication hosts need to be configured to allow communication amongst themselves. Permission for db02 to connect to postgres as the replication user on db01 needs to be added to the bottom of /etc/postgresql/15/main/pg_hba.conf on both servers, as shown below.
First, disable these lines:
Any line near the bottom of the file that starts with "host" should, at this point, be disabled.
Then add the below section:
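A sketch of such a section; the replication role name "replicator" is an assumption here (it must match the role created on the primary later):

```
# replication connections between the two database nodes
host    replication     replicator      db01            trust
host    replication     replicator      db02            trust
```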
This is also a good time to allow the kea user to connect from the Kea server to the Kea database. Add the below section as well:
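A sketch of that section, using the Kea server IP given below; the "kea" database and user names are assumptions:

```
# allow the kea user to connect from the Kea server to the kea database
host    kea             kea             192.168.115.128/32      trust
```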
Note that the final line shows the IP 192.168.115.128/32. This is the IP address of the Kea server in the test.
This is an area where security could be an issue. The trust keyword causes the connection to be allowed with no password. There may be other concerns here as well. This is suitable in a private environment such as our test lab environment, but care should be taken in production.
Now restart the PostgreSQL service on both servers:
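On Debian this is simply:

```
sudo systemctl restart postgresql
```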
Configure the database service for replication
On the primary (db01), these commands need to be run to set up the replication slots and add the replication user as shown below:
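A sketch of those steps; the role name "replicator" and slot name "replica_slot" are assumptions, but must match the pg_hba.conf entries and the standby's connection settings:

```
sudo -u postgres psql
postgres=# CREATE ROLE replicator WITH REPLICATION LOGIN;
postgres=# SELECT * FROM pg_create_physical_replication_slot('replica_slot');
```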
On the secondary (db02), the datastore directory needs to be cleared out in preparation for replicating the current state from the primary (db01). The commands shown below accomplish this:
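Assuming the default Debian data directory for PostgreSQL 15, this amounts to stopping the service and emptying the datastore:

```
sudo systemctl stop postgresql
sudo -u postgres rm -rf /var/lib/postgresql/15/main/*
```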
Then, again on the secondary, use pg_basebackup to copy the current state from the primary as shown:
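A sketch, assuming the "replicator" role from the earlier step; since the primary connection info is added to postgresql.conf manually in the next step (rather than via pg_basebackup's -R option), the standby.signal file also needs to be created so the server starts as a replica:

```
sudo -u postgres pg_basebackup -h db01 -U replicator -D /var/lib/postgresql/15/main -P
sudo -u postgres touch /var/lib/postgresql/15/main/standby.signal
```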
Now tell the secondary (db02) how to connect to the primary to replicate the data by adding the below to the bottom of /etc/postgresql/15/main/postgresql.conf:
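A sketch of those lines, assuming the "replicator" role and "replica_slot" slot names used earlier:

```
primary_conninfo = 'host=db01 port=5432 user=replicator'
primary_slot_name = 'replica_slot'
```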
Now restart PostgreSQL on both servers.
Check that the logs look okay on both servers with the command sudo tail /var/log/postgresql/postgresql-15-main.log. The db01 server should have a line like this (probably the last line):
db02 should have these few lines (probably the final lines):
Test that Replication is working
At this point you can test that the replication is working by running these commands on the primary:
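For example, creating a throwaway database on the primary:

```
sudo -u postgres psql
postgres=# CREATE DATABASE testdb;
```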
This newly created testdb should appear on the secondary. Test that theory by performing the following on the secondary:
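Listing the databases on the secondary should now include testdb:

```
sudo -u postgres psql
postgres=# \l
```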
Create the Kea database for use by Kea
Not many additional steps are required to create the database for use by Kea, apart from remembering that this needs to be done on the primary (db01). Simply follow the normal instructions for creating the database as shown in the ARM, with one small difference: since this is PostgreSQL 15, an additional step must be performed while granting permission to the user to connect to the database Kea will use. This is an additional security parameter that must be added, as the defaults were changed in PostgreSQL 15. Add all privileges on the "public" schema to the Kea user as shown below (replace "kea" with the actual database name if different):
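From a psql session connected to the Kea database on the primary, the grant looks like this (PostgreSQL 15 revoked CREATE on the public schema from ordinary users by default):

```
GRANT ALL PRIVILEGES ON SCHEMA public TO kea;
```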
Until the above is done, it won't be possible to set up the database using the kea-admin command remotely from the Kea server.
Testing Kea with PostgreSQL streaming replication
Configure the Kea server to use postgresql for the lease-database as shown in this simple configuration:
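A minimal sketch of such a configuration; the interface name, credentials, and pool range are assumptions, while the 10.1.2.0/24 subnet matches the test setup described earlier:

```
{
  "Dhcp4": {
    "interfaces-config": { "interfaces": [ "ens256" ] },
    "lease-database": {
      "type": "postgresql",
      "name": "kea",
      "host": "db01",
      "user": "kea",
      "password": "secret"
    },
    "subnet4": [ {
      "subnet": "10.1.2.0/24",
      "pools": [ { "pool": "10.1.2.100 - 10.1.2.200" } ]
    } ]
  }
}
```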
Start the Kea server. Send some client traffic toward the Kea server (perfdhcp can be used to send DHCP traffic: sudo perfdhcp -4 -r 1 -R 10 -t 60 -l ens256). Check the log files for any problems. Now connect to each PostgreSQL service on the primary and secondary servers to check whether leases are appearing in both locations as shown (replace kea with the actual name of your Kea database and replace lease4 with lease6 if testing using DHCPv6):
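A sketch of the check, run on each server in turn; address and expire are columns of the lease4 table in the Kea schema:

```
sudo -u postgres psql -d kea
kea=# SELECT address, expire FROM lease4;
```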
Both servers should show the same resulting leases. Now test that the primary role can be moved to the secondary by stopping the PostgreSQL service on the primary:
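On db01:

```
sudo systemctl stop postgresql
```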
On the secondary, promote the service to writeable mode by executing the following:
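One way to do this is with the built-in pg_promote() function (available since PostgreSQL 12); the Debian wrapper pg_ctlcluster 15 main promote should work as well:

```
sudo -u postgres psql -c "SELECT pg_promote();"
```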
Now, modify the Kea configuration, or the /etc/hosts entry (or the hypothetical db03 entry), as discussed earlier, to connect to the NEW primary (db02). Everything should work as before. Switching back to using db01 as the primary is beyond the scope of this document, as it is non-trivial (it would be easier to add a new secondary using the above method).
Test Two: PostgreSQL + PGPool-II HA
This test builds on test one by adding PGPool-II, which would usually be installed on a third server (db03), although that is not a requirement. PGPool-II manages connections to the primary (writeable) server in PostgreSQL HA Replica mode. PGPool-II offers many features, but this test uses only one: load balancing. This feature spreads read queries between db01 and db02, while writes go only to db01. For more information about PGPool-II's capabilities, see its documentation.
Install PGPool-II
First, the pgpool2 package must be installed. The below command should install the latest package from the official PostgreSQL repository added earlier:
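```
sudo apt -y install pgpool2
```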
If all went well, there should be several pgpool instances running. There are some settings that must be changed before PGPool-II can be tested, however.
Configure PGPool-II
To make this simple, only db01 will be configured with PGPool-II. Only slight modification of the configuration file /etc/pgpool2/pgpool.conf is required. Change the following values in that file, as shown:
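The exact values used were not preserved; given the description that follows, the changed settings would have been at minimum something like:

```
listen_addresses = '*'
port = 5433
backend_hostname0 = 'db01'
backend_port0 = 5432
backend_hostname1 = 'db02'
backend_port1 = 5432
```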
These values will cause PGPool-II to listen on all local addresses at port 5433. It will connect to db01 and db02 PostgreSQL servers on port 5432. Restart PGPool-II to load the new configuration:
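```
sudo systemctl restart pgpool2
```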
That's all there is to it! PGPool-II is now ready to accept connections. Check the logs on db01 to be sure that everything is working with PGPool-II using the command sudo grep pgpool /var/log/messages, which should show the last few lines as follows:
Testing PGPool-II
Test that the PGPool-II service is working properly by connecting to all three services: the two Postgres services as described previously, and the PGPool-II server as shown below:
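A sketch of the three connections, assuming the "kea" user and database from the earlier setup; note that PGPool-II listens on port 5433 while PostgreSQL itself listens on 5432:

```
# directly to the two Postgres servers:
psql -h db01 -p 5432 -U kea kea
psql -h db02 -p 5432 -U kea kea

# via PGPool-II on db01:
psql -h db01 -p 5433 -U kea kea
```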
Note that this will probably only work from db01. Now, drop the database testdb created during the earlier initial test by running the command drop database testdb; from the psql session connected to PGPool-II. DROP DATABASE should appear before returning to the prompt. Now check on all three connections (the two connections directly to Postgres on db01 and db02, and the connection to PGPool-II on db01) to make sure the database is gone, using \l to list databases. There should be no testdb shown on any of the three connections!
Testing Kea API commands with PGPool-II backend
Now it is time to test Kea with PostgreSQL streaming replication with a PGPool-II front end. For this testing, additional parts of Kea will use the database: host reservations and the configuration backend will be added. The following configuration will be used:
The path to the hook libraries in the above configuration has been changed to path. Adjust this value to be the path to the real files when performing this testing.
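The original configuration file was not preserved in this copy; a rough sketch of what such a configuration looks like follows. The socket path, credentials, and pool are assumptions, port 5433 routes all three backends through PGPool-II, and "path" stands in for the real hook library directory as noted above; check the ARM for the exact config-control and hook requirements of your Kea version.

```
{
  "Dhcp4": {
    "interfaces-config": { "interfaces": [ "ens256" ] },
    "control-socket": {
      "socket-type": "unix",
      "socket-name": "/tmp/kea4-ctrl-socket"
    },
    "lease-database": {
      "type": "postgresql", "name": "kea", "host": "db01",
      "port": 5433, "user": "kea", "password": "secret"
    },
    "hosts-database": {
      "type": "postgresql", "name": "kea", "host": "db01",
      "port": 5433, "user": "kea", "password": "secret"
    },
    "config-control": {
      "config-databases": [ {
        "type": "postgresql", "name": "kea", "host": "db01",
        "port": 5433, "user": "kea", "password": "secret"
      } ]
    },
    "hooks-libraries": [
      { "library": "path/libdhcp_cb_cmds.so" },
      { "library": "path/libdhcp_host_cmds.so" }
    ]
  }
}
```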
Test the Configuration Backend with PostgreSQL + PGPool-II
First, start Kea with the above configuration. It should start and run, outputting various messages to the terminal. The next step will be to use the cb_cmds hook library to configure a subnet. Also, connect to the Kea database on db02; we will confirm that these commands succeed and are replicated with some simple SQL queries.
See the ARM regarding the usage of the configuration backend hook. A simple method of submitting API commands to Kea is to perform this simple string of commands as the user root, where <file> is the file that contains the API command to be submitted:
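A sketch of that command string; the control socket path is an assumption and must match the control-socket entry in the Kea configuration:

```
cat <file> | socat UNIX:/tmp/kea4-ctrl-socket - | jq .
```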
jq, seen at the end of the command string, is a nice command-line JSON formatting program, as the API returns JSON as well. socat is a command-line program for communicating with Unix sockets. Neither of these is installed on Debian by default, but both can be added easily with apt install jq socat.
First, create the server container that will hold the configuration, with the following JSON in <file> and piped to socat as shown above:
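A sketch of that command; the server-tag "server1" matches the tag referenced in the verification query below, while the description is a placeholder:

```
{
    "command": "remote-server4-set",
    "arguments": {
        "remote": { "type": "postgresql" },
        "servers": [ {
            "server-tag": "server1",
            "description": "test server"
        } ]
    }
}
```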
Check the output on the terminal to confirm command success. On db02, run the SQL query shown to confirm that the data made it across (note the record of id 2 and tag server1):
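A sketch of such a query against the configuration backend's server table (run from a psql session connected to the Kea database on db02):

```
kea=# SELECT id, tag FROM dhcp4_server;
```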
Now the subnet can be added to server1, with the following JSON in a file again piped to socat as discussed previously:
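A sketch of such a command; the subnet id and pool range are assumptions (the subnet itself matches the 10.1.2.0/24 test network), and shared-network-name must be given explicitly (null here, for a top-level subnet):

```
{
    "command": "remote-subnet4-set",
    "arguments": {
        "remote": { "type": "postgresql" },
        "server-tags": [ "server1" ],
        "subnets": [ {
            "id": 1,
            "subnet": "10.1.2.0/24",
            "shared-network-name": null,
            "pools": [ { "pool": "10.1.2.100 - 10.1.2.200" } ]
        } ]
    }
}
```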
Confirm that the subnet was replicated on db02 by running the SQL query as shown:
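A sketch of such queries against the configuration backend's subnet and pool tables (table and column names here follow the Kea config backend schema; verify them against your installed schema version):

```
kea=# SELECT subnet_id, subnet_prefix FROM dhcp4_subnet;
kea=# SELECT start_address, end_address FROM dhcp4_pool;
```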
Note the pool start and end shown above match the pool statement in the JSON from the API command.
Now perform a config-get API command as shown below, using the previously described cat file and pipe to socat method. The subnet should be shown in the output:
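```
{ "command": "config-get" }
```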
Test Host Reservations in PostgreSQL + PGPool-II
The next test is using the API to add a host reservation using the host_cmds hook. This is fairly easy to test. First, add the host using the API command and previously described cat and socat method, with the following JSON in the file to be read with cat:
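A sketch of such a command; the hardware address and reserved IP are placeholder values on the 10.1.2.0/24 test subnet:

```
{
    "command": "reservation-add",
    "arguments": {
        "reservation": {
            "subnet-id": 1,
            "hw-address": "1a:1b:1c:1d:1e:1f",
            "ip-address": "10.1.2.50"
        }
    }
}
```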
Now the host should appear in the PostgreSQL database on db02 if the replication is working correctly. This can be confirmed with the following SQL query:
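A sketch of such a query against the hosts table (the dhcp_identifier column stores the hardware address as binary, so it is hex-encoded here for readability):

```
kea=# SELECT host_id, encode(dhcp_identifier, 'hex') AS hwaddr, ipv4_address FROM hosts;
```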
Finally, reading of host reservations using the host_cmds hook can be confirmed with the API by attempting to retrieve the previously added host reservation using the API call reservation-get with the following JSON:
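A sketch, using the same placeholder hardware address as in the reservation-add example:

```
{
    "command": "reservation-get",
    "arguments": {
        "subnet-id": 1,
        "identifier-type": "hw-address",
        "identifier": "1a:1b:1c:1d:1e:1f"
    }
}
```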
The host reservation should be returned as expected when using the above with the previously described cat and socat method.
Final Test: lease-database in PostgreSQL + PGPool-II
For this test, an additional Kea command-line tool will be used: perfdhcp, which can be used to generate DHCP client traffic for testing on any DHCP server. This will allow testing of the creation of leases during normal server operations using the PostgreSQL database (operating in streaming replication mode behind PGPool-II) for lease storage. Additionally, a separate server instance at 192.168.115.193 will be used to send the traffic. Of note here is that the Kea server and the perfdhcp server both have an additional interface, with IPs of 10.1.2.2 and 10.1.2.6 respectively. This is where the DHCP activity takes place.
First, start perfdhcp with sudo perfdhcp -4 -r 1 -R 10 -t 60 -l ens256, which will simulate 10 clients. Look at the Kea logs to ensure that the clients are receiving leases, or just wait for the reports from perfdhcp every 60 seconds as shown:
Check the database on db02 to ensure that 10 leases have appeared there (replication should have copied the leases to db02) as shown:
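For example, from a psql session connected to the Kea database on db02:

```
kea=# SELECT COUNT(*) FROM lease4;
```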
Re-running the query a few times should show the expiration times incrementing, as the 10 clients perform DHCP over the course of 10 seconds.
Next, the API command lease4-get-all will be used to retrieve leases, which Kea will do using the PostgreSQL lease-database as configured. The command is very simple, as shown below. Use the aforementioned cat and socat method to send it to the Kea server:
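```
{ "command": "lease4-get-all" }
```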
It should return a list of all 10 leases.
Conclusions
Kea can function correctly with PostgreSQL streaming replication High Availability mode, with PGPool-II as a front-end connection point. Other modes (there are many possible combinations) were not tested. Neither performance nor security was considered in this testing; both areas can most likely be greatly improved upon by configuration changes in PostgreSQL, PGPool-II, or both. The goal here was simply to confirm that it is possible to use Kea with such a setup.