Experimenting with PostgreSQL High Availability

Purpose of this test

Kea is designed to connect to database backends and use them for storage of leases, host reservations, and even most of the configuration. Using clustering technology to provide a single source of backend data enables the operator to quickly spin up new virtual machines to provide the database component of this system. Any new VM can simply become a node in the existing cluster, where it will quickly acquire all the data. This facilitates moving the database and Kea virtual machines around on the network by minimizing the configuration overhead for each new VM. Kea supports both MySQL and PostgreSQL database backends. Previous testing confirmed that Galera clusters can function as Kea backends; this test was to determine whether the equivalent functionality for PostgreSQL would also work with Kea.

Confusing terminology

The Postgres project calls its replication feature "High Availability" and uses the terms "pools" and "load-balancing," because this feature is designed to support database high availability. We use those same terms when talking about Kea High Availability, but the Postgres High Availability system does not, by itself, provide high availability for Kea DHCP services.

Summary of Results

Postgres is capable of several High Availability configurations, both with and without PGPool-II. Both arrangements have been confirmed to work in all three Kea backend roles (lease database, host reservations database, and configuration database).

Test Limitations

Only connection and usage testing were performed. This was not a load or stress test, and no performance data was gathered.
This was an experiment to confirm the basic functionality of using a PostgreSQL cluster as a Kea database backend. In general, ISC does not prescribe how to configure your chosen database software, as we are not experts in database software.

Security Considerations

Security was not a concern in this testing as all of the tests were performed with virtual systems that were only accessible to each other on the local host machine. Operators should consider their own network configuration and security requirements; adjustments may be required for a secure configuration.

Test Design

The purpose of the test was to confirm that data provided by Kea to one database node was properly propagated and available to Kea from another node.

In all of the tests, these general parameters were used:

  • All test virtual machines were Debian GNU/Linux 11 (bullseye).
  • PostgreSQL 15 and PGPool-II 4.3.5 were the latest available versions from the official Postgres Debian repository at the time of testing.
  • There were two PostgreSQL virtual machines (db01 and db02).
  • There was one Kea 2.3.6 virtual machine used.
  • There was one additional virtual machine for perfdhcp, used to simulate DHCP clients.
  • The Kea server instance and the perfdhcp server instance both had an additional interface on the 10.1.2.0/24 subnet.
  • Only DHCPv4 was tested but results should be similar for DHCPv6.

The virtual machines called db01 and db02 had entries in the /etc/hosts files on all relevant virtuals so that the names could be used:

$ grep db /etc/hosts
192.168.115.195 db01
192.168.115.196 db02

Unless otherwise noted, db01 was the primary and db02 was the secondary for the purposes of PostgreSQL High Availability.

Configuration Steps

The steps and subsequent tests shown were performed to verify the functionality; you can follow along and repeat the testing if you wish. It is possible that you could then use the resulting configuration in production, with some modification.

Install the PostgreSQL software

There are Debian-maintained versions of PostgreSQL; however, for these tests, the Postgres-maintained version was used.

Add the repository

The repository and verification key for the Postgres packages were added to db01 and db02 as shown below:

$ sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
$ wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

Install the software

PostgreSQL was then installed on db01 and db02 as shown below:

$ sudo apt-get update
$ sudo apt-get -y install postgresql

At this point, PostgreSQL could be used as a single database store with no HA after starting the server (if everything went OK, it should already be running). There is more involved with setting up HA, however.
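
You can confirm that the service is running with systemctl (Debian 11 shown):

$ sudo systemctl status postgresql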

Timezone

When setting up the database, it is very important that the timezones match across all of the servers used for Kea, PostgreSQL, or PGPool-II, particularly when the Kea server and the PostgreSQL server are not the same machine. This is detailed in the ARM. If the timezones do not match, very strange errors occur during simple operations (such as lease renewal). The only supported timezone is 'UTC'; it may be possible to use other timezones, but it is not recommended.

First, go through each server and check whether it is set to UTC, as follows (Debian 11 commands shown; other systems may use some other method):

$ sudo timedatectl
               Local time: Fri 2023-04-14 09:49:02 EDT
           Universal time: Fri 2023-04-14 13:49:02 UTC
                 RTC time: Fri 2023-04-14 13:49:03
                Time zone: America/New_York (EDT, -0400)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

As can be seen above, this server's local time is set to EDT. Modify it to UTC as follows:

$ sudo timedatectl set-timezone UTC

Then check again using the first command. It should now show UTC for the Local time and Time zone lines.

On the PostgreSQL servers (db01 and db02), it is necessary to alter settings in /etc/postgresql/15/main/postgresql.conf, specifically changing the timezone = setting to timezone = 'UTC'. It may also help to change log_timezone = to log_timezone = 'UTC' as well.
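
After editing, the relevant lines in /etc/postgresql/15/main/postgresql.conf should read:

timezone = 'UTC'
log_timezone = 'UTC'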

Test One: PostgreSQL HA

In this mode, there are two PostgreSQL servers configured with built-in streaming replication. Only one of the servers is writeable; the other server is a replica only. The secondary (replica) server can be promoted to primary (writeable) in the event of a failure on the existing primary. According to the documentation, connecting to the writeable server is meant to be accomplished by using DNS records. The documentation doesn't mention it, but it might be possible to use VRRP to control the access as well.

The Kea Configuration

For this test, a host name (resolved via the /etc/hosts entries shown earlier) was used, as shown in the partial config below:

    "lease-database": {
      "type": "postgresql",
      "name": "kea",
      "host": "db01",
      "port": 5432,
      "user": "kea",
      "password": "kea"
    },

One can imagine that if the current db01 failed, then the config above could be changed to connect to db02. Alternatively, a db03 could be added to /etc/hosts, pointing to either a floating IP address or simply edited to point to either db01 or db02's IP address, depending on which was the current primary (writeable) server. This hypothetical db03 could then be the host value in the partial configuration above instead.
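
For example, a hypothetical db03 entry in the /etc/hosts file on the Kea server might look like the below, with the IP address edited to point at whichever server is the current primary:

$ grep db03 /etc/hosts
192.168.115.195 db03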

Configuring PostgreSQL HA

First, some changes need to be made to /etc/postgresql/15/main/postgresql.conf on both servers to enable the replication. The file should be owned by postgres, so you can edit it as the user postgres. All of these lines should exist in the aforementioned configuration file. Alter them to the values shown below on both servers:

listen_addresses = '*'    # listen on all interfaces, not just localhost
wal_level = replica       # write enough WAL information to support replication
max_wal_senders = 10      # maximum number of concurrent replication connections
wal_keep_size = 1024      # minimum WAL (in MB) retained for standby servers
wal_compression = on      # compress full-page images written to the WAL

Note that other values for some of the above fields may yield different performance results.

Next, the replication hosts need to be configured to allow communication amongst themselves. Permission for db02 to connect to postgres as the replication user on db01 needs to be added to the bottom of /etc/postgresql/15/main/pg_hba.conf on both servers, as shown below.

First, disable these lines:

# IPv4 local connections:
#host    all             all             127.0.0.1/32            scram-sha-256
# IPv6 local connections:
#host    all             all             ::1/128                 scram-sha-256
# Allow replication connections from localhost, by a user with the
# replication privilege.
#host    replication     all             127.0.0.1/32            scram-sha-256
#host    replication     all             ::1/128                 scram-sha-256

Any line near the bottom of the file that starts with "host" should, at this point, be disabled.

Then add the below section:

# For streaming replication
host replication nobody 192.168.115.195/24 trust
host replication nobody 192.168.115.196/24 trust
host all all 127.0.0.1/32 trust
host all all 192.168.115.0/24 trust

This is also a good time to allow the kea user to connect from the Kea server to the Kea database. Add the below section as well:

# Allow kea access to kea identified by kea 
local kea kea password
host kea kea 127.0.0.1/32 password
host kea kea ::1/128 password
host kea kea 192.168.115.128/32 password

Note that the final line shows the IP 192.168.115.128/32. This is the IP address of the Kea server in the test.

Potential Security Issues

This is an area where security could be an issue. The trust keyword causes the connection to be allowed with no password. There may be other concerns here as well. This is suitable in a private environment such as our test lab environment, but care should be taken in production.

Now restart the PostgreSQL service on both servers:

$ sudo systemctl restart postgresql
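
Optionally, confirm that the new settings took effect (the output shown assumes the configuration above):

$ sudo -u postgres psql -c 'SHOW wal_level;'
 wal_level 
-----------
 replica
(1 row)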

Configure the database service for replication

On the primary (db01), run the commands shown below to set up the replication slot and add the replication user:

$ sudo su - postgres
postgres@db01:~$ psql template1
template1=# select * from pg_create_physical_replication_slot('db02_nobody_slot');
template1=# select slot_name, slot_type, active, wal_status from pg_replication_slots;
template1=# exit
postgres@db01:~$ createuser -U postgres --replication nobody
postgres@db01:~$ exit
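
The second query above should list the new slot; before db02 connects, it is inactive. Typical output looks something like this (a sketch; exact values may vary):

    slot_name     | slot_type | active | wal_status 
------------------+-----------+--------+------------
 db02_nobody_slot | physical  | f      | 
(1 row)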

On the secondary (db02), the datastore directory needs to be cleared out in preparation for replicating the current state from the primary (db01). The commands shown below accomplish this:

$ sudo systemctl stop postgresql
$ sudo su - postgres
postgres@db02:~$ rm -rv /var/lib/postgresql/15/main/*

Then, again on the secondary, use pg_basebackup to copy the current state from the primary as shown:

postgres@db02:~$ pg_basebackup --pgdata /var/lib/postgresql/15/main --format=p --write-recovery-conf --checkpoint=fast --label=mffb --progress --host=db01 --port=5432 --username=nobody
postgres@db02:~$ exit

Now tell the secondary (db02) how to connect to the primary to replicate the data by adding the below to the bottom of /etc/postgresql/15/main/postgresql.conf:

# Standby
primary_conninfo = 'user=nobody port=5432 host=db01 application_name=db02.nobody'
primary_slot_name = 'db02_nobody_slot'

Now restart PostgreSQL on both servers:
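
$ sudo systemctl restart postgresql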

Check that the logs look okay on both servers with the command sudo tail /var/log/postgresql/postgresql-15-main.log. The db01 server should have a line like this (probably the last line):

2023-04-18 12:02:49.342 UTC [4929] LOG:  database system is ready to accept connections

db02 should have these few lines (probably the final lines):

2023-04-18 12:02:51.804 UTC [4889] LOG:  redo starts at 0/2000028
2023-04-18 12:02:51.804 UTC [4889] LOG:  consistent recovery state reached at 0/2000100
2023-04-18 12:02:51.804 UTC [4886] LOG:  database system is ready to accept read-only connections
2023-04-18 12:02:51.809 UTC [4890] LOG:  started streaming WAL from primary at 0/3000000 on timeline 1

Test that Replication is working

At this point you can test that the replication is working by running these commands on the primary:

$ sudo su - postgres
postgres@db01:~$ psql
postgres=# CREATE DATABASE testdb;
CREATE DATABASE
postgres=# exit
postgres@db01:~$ exit

This newly created testdb should appear on the secondary. Test that theory by performing the following on the secondary:

$ sudo su - postgres
postgres@db02:~$ psql
postgres=# \l testdb
                              List of databases
  Name  |  Owner   | Encoding |   Collate   |    Ctype    | Access privileges 
--------+----------+----------+-------------+-------------+-------------------
 testdb | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
(1 row)

postgres=# exit
postgres@db02:~$ exit

Create the Kea database for use by Kea

Not many additional steps are required to create the database for use by Kea, apart from remembering that this needs to be done on the primary (db01). Simply follow the normal instructions for creating the database as shown in the ARM, with one small difference: because this is PostgreSQL 15, an additional security-related step must be performed when granting the user permission to connect to the database Kea will use, as the defaults were changed in PostgreSQL 15. Add all privileges for the "public" schema to the Kea database as shown below (replace "kea" with the actual database name if different):

$ sudo su - postgres
postgres@db01:~$ psql
postgres=# \c kea
kea=# GRANT ALL ON SCHEMA public TO kea;
kea=# \q
postgres@db01:~$ exit

Until the above is done, it won't be possible to set up the database remotely from the Kea server using the kea-admin command.
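
For reference, the remote initialization from the Kea server looks something like this (a sketch; adjust the credentials, database name, and host to match your environment):

$ kea-admin db-init pgsql -u kea -p kea -n kea -h db01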

Testing Kea with PostgreSQL streaming replication

Configure the Kea server to use postgresql for the lease-database as shown in this simple configuration:

{
  "Dhcp4": {
    "interfaces-config": {
      "interfaces": [ "ens256" ]
    },
    "valid-lifetime": 300,
    "lease-database": {
      "type": "postgresql",
      "name": "kea",
      "host": "db01",
      "port": 5432,
      "user": "kea",
      "password": "kea"
    },
    "subnet4": [
      {
        "subnet": "10.1.2.0/24",
        "pools": [
          {
            "pool": "10.1.2.100 - 10.1.2.254"
          }
        ]
      }
    ],
    "loggers": [
      {
        "name": "kea-dhcp4",
        "output_options": [
          {
            "output": "stdout"
          }
        ],
        "severity": "DEBUG",
        "debuglevel": 99
      }
    ]
  }
}

Start the Kea server. Send some client traffic toward the Kea server (perfdhcp can be used to send DHCP traffic: sudo perfdhcp -4 -r 1 -R 10 -t 60 -l ens256). Check the log files for any problems. Now connect to each PostgreSQL service on the primary and secondary servers to check whether leases are appearing in both locations as shown (replace kea with the actual name of your Kea database and replace lease4 with lease6 if testing using DHCPv6):

$ sudo su - postgres
postgres@db01:~$ psql kea
kea=# select * from lease4;

Both servers should show the same resulting leases. Now test that the primary role can be moved to the secondary by stopping the PostgreSQL service on the primary:

$ sudo systemctl stop postgresql

On the secondary, promote the service to writeable mode by executing the following:

$ sudo su - postgres
postgres@db02:~$ /usr/lib/postgresql/15/bin/pg_ctl promote -D /var/lib/postgresql/15/main

Now, modify the Kea configuration (or the /etc/hosts db03 entry, as discussed earlier) to connect to the NEW primary (db02). Everything should work as before. Switching back to using db01 as the primary is beyond the scope of this document, as it is non-trivial (it would be easier to add a new secondary using the above method).
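
For example, pointing the simple configuration shown earlier at the new primary is just a change to the host value:

    "lease-database": {
      "type": "postgresql",
      "name": "kea",
      "host": "db02",
      "port": 5432,
      "user": "kea",
      "password": "kea"
    },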


Test Two: PostgreSQL + PGPool-II HA

This test builds on test one by adding PGPool-II, which manages connections to the primary (writeable) server in PostgreSQL HA Replica mode. PGPool-II would usually be installed on a third server (db03), although that is not a requirement. PGPool-II offers many features, but this test will only use one: load balancing. This feature will spread read queries between db01 and db02, while writes will go only to db01. For more information about PGPool-II capabilities, see their documentation.

Install PGPool-II

First, the pgpool2 package must be installed. The command below should install the latest package from the official PostgreSQL repository added earlier:

$ sudo apt install pgpool2

If all went well, there should be several pgpool processes running. There are some settings that must be changed before PGPool-II can be tested, however.
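
You can verify this with a quick process listing (the exact set of worker processes varies):

$ ps aux | grep pgpool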

Configure PGPool-II

To make this simple, only db01 will be configured with PGPool-II. Only slight modification of the configuration file /etc/pgpool2/pgpool.conf is required. Change the following values in that file, as shown:

listen_addresses = '*'
port = 5433
backend_hostname0 = 'db01'
backend_port0 = 5432
backend_hostname1 = 'db02'
backend_port1 = 5432

These values will cause PGPool-II to listen on all local addresses at port 5433. It will connect to db01 and db02 PostgreSQL servers on port 5432. Restart PGPool-II to load the new configuration:

$ sudo systemctl restart pgpool2

That's all there is to it! PGPool-II is now ready to accept connections. Check the logs on db01 to be sure that everything is working with PGPool-II using the command sudo grep pgpool /var/log/messages, which should show the last few lines as follows:

Apr 18 08:25:34 db01 pgpool[5597]: 2023-04-18 12:25:34.492: main pid 5597: LOG:  Setting up socket for 0.0.0.0:5433
Apr 18 08:25:34 db01 pgpool[5597]: 2023-04-18 12:25:34.492: main pid 5597: LOG:  Setting up socket for :::5433
Apr 18 08:25:34 db01 pgpool[5597]: 2023-04-18 12:25:34.495: main pid 5597: LOG:  find_primary_node_repeatedly: waiting for finding a primary node
Apr 18 08:25:34 db01 pgpool[5597]: 2023-04-18 12:25:34.500: main pid 5597: LOG:  find_primary_node: primary node is 0
Apr 18 08:25:34 db01 pgpool[5597]: 2023-04-18 12:25:34.500: main pid 5597: LOG:  find_primary_node: standby node is 1
Apr 18 08:25:34 db01 pgpool[5636]: 2023-04-18 12:25:34.501: health_check pid 5636: LOG:  process started
Apr 18 08:25:34 db01 pgpool[5634]: 2023-04-18 12:25:34.501: sr_check_worker pid 5634: LOG:  process started
Apr 18 08:25:34 db01 pgpool[5597]: 2023-04-18 12:25:34.501: main pid 5597: LOG:  pgpool-II successfully started. version 4.3.5 (tamahomeboshi)
Apr 18 08:25:34 db01 pgpool[5597]: 2023-04-18 12:25:34.501: main pid 5597: LOG:  node status[0]: 1
Apr 18 08:25:34 db01 pgpool[5597]: 2023-04-18 12:25:34.501: main pid 5597: LOG:  node status[1]: 2
Apr 18 08:25:34 db01 pgpool[5633]: 2023-04-18 12:25:34.502: pcp_main pid 5633: LOG:  PCP process: 5633 started
Apr 18 08:25:34 db01 pgpool[5635]: 2023-04-18 12:25:34.502: health_check pid 5635: LOG:  process started

Testing PGPool-II

Test that the PGPool-II service is working properly by connecting to all three services: the two Postgres services as described previously, and the PGPool-II server as shown below:

$ psql -U postgres -p 5433 -h db01

Note that this will probably only work from db01. Now, drop the testdb database created during the earlier replication test by running the command drop database testdb; from the psql session connected to PGPool-II. DROP DATABASE should appear before returning to the prompt. Now check all three connections (the two connections directly to Postgres on db01 and db02, and the connection to PGPool-II on db01) to make sure the database is gone, using \l to list databases. There should be no testdb shown on any of the three connections!
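
As an additional check, PGPool-II accepts the pseudo-SQL command show pool_nodes, which reports the status and role of each backend. Run it from the session connected to PGPool-II:

postgres=# show pool_nodes;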

Testing Kea API commands with PGPool-II backend

Now it is time to test Kea with PostgreSQL streaming replication behind a PGPool-II front end. For this testing, additional parts of Kea will use the database: host reservations and the configuration backend will be added. The following configuration will be used:

{
  "Dhcp4": {
    "server-tag": "server1",
    "interfaces-config": {
      "interfaces": [ "ens256" ]
    },
    "control-socket": {
      "socket-type": "unix",
      "socket-name": "/tmp/kea-dhcp4-socket"
    },
    "valid-lifetime": 300,
    "lease-database": {
      "type": "postgresql",
      "name": "kea",
      "host": "db01",
      "port": 5433,
      "user": "kea",
      "password": "kea"
    },
    "hosts-database": {
      "type": "postgresql",
      "name": "kea",
      "host": "db01",
      "port": 5433,
      "user": "kea",
      "password": "kea",
    },
    "config-control": {
      "config-databases": [
        {
          "type": "postgresql",
          "name": "kea",
          "host": "db01",
          "port": 5433,
          "user": "kea",
          "password": "kea"
        }
      ]
    },
    "hooks-libraries": [
      {
        "library": "/path/libdhcp_pgsql_cb.so"
      },
      {
        "library": "/path/libdhcp_cb_cmds.so"
      },
      {
        "library": "/path/libdhcp_host_cmds.so"
      },
      {
        "library": "/path/libdhcp_lease_cmds.so"
      }
    ],
    "loggers": [
      {
        "name": "kea-dhcp4",
        "output_options": [
          {
            "output": "stdout"
          }
        ],
        "severity": "DEBUG",
        "debuglevel": 99
      }
    ]
  }
}

The path to the hook libraries in the above configuration has been changed to path. Adjust this value to be the path to the real files when performing this testing.

Test the Configuration Backend with PostgreSQL + PGPool-II

First, start Kea with the above configuration. It should start and run, outputting various messages to the terminal. The next step will be to use the cb_cmds hook library to configure a subnet. Also, connect to the Kea database on db02. We will confirm that these commands succeed and are replicated with some simple SQL queries.

$ sudo su - postgres
postgres@db02:~$ psql kea
kea=# 

See the ARM regarding the usage of the configuration backend hook. A simple method of submitting API commands to Kea is to run this string of commands as the user root:

# cat <file> | socat UNIX:/tmp/kea-dhcp4-socket -,ignoreeof | jq .

where <file> is the file that contains the API command to be submitted.

jq, seen at the end of the command string, is a nice command-line JSON formatting program, as the API returns are JSON as well. socat is a command-line program for communicating with Unix sockets. Neither of these is installed on Debian by default, but both can be added easily with apt install jq socat.

First, create the server container that will hold the configuration with the following JSON in <file> and piped to socat as shown above:

{
  "command": "remote-server4-set",
  "arguments": {
    "servers": [
      {
        "server-tag": "server1"
      }
    ],
    "remote": {
      "type": "postgresql"
    }
  }
}
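
Submitting this should produce a JSON response in which "result": 0 indicates success; it looks something like the below (a sketch; the exact text and arguments may vary):

{
  "arguments": {
    "servers": [ { "server-tag": "server1" } ]
  },
  "result": 0,
  "text": "DHCPv4 server successfully set."
}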

Check the output on the terminal to confirm command success. On db02, run the SQL query shown to confirm that the data made it across (note the record of id 2 and tag server1):

kea=# select * from dhcp4_server;
 id |   tag   |        description        |        modification_ts        
----+---------+---------------------------+-------------------------------
  1 | all     | special type: all servers | 2023-04-14 14:32:59.773146+00
  2 | server1 |                           | 2023-04-14 19:12:52+00
(2 rows)

Now the subnet can be added to server1 with the following JSON in a file again piped to socat as discussed previously:

{
  "command": "remote-subnet4-set",
  "arguments": {
    "subnets": [
      {
        "id": 1,
        "shared-network-name": "",
        "subnet": "10.1.2.0/24",
        "pools": [
          {
            "pool": "10.1.2.100 - 10.1.2.254"
          }
        ]
      }
    ],
    "remote": {
      "type": "postgresql"
    },
    "server-tags": [
      "server1"
    ]
  }
}

Confirm that the subnet was replicated on db02 by running the SQL query shown:

kea=# select * from dhcp4_pool;
 id | start_address | end_address | subnet_id |    modification_ts     | client_class | require_client_classes | user_context 
----+---------------+-------------+-----------+------------------------+--------------+------------------------+--------------
  1 | 10.1.2.7      | 10.1.2.254  |         1 | 2023-04-14 19:15:30+00 |              | [  ]                   | 
(1 row)

Note the pool start and end shown above match the pool statement in the JSON from the API command.

Now perform a config-get API command as shown below, using the previously described cat and socat method. The subnet should appear in the output:

{
    "command": "config-get"
}

Test Host Reservations in PostgreSQL + PGPool-II

The next test is using the API to add a host reservation using the host_cmds hook. This is fairly easy to test. First, add the host using the API command and previously described cat and socat method, with the following JSON in the file to be read with cat:

{
  "command": "reservation-add",
  "arguments": {
    "reservation": {
      "subnet-id": 1,
      "hw-address": "1a:1b:1c:1d:1e:1f",
      "ip-address": "10.1.2.20"
    }
  }
}

Now the host should appear in the PostgreSQL database on db02 if the replication is working correctly. This can be confirmed with the following SQL query:

kea=# select host_id,dhcp_identifier,dhcp4_subnet_id,ipv4_address from hosts;
 host_id | dhcp_identifier | dhcp4_subnet_id | ipv4_address 
---------+-----------------+-----------------+--------------
       1 | \x1a1b1c1d1e1f  |               1 |    167838228
(1 row)
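
The ipv4_address column stores the address as a 32-bit integer; 167838228 is 10.1.2.20. If preferred, PostgreSQL can decode it in the query itself (a hypothetical variation using the inet type and an ip_address alias):

kea=# select host_id, '0.0.0.0'::inet + ipv4_address as ip_address from hosts;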

Finally, confirm that host reservations can be read through the host_cmds hook by retrieving the previously added reservation with the reservation-get API call and the following JSON:

{
  "command": "reservation-get",
  "arguments": {
    "subnet-id": 1,
    "ip-address": "10.1.2.20"
  }
}

The host reservation should be returned as expected when using the above with the previously described cat and socat method.

Final Test: lease-database in PostgreSQL + PGPool-II

For this test, the Kea command-line tool perfdhcp will be used again; it can generate DHCP client traffic for testing against any DHCP server. This allows testing the creation of leases during normal server operations, using the PostgreSQL database (operating in streaming replication mode behind PGPool-II) for lease storage. Additionally, a separate server instance at 192.168.115.193 will be used to send the traffic. Of note here is that the Kea server and the perfdhcp server both have an additional interface, with IPs of 10.1.2.2 and 10.1.2.6 respectively. This is where the DHCP activity takes place.

First, start perfdhcp with sudo perfdhcp -4 -r 1 -R 10 -t 60 -l ens256, which will simulate 10 clients. Look at the Kea logs to ensure that the clients are receiving leases, or just wait for the reports from perfdhcp every 60 seconds as shown:

sent: 59/59; received: 59/59; drops: 0/0; rejected: 0/0
sent: 119/119; received: 119/119; drops: 0/0; rejected: 0/0
sent: 179/179; received: 179/179; drops: 0/0; rejected: 0/0

Check the database on db02 to ensure that 10 leases have appeared there (replication should have copied the leases to db02) as shown:

kea=# select address,hwaddr,client_id,valid_lifetime,expire,subnet_id from lease4;
  address  |     hwaddr     |    client_id     | valid_lifetime |         expire         | subnet_id 
-----------+----------------+------------------+----------------+------------------------+-----------
 167838218 | \x000c01020307 | \x01000c01020307 |            300 | 2023-04-17 16:43:28+00 |         1
 167838219 | \x000c01020308 | \x01000c01020308 |            300 | 2023-04-17 16:43:29+00 |         1
 167838220 | \x000c01020309 | \x01000c01020309 |            300 | 2023-04-17 16:43:30+00 |         1
 167838221 | \x000c0102030a | \x01000c0102030a |            300 | 2023-04-17 16:43:31+00 |         1
 167838222 | \x000c0102030b | \x01000c0102030b |            300 | 2023-04-17 16:43:32+00 |         1
 167838223 | \x000c0102030c | \x01000c0102030c |            300 | 2023-04-17 16:43:33+00 |         1
 167838224 | \x000c0102030d | \x01000c0102030d |            300 | 2023-04-17 16:43:34+00 |         1
 167838215 | \x000c01020304 | \x01000c01020304 |            300 | 2023-04-17 16:43:35+00 |         1
 167838216 | \x000c01020305 | \x01000c01020305 |            300 | 2023-04-17 16:43:36+00 |         1
 167838217 | \x000c01020306 | \x01000c01020306 |            300 | 2023-04-17 16:43:37+00 |         1
(10 rows)

Re-running the query a few times should show the expiration times incrementing, as the 10 clients perform DHCP over the course of 10 seconds.

Next, the API (lease4-get-all) will be used to retrieve leases, which Kea will do using the PostgreSQL lease-database as configured. The command is very simple, as shown below. Use the aforementioned cat and socat method to send it to the Kea server:

{
    "command": "lease4-get-all"
}

It should return a list of all 10 leases.

Conclusions

Kea can function correctly with PostgreSQL streaming replication High Availability mode with PGPool-II as a front-end connection point. Other modes (there are many possible combinations) were not tested. Neither performance nor security were considered in this testing. Both of these areas can, most likely, be greatly improved upon by configuration changes in PostgreSQL, PGPool-II, or both. The goal here was simply to confirm that it is possible to use Kea with such a setup.