PROXYv2 Support in BIND 9
  • 21 Feb 2024

Overview

PROXYv2 protocol support has been added in the BIND 9.19 development branch. The PROXYv2 protocol is designed with one thing in mind: passing transport connection information (including, but not limited to, source and destination addresses and ports) to a backend system across multiple layers of NAT, TCP, or UDP proxies and load balancers. The protocol achieves its goal by prepending each connection or datagram with a header reporting the other side's connection characteristics. Effectively, from the point of view of the backend (in our case, BIND), it is a controllable way to spoof peer and incoming interface connection information.

With the addition of this feature, BIND can act as a backend to the front-end proxies implementing the PROXYv2 protocol. The list of such proxy implementations includes but is not limited to dnsdist and HAProxy. Many cloud infrastructure providers also implement the PROXYv2 protocol in their in-house front-end software.

The PROXYv2 protocol is supported for all DNS transports currently implemented in BIND, including DNS over UDP and TCP (Do53), DNS over TLS (DoT), and DNS over HTTP(S) (DoH). The same applies to dig, as we wanted to ensure that DNS operators who want to use the PROXYv2 protocol have a reliable tool for diagnosing their deployments. Moreover, dig might be one of the few such tools, if not the only one, that implements PROXYv2 for so many DNS transports.

Different "Flavors" of PROXY

There are currently two versions of the PROXY protocol - the text-based PROXYv1 and the binary PROXYv2. There are also protocols with similar purposes, like Cloudflare's Simple Proxy Protocol (SPP) for UDP. BIND is capable of accepting PROXYv2 only, so whenever we mention the PROXY protocol without a version below, PROXYv2 is implied.

When PROXYv2 is in use, BIND uses the source and destination addresses and ports extracted from PROXYv2 headers instead of the real ones seen by the operating system. With very few exceptions (which we discuss later), from the point of view of BIND these addresses are real: you will see them in the logs, BIND's ACL functionality will use them during matching, and so on. In short, almost all aspects of BIND functionality that need source and destination addresses and ports will use the ones provided via the PROXYv2 protocol. Of course, the addresses of the real endpoints are preserved internally and are used for the actual data exchanges.

The above is done to fulfill the PROXY protocol's goal of filling the backend server's internal structures with the information collected by the front-end proxy, which the server would have been able to get by itself from the operating system if the client were connecting directly to the server instead of via a front-end. That provides a level of transparency that has many architectural benefits, some of which are discussed in detail in the PROXY protocol specification. Let’s discuss them briefly.

Applications for PROXYv2

  1. Firstly, it becomes possible to chain multiple layers of front-ends (like proxies and firewalls) and always present the original connection information (like source and destination IP addresses and ports). With PROXY, the complexity of the forwarding infrastructure in front of BIND does not matter, as it makes it possible to preserve and pass the original information about endpoints through it to the backend. It might consist of just one front-end instance running on the same machine or local network, or be a complex, multi-layer infrastructure with many forwarders.

  2. Secondly, this feature makes it easier to deploy elaborate infrastructures with large front-end farms in front of big backend farms, possibly shared between multiple sites; when using the PROXY protocol, the servers do not need to know routes to the client, only to the closest proxy that forwarded the connection. That provides benefits over the so-called transparent proxies, because using them usually implies that there is only one return path for the data; in cases when both front-end proxies and backend servers support the PROXYv2 protocol, it is easier to provide multiple return paths while preserving the ability to pass the endpoints data to the backends. That might be particularly useful for large DNS resolver operators.

  3. Thirdly, using PROXY eases IPv4 and IPv6 integration; in particular, it is absolutely fine to receive a request over IPv4 and forward it over the chain of intermediates that are connected over IPv6 only (or vice versa). In that case, with proper configuration, the backend server will receive the original endpoint information.

  4. Fourthly, PROXY support allows relatively transparent transport protocol conversion (from the backend server's perspective), including TLS termination. There are front-end implementations that allow transport protocol conversion; for example, it is possible to configure a dnsdist instance to serve DNS over HTTP/2 or DNS over QUIC (starting from version 1.9.X), while the DNS backend server might not have these transports enabled or might not support them at all. Similarly, HAProxy is well known for its HTTP protocol version conversion support and is also often used for TLS termination. However, simply placing such front-ends in front of a backend (e.g. BIND) means losing the original endpoint information. That is exactly the problem that enabling PROXYv2 on both the front-end and the backend can solve. This feature is useful for small and large installations alike. In particular, it allows serving DNS over transports currently not supported by BIND, like QUIC (DNS over QUIC/DoQ), in a very transparent way.

Let's discuss how we can use the PROXYv2 protocol in BIND now with a few examples.

Preserving Connection Information with PROXYv2 in BIND

To demonstrate the use of PROXYv2 in BIND, we should discuss a couple of things about the front-ends we are going to use for demonstration, namely HAProxy and dnsdist, and some details about the PROXYv2 protocol.

The specification advises sending a PROXYv2 header immediately after establishing a TCP connection. In fact, it provides only a few details about UDP (mostly the necessary constant definitions) and is not concerned with using PROXY over TLS.

Regarding PROXY over UDP, most software developers seem to have agreed that a PROXY header should precede the data in each datagram, which is a very logical choice in this case. dnsdist works like this and is one of the few implementations that support PROXYv2 over UDP.
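
To make the binary format more concrete, here is a minimal sketch (in Python, with illustrative addresses and no error handling) of how a PROXYv2 header is laid out and prepended to a UDP datagram. This is only an illustration of the wire format, not how BIND or dnsdist implement it:

import socket
import struct

# Fixed 12-byte PROXYv2 signature
SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"

def proxy_v2_header(src_ip, src_port, dst_ip, dst_port):
    ver_cmd = 0x21    # version 2, PROXY command (0x20 would be a LOCAL header)
    fam_proto = 0x12  # address family AF_INET (IPv4) + protocol DGRAM (UDP)
    addresses = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
                 struct.pack("!HH", src_port, dst_port))
    return SIGNATURE + struct.pack("!BBH", ver_cmd, fam_proto, len(addresses)) + addresses

# The header simply precedes the DNS message in the same datagram:
dns_query = b"..."  # a wire-format DNS query would go here
datagram = proxy_v2_header("192.168.1.5", 57546, "192.168.1.36", 53) + dns_query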

Regarding PROXY over TLS, it is trickier. As noted above, the PROXY protocol specification is mainly concerned with TCP proxies. Since TLS runs over TCP connections, the PROXY protocol can be used on TLS connections just as described in the specification: that is, by sending a PROXY header in front of any data related to the TLS handshake. In this case, the PROXY header itself is, as one could expect, not encrypted. Most of the software implementing PROXY over TLS works like this, including HAProxy.

One can see a possible problem with this approach, as relatively sensitive data is transmitted in clear text over what is expected to be an otherwise secure, encrypted connection. To resolve this problem, the dnsdist authors implemented PROXYv2 slightly differently: instead of sending plain PROXY header data before the TLS handshake over TCP, they decided to send an encrypted PROXY header as the first chunk of data after the handshake. One could argue that this is not described in the specification and, thus, deviates from it. On the other hand, all sensitive data is protected and cannot be collected or analyzed by any intermediaries. It is also worth noting that recent versions of dnsdist (1.8.X, 1.9.0) have started adding support for accepting non-encrypted PROXYv2 protocol messages as well.

BIND and dig, which strive to be useful in all deployment scenarios, support both plain and encrypted modes of the PROXY protocol. As expected, only plain mode is available for non-encrypted DNS transports, while the TLS-based ones support both.

Configuration

Configuring PROXYv2 support in BIND involves two steps: first, enabling PROXYv2 on the listen-on statements for the transports on which BIND should accept it; second, adjusting the related ACLs to determine which clients are allowed to use it.

Here, you can see a couple of examples for the first step:

options {

	# Enable PROXYv2 for Do53 (both TCP and UDP)
	listen-on port 53 proxy plain { any; };

	# Enable proxy for DoT, use encrypted PROXY
	# headers for compatibility with dnsdist
	listen-on port 853 proxy encrypted tls local-tls { any; };

	# Enable proxy for DoH, unencrypted proxy
	# headers for compatibility with HAProxy (and other tools)
	listen-on port 443 proxy plain tls local-tls http local-http-server { any; };
};

On its own, this configuration is not enough to allow BIND to accept PROXYv2. The second step is to set the access control lists (ACLs) associated with PROXYv2, namely allow-proxy and allow-proxy-on.

  • allow-proxy - defines an ACL for the client addresses allowed to send PROXYv2 headers.
  • allow-proxy-on - defines an ACL for the interface addresses allowed to accept PROXYv2 headers.

These are the only ACLs that work with real endpoint addresses and ports - everything else will use the information carried by the PROXYv2 protocol.

One could ask why enabling PROXYv2 via listen-on statements alone is not enough. That is done for security reasons: as mentioned above, the core idea behind PROXYv2 is to provide a way of spoofing source and destination addresses and ports, which has security implications, as a client could make it seem as if a request is coming from someone else.

For example, it is customary, albeit not recommended, to configure BIND to allow recursive queries for clients on the local networks only, while serving zones on public networks. If such a BIND instance allows the PROXY protocol on a public interface, then a remote client could run recursive queries over a public interface, effectively turning the instance into an open resolver. That is only one example that comes to mind, but in general, PROXY allows bypassing other ACLs, too. That, of course, is undesirable and might be unexpected for the operator.

On the other hand, every deployment is different, so it is impossible to provide a default that would fit everyone, especially without sacrificing security. As a result, BIND does not allow PROXY for any clients by default; BIND defaults to the following:

options {
	...
	allow-proxy { none; };
	allow-proxy-on { any; };
	...
};

In other words, by default, the PROXY protocol is not allowed for any client address but is allowed on any interface. For PROXY to be accepted, a request should pass checks by both of the ACLs.

For example, it is possible to allow PROXY for clients on local networks where BIND has network interfaces by configuring it this way:

options {
	...
	allow-proxy { localnets; };
	...
};

As another example, it is possible to allow PROXY for clients on a loop-back interface only by configuring it this way:

options {
	...
	allow-proxy { 127.0.0.1; ::1; };
	...
};

And now a somewhat "inverted" example. Let's imagine a situation where we know that it is safe for BIND to accept the PROXY protocol on a particular interface (192.168.1.10 in this example):

options {
	...
	allow-proxy { any; };
	allow-proxy-on { 192.168.1.10; };
	...
};

Of course, you can configure these two ACLs to match your infrastructure needs exactly. What we do not recommend doing, though, is allowing PROXY on publicly accessible network interfaces due to the security concerns described above.

We should note that enabling PROXYv2 on a listen-on statement will prevent the corresponding listener from accepting "regular" DNS queries that arrive without a PROXYv2 header.
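
If you also need to accept direct, non-proxied queries, one straightforward approach (a sketch; the port numbers and ACLs here are placeholders) is to keep a separate listener for them:

options {
    # Regular Do53 listener for direct clients
    listen-on port 53 { any; };

    # Separate PROXYv2-only listener for front-end proxies
    listen-on port 53000 proxy plain { localnets; };
};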

Now that we have learned enough about PROXYv2 support in BIND, we can provide more examples of using BIND with both dnsdist and HAProxy. Both of these proxying front-ends have unique functionality, and although some of the capabilities overlap, they can also complement each other.

Example: DNS over UDP and TCP, BIND, and dnsdist

Let's start with dnsdist, as it is specifically built to be a front-end for DNS servers or a pool of them. One of the distinctive features of dnsdist is its support for PROXYv2 for DNS over TCP and DNS over UDP (aka DNS over port 53 or Do53). Moreover, it is one of the few software packages that supports PROXYv2 for UDP.

Let's start with a simple example: BIND and dnsdist run on different machines and, thus, listen on different addresses and ports (BIND on 192.168.1.14:53000, dnsdist on 192.168.1.36:53). BIND is configured as a resolver and allows recursive queries from the local network (192.168.1.0/24).

options {
    # Let's enable PROXYv2 support
    listen-on port 53000 proxy plain {
        192.168.1.14;
        127.0.0.1;
        ::1;
    };
    allow-proxy { 
        192.168.1.36; # dnsdist instance address
        127.0.0.1;
        ::1;
    }; 
    allow-recursion { 192.168.1.0/24; };
};

According to the configuration above, only the machine running the dnsdist instance, or anything running locally on the BIND machine itself, can send proxied queries to the BIND instance.

So, the next thing to configure is the dnsdist instance. The simplest configuration for our case will look like this (dnsdist uses Lua for its configuration file):

setLocal("192.168.1.36:53")
newServer({
    address="192.168.1.14:53000",
    useProxyProtocol=true
})

Now, it is time to verify how it works. For that, we will use dig against the dnsdist instance from another machine on the network (192.168.1.5) as follows:

dig @192.168.1.36 A isc.org

It should work similarly over TCP, too:

dig @192.168.1.36 +tcp A isc.org

We should successfully resolve the A record for isc.org, but we are not interested in the output itself, as long as the resolution was successful. The result we are looking for can be found in the BIND instance's log (provided that detailed logging is enabled via -d 5):

...
28-Dec-2023 21:46:18.367 Received a PROXYv2 header from 192.168.1.36#34599 on 192.168.1.14#53000 over UDP: command: PROXY, socket type: SOCK_DGRAM, source: 192.168.1.5#57546, destination: 192.168.1.36#53, TLVs: no
28-Dec-2023 21:46:18.367 query client=0x7fffef646000 thread=0x7ffff07ff680(<unknown-query>): query_reset
28-Dec-2023 21:46:18.367 client @0x7fffef646000 (no-peer): allocate new client
28-Dec-2023 21:46:18.367 client @0x7fffef646000 192.168.1.5#57546: UDP request
28-Dec-2023 21:46:18.367 client @0x7fffef646000 192.168.1.5#57546: using view '_default'
28-Dec-2023 21:46:18.367 client @0x7fffef646000 192.168.1.5#57546: request is not signed
28-Dec-2023 21:46:18.367 client @0x7fffef646000 192.168.1.5#57546: recursion available
…

Here, we can see the PROXYv2 protocol in action. But let's discuss it in more detail.

Firstly, we can see a dump of most of the information in the PROXYv2 header. From the message, we can see the following:

  • 192.168.1.36 (the host with the dnsdist instance) sent a PROXYv2 header to the BIND instance on 192.168.1.14, port 53000;
  • the header contains the source address 192.168.1.5 (that is, the address of the host where we ran dig) and the destination address 192.168.1.36 with port 53 - where the dnsdist instance is listening for queries in our example.

Secondly, we can see that a new client object was allocated for the IP address 192.168.1.5 - the address of the host from which we issued the query against the dnsdist instance via dig, and an address that, according to the configuration above, cannot even send queries to the BIND instance directly.

From the perspective of the BIND instance, it looks as if there is no dnsdist instance in front of it, as the original endpoint information was preserved and passed to BIND by dnsdist. What happened is:

  1. We sent the query to dnsdist as a front-end using dig;
  2. The front-end passed the original source and destination addresses to BIND via the PROXYv2 protocol;
  3. The front-end passed the query to the BIND instance;
  4. The BIND instance extracted and used the endpoint information obtained from PROXY - in particular, the client address 192.168.1.5;
  5. The BIND instance resolved the query (as the allow-recursion ACL allowed recursion for the address of dig's host - 192.168.1.5) and sent the answer back to dnsdist, acting as a front-end;
  6. dnsdist sent the answer back to dig.

This process was transparent for most of the code in BIND, as the original source and destination were preserved for BIND, and dig did not interact with BIND directly.

We could have achieved a similar configuration without PROXYv2, but in this case, the original source and destination addresses would have been lost, so from the point of view of BIND, dnsdist would have been the originator of the queries. The functionality of ACLs in BIND would have been affected by this (among other things).

Using dig with PROXYv2

Now, let's see how address spoofing can be achieved via PROXYv2 using dig; this can be very useful when configuring BIND to use PROXYv2. First, we need to know how to make dig send queries with PROXYv2 headers.

There are two new PROXYv2-related options for that:

+proxy[=src_addr[#src_port]-dst_addr[#dst_port]]
This option instructs dig to send a PROXYv2 header with the given addresses. If they are omitted, a LOCAL PROXYv2 header is sent; such headers do not contain any address information (the real addresses are used by BIND instead). LOCAL headers are often used for ping requests from front-ends to backends.

+proxy-plain[=src_addr[#src_port]-dst_addr[#dst_port]]
This option is meant for encrypted transports and instructs dig to send the PROXYv2 header ahead of any encryption (e.g. for compatibility with HAProxy and other tools). Without it, encrypted PROXYv2 headers are sent by default over encrypted transports, just as dnsdist expects.
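
For example, reusing addresses and ports from this article, the following commands send a LOCAL header over Do53 and a plain header over DoT, respectively (the exact listener configuration these would match is, of course, deployment-specific):

# Send a LOCAL PROXYv2 header (no spoofed addresses)
dig -p 53000 +proxy @127.0.0.1 A isc.org

# Send a plain (unencrypted) PROXYv2 header ahead of the TLS handshake,
# e.g. against a listener configured with "proxy plain tls ..."
dig -p 8530 +tls +proxy-plain=192.168.2.25-192.168.1.36 @127.0.0.1 A isc.org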

Address Spoofing

So, armed with this knowledge, let's run the following command on the same machine where the BIND instance is running (because PROXY is allowed for localhost):

dig -p 53000 +proxy=192.168.2.25-192.168.1.36 @127.0.0.1 A isc.org

In this example, we are passing 192.168.2.25 as a client address via PROXYv2. We will get a response similar to this:

; <<>> DiG 9.19.20-dev <<>> -p 53000 +proxy @127.0.0.1 A isc.org
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 51219
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 08f560936734b0a701000000658efb3b41df9b14a0e51718 (good)
; EDE: 18 (Prohibited)
;; QUESTION SECTION:
;isc.org.			IN	A
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53000(127.0.0.1) (UDP)
;; CLIENT PROXY HEADER: source: 192.168.2.25#0, destination: 192.168.1.36#53
;; WHEN: Fri Dec 29 19:00:43 EET 2023
;; MSG SIZE rcvd: 70

That happened because, from the point of view of BIND, the request arrived from 192.168.2.25 to 192.168.1.36, as we have passed this information via the PROXYv2 protocol, but in our configuration, recursion is allowed only for clients from the network 192.168.1.0/24 (see the configuration above).

We can find the information about it in the BIND log:

…
29-Dec-2023 19:00:43.292 Received a PROXYv2 header from 127.0.0.1#57871 on 127.0.0.1#53000 over UDP: command: PROXY, socket type: SOCK_DGRAM, source: 192.168.2.25#0, destination: 192.168.1.36#53, TLVs: no
29-Dec-2023 19:00:43.292 query client=0x7ffff1b1c000 thread=0x7ffff2fff680(<unknown-query>): query_reset
29-Dec-2023 19:00:43.292 client @0x7ffff1b1c000 (no-peer): allocate new client
29-Dec-2023 19:00:43.292 client @0x7ffff1b1c000 192.168.2.25#0: UDP request
29-Dec-2023 19:00:43.292 client @0x7ffff1b1c000 192.168.2.25#0: using view '_default'
29-Dec-2023 19:00:43.292 client @0x7ffff1b1c000 192.168.2.25#0: request is not signed
29-Dec-2023 19:00:43.292 client @0x7ffff1b1c000 192.168.2.25#0: recursion not available (allow-recursion did not match)
…

So that should give you an overall understanding of how PROXYv2 works in BIND. Please take the time to understand the material above, because what follows assumes a general understanding of how PROXYv2 works in BIND.

Example: Using dnsdist for TLS termination

We should mention that dnsdist can be used with PROXYv2 enabled for other DNS transports as well, as most of them support it. For example, we can add the following to the dnsdist configuration above:

addTLSLocal("0.0.0.0:853", "/path/to/cert.pem", "/path/to/key.pem")

In that case, we can run dig against the dnsdist instance via DNS over TLS (using +tls options). This way, we can make dnsdist do TLS termination for us.
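
For example (a sketch; depending on your setup, certificate verification options such as +tls-ca may also be needed):

dig @192.168.1.36 +tls A isc.org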

We can achieve end-to-end TLS encryption by making BIND listen on TLS with PROXYv2 enabled and configuring dnsdist to connect to it via TLS.

In order to achieve that, we could add something like this to the configuration above:

tls tls-cert {
    ...
    cert-file "/path/to/cert.pem";
    key-file "/path/to/key.pem";
    ...
};

options {
    # Let's enable encrypted PROXYv2 support for dnsdist
    listen-on port 8530 proxy encrypted tls tls-cert {
        192.168.1.14;
        127.0.0.1;
        ::1;
    };
    ...
};

Now, we can use the following to instruct dnsdist to connect to the upstream server (BIND) over TLS:

newServer({
    address="192.168.1.14:8530",
    useProxyProtocol=true,
    checkTCP=true,
    tls="openssl" -- or "gnutls"
})

This way, all traffic that goes to the BIND instance will be encrypted.
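
As a quick check, dig can also talk to such a listener directly with encrypted PROXYv2 headers; a sketch, run on the BIND host itself (where PROXY is allowed in our example configuration), might look like this:

dig -p 8530 +tls +proxy=192.168.1.5-192.168.1.36 @127.0.0.1 A isc.org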

Other Uses of dnsdist

Dnsdist is well known for implementing multiple DNS transports, including HTTP/2 (DNS over HTTPS/DoH), QUIC (DNS over QUIC/DoQ), and HTTP/3 (DNS over HTTP/3/DoH3). BIND does not support the latter two at the time of this writing, but they are available in the alpha versions of dnsdist (1.9.X). Thanks to the availability of PROXYv2 in both programs, it is possible to place dnsdist in front of BIND relatively transparently. See the dnsdist documentation for more details, in particular regarding addDOQLocal and addDOH3Local. We hope to support these transports in BIND eventually, but using dnsdist paired with PROXYv2 support in BIND might be a good alternative for now.

It is quite easy to offload support for DNS over HTTPS to dnsdist as well:

addDOHLocal("0.0.0.0:443", "/path/to/cert.pem", "/path/to/key.pem")

But it should be noted that there is no need to do that, as BIND supports DoH natively.
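
For comparison, a minimal native DoH listener in BIND (a sketch reusing the tls-cert block from above and the built-in default HTTP endpoint) could look like this:

options {
    # Native DoH in BIND, without a front-end or PROXYv2
    listen-on port 443 tls tls-cert http default { any; };
};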

One thing that we want to note before moving on is that, by default, when using PROXYv2 with dnsdist over encrypted transports, you need to use proxy encrypted in BIND's configuration, as this is what dnsdist expects. Until relatively recently, dnsdist would not accept the "standard" plain PROXYv2 protocol headers, but starting from dnsdist version 1.9.X there is support for this (look for the proxyProtocolOutsideTLS option). That contrasts with another popular front-end option: HAProxy.

Examples: HAProxy, BIND, and PROXYv2

HAProxy supports only plain ("standard") PROXYv2 headers, both for sending and receiving, regardless of transport. Being a generic load balancer, HAProxy does not support DNS over port 53 (TCP and UDP), also known as Do53, but it is a good option for load balancing both DNS over TLS and DNS over HTTPS, as well as for TLS termination. In particular, if you already have it deployed for web-server needs and want to serve DNS over HTTPS and DNS over TLS as well, it will work very well - no worse than dnsdist in this case - provided that passing unencrypted PROXYv2 headers is acceptable for you. Furthermore, HAProxy also has HTTP/3 support, though, by default, it seems to be available only in enterprise-oriented editions for now, while the community version might require recompilation in some cases (please see https://www.haproxy.com/blog/how-to-enable-quic-load-balancing-on-haproxy).

As an example, let's say that we want to use HAProxy to do TLS termination for DoH and DoT while having PROXYv2 enabled for the sake of transparency. Then, we should have the following listen-on statements in the configuration file of our BIND instance:

options {
    ...
    listen-on port 53000 proxy plain {
        192.168.1.14;
        127.0.0.1;
        ::1;
    };
    listen-on port 8080 proxy plain tls none http default {
        192.168.1.14;
        127.0.0.1;
        ::1;
    };
    ...
};
Please Notice "proxy plain"

As mentioned above, HAProxy supports only plain PROXYv2 headers.

# DNS over TLS (DoT)
frontend dot-tls
    mode tcp
    # Here we specify "dot" as the ALPN token to be selected.
    bind *:853 v4v6 tfo ssl crt /path/to/cert/full.pem alpn dot
    default_backend dot-server-plain-tcp

backend dot-server-plain-tcp
    mode tcp
    # Address where BIND listens for plain DNS-over-TCP (Do53) requests
    server dot-server 192.168.1.14:53000 send-proxy-v2

# DNS over HTTPS (DoH)
frontend doh-https
    mode http
    # Here we specify ALPN tokens to be selected. It is crucial for DoH to work.
    bind *:443 v4v6 tfo ssl crt /path/to/cert/full.pem alpn h2,http/1.1
    default_backend doh-server-plain-http2

backend doh-server-plain-http2
    mode http
    # Address where BIND listens for unencrypted HTTP/2 requests
    server doh-server 192.168.1.14:8080 proto h2 send-proxy-v2

Providing an encrypted path from the front-end to the backend (as we did in the case of dnsdist) is also possible. For that, we need to add TLS-enabled listeners to the configuration:

tls tls-cert {
    ...
    cert-file "/path/to/cert.pem";
    key-file "/path/to/key.pem";
    ...
};

options {
    ...
    listen-on port 8530 proxy plain tls tls-cert {
        192.168.1.14;
        127.0.0.1;
        ::1;
    };
    listen-on port 44343 proxy plain tls tls-cert http default {
        192.168.1.14;
        127.0.0.1;
        ::1;
    };
    ...
};

After that, we can use a similar configuration for HAProxy - the only difference is that we instruct it to use encryption (if you need to disable TLS certificate verification for testing purposes, use verify none):

# DNS over TLS (DoT)
frontend dot-tls
    mode tcp
    bind *:853 v4v6 tfo ssl crt /path/to/cert/full.pem alpn dot
    default_backend dot-server-bk

backend dot-server-bk
    mode tcp
    server dot-server 192.168.1.14:8530 ssl send-proxy-v2

# DNS over HTTPS (DoH)
frontend doh-https
    mode http
    # Here we specify ALPN tokens to be selected. It is crucial for DoH to work.
    bind *:443 v4v6 tfo ssl crt /path/to/cert/full.pem alpn h2,http/1.1
    default_backend doh-server-bk

backend doh-server-bk
    mode http
    # Address where BIND listens for TLS-encrypted HTTP/2 requests
    server doh-server 192.168.1.14:44343 ssl proto h2 send-proxy-v2

These examples should serve as a good starting point for deploying PROXYv2 using BIND, dnsdist and HAProxy.

Conclusion

This document was meant to guide and inform BIND operators on PROXYv2 protocol support. We hope that it fulfills its purpose and, even more than that, can serve as a guide on how the PROXYv2 protocol works in general when applied in a DNS setting. We have omitted some details, but if you are more interested in the PROXYv2 protocol itself, we suggest you read the relatively short specification. Provided that you are not an implementer, you can safely ignore the implementer-specific parts (section 2).

As we have mentioned before, with the help of the front-ends that support PROXYv2, you can have a very complicated, yet transparent to BIND, network of forwarding entities. That is possible with both dnsdist and HAProxy, as they also support accepting and forwarding PROXYv2. To learn how to make these tools accept PROXYv2, please take a look at setProxyProtocolACL for dnsdist and accept-proxy (that can be added to the bind statement) in the case of HAProxy.
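
As a rough sketch (please consult each tool's documentation for the details), that could look like this:

-- dnsdist: accept PROXYv2 headers from these client networks
setProxyProtocolACL({"192.168.1.0/24", "127.0.0.1/8"})

# HAProxy: accept PROXYv2 on a frontend by adding accept-proxy to its bind line
frontend dot-tls
    mode tcp
    bind *:853 accept-proxy ssl crt /path/to/cert/full.pem alpn dot
    default_backend dot-server-bk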

Despite the fact that we are primarily concerned with BIND, dnsdist, and HAProxy, there are other programs and products that support PROXYv2, including some in-house implementations by cloud infrastructure providers. It should be possible to use BIND with many of them - at least with the ones that closely follow the specification.

Implementing PROXYv2 support for all DNS transports that BIND supports was a major task and required a significant redesign of some of the DNS transports; in particular, it is connected to Stream DNS, BIND's new unified transport for DNS over TCP and DNS over TLS. PROXYv2 support will benefit both large and small installations of BIND and may even allow you to think about your infrastructure in different ways, as transparently passing information about remote peers to backends is a very powerful mechanism. We hope that you will find it useful.

Another thing worth mentioning regarding PROXYv2 support is that, for now, BIND is only capable of receiving PROXYv2 headers; there is no support for sending them when interacting with other servers (e.g. for forwarding). We may address this in future releases if there is demand for the feature.
