[Pdns-users] Master/Slaves in docker containers
christian.tardif at servinfo.ca
Wed May 29 12:51:54 UTC 2019
On 2019-05-29 02:52, frank+pdns--- via Pdns-users wrote:
>> On 29 May 2019, at 06:24, Christian Tardif <christian.tardif at servinfo.ca> wrote:
>> I'm trying to get this to work:
>> I have one master pdns in a docker container with bridge networking on 1 server, plus a slave pdns, also in a docker container with bridge networking, on another server. On the master, I have a zone (until I get it to work) configured as I would with any other DNS server: SOA and NS records (with the real IPs of the master and slave, as I need to reach them from this "external" ip).
> Are both servers in a Docker Swarm network or are they standalone
> servers? If standalone, is migrating to a Swarm network an option? It
> would make things a lot easier, network-wise.
Not really, unfortunately. The hosts where this resides are already
running Kubernetes as Docker containers. So why not install pdns within
Kubernetes then? I can't, because this pdns is meant to be the primary DNS
for the main network, and the nodes need that DNS in order to
actually start Kubernetes... the chicken-and-egg issue...
>> When I do an update on the zone on the master, I see that the slave server receives the NOTIFY, but coming from 172.17.0.1 (from the docker bridge); the slave then tries to get the SOA or NS records for the zone... at 172.17.0.1, which leads to nothing, as this is a NAT address. How can I make the slave unconditionally query the master server (on its real IP) for the SOA of this zone, so that this master/slave setup actually works?
> In general, if you want to use the supermaster functionality when the
> NOTIFYs are coming from a different ip, you'll need to change the ip
> of the master in the domains table of your backend to the "real" ip.
> You could do that using triggers in the database for instance (or have
> a script that you run every minute to update the records).
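The "script that you run every minute" idea quoted above could look like the sketch below, assuming the gsqlite3 backend (the standard PowerDNS `domains` table with a `master` column); the database path and the IPs are illustrative, taken from this thread, and would need adjusting to the real setup:

```python
import sqlite3

# Assumed path to a PowerDNS gsqlite3 backend database; adjust for your setup.
DB_PATH = "/var/lib/powerdns/pdns.sqlite3"

# Address Docker NAT puts on the NOTIFYs vs. the real host IP of the master.
NAT_IP = "172.17.0.1"
REAL_MASTER_IP = "192.168.213.11"

def fix_master_ip(db_path, old_ip, new_ip):
    """Rewrite domains.master from old_ip to new_ip for all SLAVE zones.

    Returns the number of zones updated.
    """
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success
        cur = conn.execute(
            "UPDATE domains SET master = ? WHERE type = 'SLAVE' AND master = ?",
            (new_ip, old_ip),
        )
        updated = cur.rowcount
    conn.close()
    return updated
```

A database trigger firing when the supermaster code inserts a new zone would achieve the same without polling; either way, the slave will afterwards query the rewritten address for SOA checks and zone transfers.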
> However, let's take a step back. Docker does outbound NAT, not 2-way
> NAT. Let's assume on serverA, you run container1 (your master).
> container1 has (local) container ip 172.16.0.10. serverA has public ip
> 10.10.10.10. Your slave runs in container2 (172.17.0.20) on serverB
> (10.10.10.20).
> The NOTIFY that container2 receives should have source ip address
> 10.10.10.10, which is the correct ip, as container2 should use that
> address to reach container1. (Assuming your Docker hosts aren't in
> a/the same Swarm network). If you've told docker to map port 53 (tcp
> and udp) to your containers, then this setup should work.
> Could you describe your setup, describe which ports you've opened and
> where, and where exactly you see the NOTIFY coming from the wrong ip?
Sure I can. My setup, from this point of view, is a plain Docker setup,
using bridge networking.
pdns master is running on host 192.168.213.11, and container ip is
pdns slave is running on host 192.168.213.12, and container ip is
both containers have gateway set to 172.17.0.1, and hosts have gateway
set to 192.168.213.1
Both containers publish udp/53 and tcp/53 (as 0.0.0.0:53), so
basically, I can connect to either of the two, targeting the host IP.
But when I do a zone update on the master container, the docker logs of
the pdns-slave show these two things, for each domain for which it
should be authoritative:
- Received NOTIFY for _this_particular_zone_ from 172.17.0.1 for which
we are not authoritative
- Error resolving SOA or NS for _this_particular_zone_ at: 172.17.0.1:
Query to '172.17.0.1' for SOA of '_this_particular_zone_' produced no
results
This 172.17.0.1 is already strange, as it isn't the IP of either
container. And for sure, 172.17.0.1 won't return anything, as
pdns isn't listening there...
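To confirm what source address the slave side really sees, independent of pdns, one could run a throwaway UDP listener inside the slave container and trigger a NOTIFY from the master (e.g. with pdns_control notify). This is only a diagnostic sketch; the helper name, bind address, and port are made up for illustration:

```python
import socket

def capture_source(bind_addr="0.0.0.0", port=5353, timeout=10.0):
    """Bind a UDP socket and return the source IP of the first datagram.

    Run inside the slave container (with the port published) while the
    master sends its NOTIFY; the address returned here is the same one
    the slave's pdns would see and try to query back.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.bind((bind_addr, port))
    try:
        _data, (src_ip, _src_port) = sock.recvfrom(512)
        return src_ip
    finally:
        sock.close()
```

If this reports 172.17.0.1 too, the rewrite is happening on the slave host's Docker bridge (masquerading on docker0), not on the master side, which narrows down where to look.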