Service discovery is an age-old problem, made ever more complicated by the copious layers of virtualisation and isolation that we use nowadays to run anything. Since humans are better at remembering names than numbers, DNS can be very effective at helping to solve this problem, and indeed a lot of projects already exist in this space. Nevertheless, I have decided to create another one: DD-DNS, or Docker-Dynamic DNS service, which can create and update A records at public DNS providers in real time based on docker labels. In this post I’ll explain the problem I wanted it to solve and compare it to a few other solutions I considered. The code, along with instructions on how to use the tool, is available at https://github.com/wdullaer/dd-dns.
I have a small server at home which runs various services I don’t want to host on a public cloud instance for privacy reasons. Since this is a very low-powered device, everything runs on the metal (no virtualisation), and I have been using docker for isolation almost since it was announced. Originally I used the host IP address along with a port number to address the services, but this caused a number of issues:
- The port numbers became harder and harder to remember as the number of services I was running went up
- Having to type in numbers in a browser wasn’t user-friendly enough for other members of the household
- Not having a proper domain name made it rather hard to run services using HTTPS without triggering security warnings in a browser
It became increasingly clear that I should have individual hostnames for each of my services. Ideally these should be configured automatically and any config required should be put into the docker-compose file I was already using for the containers.
In order to make this work in a context where multiple services are running on the same host, I would need two components:
- A router, which knows which services are currently running and can route traffic to the right one based on the requested hostname. This can be achieved using Traefik or Caddyserver, web servers that can configure themselves automatically based on information from the docker daemon (among other sources). How I wired this up is the subject of another blog post.
- A DNS registrator or server that can automatically configure itself based on information provided by the docker daemon. This is the role that dd-dns aims to fulfill.
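To sketch what this looks like in practice, a service in a docker-compose file would carry one label for the router and one for the DNS side. The Traefik label below uses its real v2 syntax, but the `dd-dns.hostname` label name is only illustrative here; the actual label is documented in the dd-dns README:

```yaml
version: "3"
services:
  gitea:
    image: gitea/gitea
    labels:
      # illustrative label: tells the DNS component which A record to manage
      dd-dns.hostname: "git.example.com"
      # Traefik (the router) matches requests for the same hostname
      traefik.http.routers.gitea.rule: "Host(`git.example.com`)"
```

The appeal of this setup is that the hostname lives next to the rest of the service definition, so adding a new service is a single edit to the compose file.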
Service discovery being an age-old problem, various solutions already exist. As a general rule I try to avoid writing new code, because creating production-ready software is always more work than you think, and even though it’s just a home network, I like this kind of infrastructure plumbing to “just work” without much babysitting. I’m going to present a few of the solutions I considered and highlight why they didn’t meet my needs in the end.
Consul is a household name when it comes to service discovery. HashiCorp makes very fine tools, and Consul is no exception. Consul can do a whole lot of things, but the main part I am interested in is that it can expose registered services through a built-in DNS server.
The idea is that services register themselves with Consul when they start. Services which are not Consul-aware can be wrapped with something like ContainerPilot, which takes care of the registering for them.
Consul will take control over a top-level domain (.consul by default) and generate a domain name for each registered service. The network then needs to be configured so that queries for this TLD are sent to a Consul instance rather than a public DNS server.
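To make this concrete: a service registered as `web` becomes resolvable as `web.service.consul`, and the Consul agent serves DNS on port 8600 by default. On a router running dnsmasq, delegating the TLD is a one-line configuration:

```
# dnsmasq: send queries for *.consul to the local Consul agent's
# DNS interface; everything else resolves as usual
server=/consul/127.0.0.1#8600
```

The agent address is whatever host runs Consul in your network; `127.0.0.1` here assumes it is co-located with dnsmasq.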
Note that nothing here is docker-aware. Since everything happens at the application level, this is a very general solution that can be used regardless of how services are deployed.
The main downside of Consul for my use case was how invasive it is. I would have to deploy Consul (fair enough), wrap most of my services with ContainerPilot, inject a Consul agent into each of them, and mess around with the internals of my router’s DNS server. There are just too many additional moving parts that can break down. Another minor niggle is that you can’t choose what the resulting domain name will look like.
CoreDNS is a very flexible, easy-to-configure DNS server. It has recently been adopted as the internal DNS server for Kubernetes. CoreDNS has a plugin system, which, among other things, allows it to read its zone configuration from various sources such as BIND files or a Kubernetes cluster.
Because it is a fully featured DNS server, integrating it into the network is also quite straightforward: unlike Consul, it can completely replace the built-in DNS server of my router.
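As an illustration, a minimal Corefile (CoreDNS’s configuration file) could serve a home zone from a BIND-style zone file and forward everything else upstream. The zone name, file path, and upstream resolver below are placeholders:

```
# serve the local zone from a zone file
home.example.com {
    file /etc/coredns/db.home.example.com
}

# forward all other queries to a public resolver
. {
    forward . 1.1.1.1
    cache
}
```

The `file`, `forward`, and `cache` directives are standard CoreDNS plugins; the point is that each server block binds a zone to a source of records.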
Surprisingly, there is no plugin which allows CoreDNS to read zone information from a docker daemon. In order to use it in my home setup I would either have to migrate all my services to Kubernetes, or write a docker plugin. I’m not a big fan of Kubernetes to start with, and for a small setup like mine it is complete overkill, so that option went right off the table.
I did start on a docker plugin, using the Kubernetes plugin as a template, but didn’t make much progress in the two evenings I spent on it. I don’t know how to properly describe this, but I couldn’t quite get the shape of the code in my head, whereas I had no such issues with the third approach I was considering.
The main idea behind dd-dns is inspired by Traefik: listen to docker events and, based on container labels, create A records at a public DNS provider. This has a number of nice operational benefits over the other options:
- I don’t have to run an additional DNS server. I can leave that to the professionals at the provider.
- If I make a mistake (DD-DNS crashes), all my other services remain reachable. If I made a mistake that caused CoreDNS to crash, my entire network would be down.
- DD-DNS is just one extra service that needs to run on the host; no changes to any existing services are required.
- Using public DNS providers means I can’t run into any DNS-server-related issues when using a VPN such as WireGuard.
The main downside, of course, is that I have to write it. But docker exposes a rather well-designed API, and the APIs of public DNS providers are a lot easier to deal with than the DNS protocol I would have had to implement inside a CoreDNS plugin.
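The heart of such a tool is a reconciliation step: compute the A records that should exist from the running containers’ labels, diff them against what the provider currently has, and apply the difference on every docker event. Here is a minimal sketch of that logic in Python; the `dd-dns.hostname` label name and the container dictionaries are illustrative, and dd-dns itself is not implemented this way:

```python
from dataclasses import dataclass

LABEL = "dd-dns.hostname"  # illustrative label name, not the real one

@dataclass(frozen=True)
class ARecord:
    name: str
    ip: str

def desired_records(containers, public_ip):
    """Map running containers carrying the label to the A records
    that should exist at the DNS provider."""
    records = set()
    for container in containers:
        hostname = container.get("labels", {}).get(LABEL)
        if hostname and container.get("state") == "running":
            records.add(ARecord(name=hostname, ip=public_ip))
    return records

def reconcile(current, desired):
    """Return (to_create, to_delete) so the provider ends up matching
    the desired state."""
    return desired - current, current - desired
```

Because the diff is computed from scratch each time, a crash or missed event heals itself on the next reconciliation, which is what makes this style of tool low-maintenance.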
Am I claiming that this solution is better than the other tools? Absolutely not. My needs are probably quite niche, and most likely not yours. In fact, I expect to be the only user of DD-DNS, but if someone else finds it useful, all the better.
Special thanks to William Hughes for his input.