# Consul Agent in Docker

This project is a Docker container for [Consul](http://www.consul.io/). It's a slightly opinionated, pre-configured Consul Agent made specifically to work in the Docker ecosystem.

## Getting the container

The container is very small (28MB virtual, based on busybox) and available on the Docker Index:

    $ docker pull progrium/consul

## Using the container

#### Just trying out Consul

If you just want to run a single instance of Consul Agent to try out its functionality:

    $ docker run -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node1 progrium/consul -server -bootstrap

We publish 8400 (RPC), 8500 (HTTP), and 8600 (DNS) so you can try all three interfaces. We also give it a hostname of `node1`. Setting the container hostname is the intended way to name the Consul Agent node.

Our recommended interface is HTTP using curl:

    $ curl localhost:8500/v1/catalog/nodes

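The output will look something like this (the address is illustrative and will vary with your Docker network settings):

    [{"Node":"node1","Address":"172.17.0.2"}]
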
We can also use dig to interact with the DNS interface:

    $ dig @127.0.0.1 -p 8600 node1.node.cluster

However, if you install Consul on your host, you can use the CLI to interact with the containerized Consul Agent:

    $ consul members
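
If the agent is up, you'll see output roughly like the following (the exact columns vary between Consul versions, and the address will differ):

    Node     Address          Status
    node1    172.17.0.2:8301  alive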

#### Testing a Consul cluster on a single host

If you want to start a Consul cluster on a single host to experiment with clustering dynamics (replication, leader election), here is the recommended way to start a three-node cluster.

We're **not** going to start the first node in bootstrap mode, because we want it to provide a stable IP for the other nodes to join.

    $ docker run -d --name node1 -h node1 progrium/consul -server

We can get the container's internal IP by inspecting the container. We'll put it in the env var `JOIN_IP`:

    $ JOIN_IP="$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' node1)"

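It's worth sanity-checking the value before continuing (the address shown is just an example):

    $ echo $JOIN_IP
    172.17.0.2
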
Then we'll start `node2`, which we'll run as the bootstrap (forced leader) node, and tell it to join `node1` using `$JOIN_IP`:

    $ docker run -d --name node2 -h node2 progrium/consul -server -bootstrap -join $JOIN_IP

Now we can start `node3`. Very simple:

    $ docker run -d --name node3 -h node3 progrium/consul -server -join $JOIN_IP

That's a three-node cluster. Notice we've also named the containers after their internal hostnames / node names. At this point, we should kill and restart `node2` without bootstrap mode, since otherwise it will always insist on being the leader.

    $ docker rm -f node2
    $ docker run -d --name node2 -h node2 progrium/consul -server -join $JOIN_IP

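Since we haven't published any ports yet, one quick way to verify the cluster formed is to run the CLI inside one of the containers (this assumes your Docker version supports `docker exec`):

    $ docker exec node1 consul members
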
We now have a real cluster running on a single host. We still haven't published any ports for general access, but we can use that as an excuse to run a fourth agent node in "client" mode. This means it doesn't participate in the consensus quorum, but can still be used to interact with the cluster.

    $ docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp -h node4 progrium/consul -join $JOIN_IP

Now we can interact with the cluster on those published ports and, if you want, play with killing, adding, and restarting nodes to see how the cluster handles it.
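
For example, the HTTP status endpoints are handy for watching leader election while you kill and restart nodes (the addresses shown are illustrative):

    $ curl localhost:8500/v1/status/leader
    "172.17.0.2:8300"
    $ curl localhost:8500/v1/status/peers
    ["172.17.0.2:8300","172.17.0.3:8300","172.17.0.4:8300"]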

#### Running a real Consul cluster in a production environment

Setting up a real cluster on separate hosts is very similar to our single-host cluster setup process. However, we're going to pass the `-advertise` flag with the machine's external IP. This IP should be on a private network; otherwise you'll need more advanced measures such as encryption. You'll also need to publish all the ports on this interface, including the internal Consul ports (8300, 8301, 8302).

Assuming we're on a host with a private IP of 10.0.1.1, we can start the first host agent:

    $ docker run -d -h node1 \
        -p 10.0.1.1:8300:8300 \
        -p 10.0.1.1:8301:8301 \
        -p 10.0.1.1:8302:8302 \
        -p 10.0.1.1:8400:8400 \
        -p 10.0.1.1:8500:8500 \
        -p 10.0.1.1:8600:53/udp \
        progrium/consul -server -advertise 10.0.1.1

On the second host, we'd run the same thing, but pass `-bootstrap` and a `-join` pointing at the first node's IP. Let's say the private IP for this host is 10.0.1.2:

    $ docker run -d -h node2 \
        -p 10.0.1.2:8300:8300 \
        -p 10.0.1.2:8301:8301 \
        -p 10.0.1.2:8302:8302 \
        -p 10.0.1.2:8400:8400 \
        -p 10.0.1.2:8500:8500 \
        -p 10.0.1.2:8600:53/udp \
        progrium/consul -server -advertise 10.0.1.2 -bootstrap -join 10.0.1.1

And the third host, with an IP of 10.0.1.3:

    $ docker run -d -h node3 \
        -p 10.0.1.3:8300:8300 \
        -p 10.0.1.3:8301:8301 \
        -p 10.0.1.3:8302:8302 \
        -p 10.0.1.3:8400:8400 \
        -p 10.0.1.3:8500:8500 \
        -p 10.0.1.3:8600:53/udp \
        progrium/consul -server -advertise 10.0.1.3 -join 10.0.1.1

Once the third host is running, go back to the second host, kill its container, and run it again exactly as before but without the `-bootstrap` flag. You'll then have a full cluster running in production on a private network.
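
On the second host, that's the same run command minus `-bootstrap`. Since we didn't pass `--name` in these examples, grab the container ID from `docker ps` first (shown here as a placeholder):

    $ docker rm -f <node2-container-id>
    $ docker run -d -h node2 \
        -p 10.0.1.2:8300:8300 \
        -p 10.0.1.2:8301:8301 \
        -p 10.0.1.2:8302:8302 \
        -p 10.0.1.2:8400:8400 \
        -p 10.0.1.2:8500:8500 \
        -p 10.0.1.2:8600:53/udp \
        progrium/consul -server -advertise 10.0.1.2 -join 10.0.1.1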

## Opinionated Configuration

#### DNS

This container was designed assuming you'll be using it for DNS on your other containers. So it listens on port 53 inside the container to be more compatible and accessible via linking. It also has recursive DNS queries enabled, using the Google nameservers.
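
For example, to point another container at the agent for DNS, you can pass the agent container's bridge IP via Docker's `--dns` flag (a sketch, reusing the `node1` container and `JOIN_IP` variable from the single-host cluster example above):

    $ docker run --dns $JOIN_IP busybox nslookup node1.node.cluster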

It also assumes DNS is a primary means of service discovery for your entire system. It uses `cluster` as the top-level domain instead of `consul`, simply as a more general and accurate naming scheme.
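
That means services resolve under `.service.cluster`. For instance, assuming a service named `redis` has been registered and the DNS port is published as in the earlier examples:

    $ dig @127.0.0.1 -p 8600 redis.service.cluster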

#### Runtime Configuration

Although you can extend this image to add configuration files defining services and checks, this container was designed for environments where services and checks can be configured at runtime via the HTTP API.

It's recommended that you keep your check logic simple, for example using inline `curl` or `ping` commands. Otherwise, keep in mind that the default shell is `ash`.
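
As a sketch, registering a hypothetical `redis` service with an inline ping check via the agent's HTTP API might look like this (the service name, port, and check command are all illustrative):

    $ curl -X PUT localhost:8500/v1/agent/service/register \
        -d '{"Name": "redis", "Port": 6379,
             "Check": {"Script": "ping -c1 127.0.0.1 >/dev/null", "Interval": "10s"}}'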

## Sponsor

This project was made possible thanks to [DigitalOcean](http://digitalocean.com).

## License

BSD