Installing Consul On Ubuntu

By Bill Ward | July 14, 2017

Consul is a service discovery solution that also provides failure detection, key/value storage, and datacenter awareness out of the box. I use it in a lot of projects over older solutions like ZooKeeper because, in my humble opinion, it is much more robust. In this post we will walk through installing a basic Consul cluster on Ubuntu.

Introduction

We will be configuring a three-node Consul cluster. Consul needs a quorum (a majority of servers) in order to function. With a three-node cluster we can survive one node failing and still have Consul up. With a five-node cluster we could survive a two-node failure. More than five server nodes is not recommended because the gossip protocol (Serf) that Consul uses becomes too noisy and affects performance. How Consul handles membership (Serf) and leader election (Raft) is really amazing, and I highly recommend reading more about both.
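As a quick rule of thumb, quorum is a majority of the server count, i.e. floor(N/2) + 1:

Servers  Quorum  Failures Tolerated
1        1       0
3        2       1
5        3       2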

For this tutorial you will need three virtual machines with Ubuntu installed and updated. I also recommend that you have static IP’s assigned to them and that some sort of name resolution is enabled on them (i.e. - /etc/hosts or DNS).

Name       IP
consul-01  192.168.1.30
consul-02  192.168.1.31
consul-03  192.168.1.32
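If you are using /etc/hosts for name resolution, each node's file would include entries like:

192.168.1.30 consul-01
192.168.1.31 consul-02
192.168.1.32 consul-03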

Make sure each node can ping the other two and we should be good to move on.
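A quick way to test this from each node, assuming the host names above resolve:

$ for h in consul-01 consul-02 consul-03; do ping -c 1 "$h"; done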

Consul Service Configuration

The following steps need to be completed on each server.

Log in and sudo to root. Make sure you have unzip installed.

$ sudo su -
# apt install unzip -y

Go to the Consul download page and copy the link location for the Linux 64-bit zip to your clipboard. Next, go to your terminal and wget the link to download the zip file. The link may have some extra characters at the end; delete everything after .zip before you hit Enter.

# wget https://releases.hashicorp.com/consul/0.8.5/consul_0.8.5_linux_amd64.zip
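Optionally, verify the download. HashiCorp publishes a SHA256SUMS file alongside each release, so something like the following should work:

# wget https://releases.hashicorp.com/consul/0.8.5/consul_0.8.5_SHA256SUMS
# grep linux_amd64 consul_0.8.5_SHA256SUMS | sha256sum -c -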

Unzip the archive and move the Consul binary to /usr/local/bin.

# unzip consul_0.8.5_linux_amd64.zip
# mv consul /usr/local/bin
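A quick sanity check that the binary is in place; it should print the version you downloaded (v0.8.5 here):

# consul version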

SystemD Service Configuration

We are going to run Consul as a SystemD service. On consul-01 create the following file:

/etc/systemd/system/consul.service

[Unit]
Description=Consul
Documentation=https://www.consul.io/

[Service]
ExecStart=/usr/local/bin/consul agent -server -bootstrap-expect=3 -ui -data-dir=/tmp/consul -node=consul-01 -bind=192.168.1.30 -config-dir=/etc/consul.d/
ExecReload=/bin/kill -HUP $MAINPID
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

For consul-02

[Unit]
Description=Consul
Documentation=https://www.consul.io/

[Service]
ExecStart=/usr/local/bin/consul agent -server -bootstrap-expect=3 -join=192.168.1.30 -data-dir=/tmp/consul -node=consul-02 -bind=192.168.1.31 -config-dir=/etc/consul.d/
ExecReload=/bin/kill -HUP $MAINPID
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

And consul-03

[Unit]
Description=Consul
Documentation=https://www.consul.io/

[Service]
ExecStart=/usr/local/bin/consul agent -server -bootstrap-expect=3 -join=192.168.1.30 -data-dir=/tmp/consul -node=consul-03 -bind=192.168.1.32 -config-dir=/etc/consul.d/
ExecReload=/bin/kill -HUP $MAINPID
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Notice that consul-02 and consul-03 have -join=192.168.1.30. This tells these two to join the cluster by pointing them at the first server we created, consul-01. Also, each unit's -bind=192.168.1.x matches the IP of the server being configured, and all three servers set -bootstrap-expect=3 so that Consul waits for three servers before performing its initial leader election.

On all servers run this command:

# mkdir /etc/consul.d/

This directory is for .json files that will hold any extra configuration you might need in the future.
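For example, a hypothetical service definition for a web server running on a node could be dropped in as /etc/consul.d/web.json (the service name and port here are just placeholders):

{
  "service": {
    "name": "web",
    "port": 80
  }
}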

Configure the Consul UI on consul-01

On consul-01 create the following file:

/etc/consul.d/ui.json

{
  "addresses": {
    "http": "0.0.0.0"
  }
}

This will make the HTTP API (and therefore the UI) accept connections on any IP address on the system instead of the default of 127.0.0.1.

Start the cluster

Starting with consul-01, start and enable the service on each server:

# systemctl daemon-reload
# systemctl start consul.service
# systemctl enable consul.service
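If anything looks off on a node, check the service status and logs:

# systemctl status consul.service
# journalctl -u consul.service -f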

After you start the services on all three servers you can run the below command on any of the servers to see if everything is working correctly:

# consul members
Node       Address            Status  Type    Build  Protocol  DC
consul-01  192.168.1.30:8301  alive   server  0.8.5  2         dc1
consul-02  192.168.1.31:8301  alive   server  0.8.5  2         dc1
consul-03  192.168.1.32:8301  alive   server  0.8.5  2         dc1
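You can also confirm that the servers have elected a leader by querying the status endpoint on any node (port 8300 is Consul's internal server RPC port; the leader here happens to be consul-01):

# curl http://127.0.0.1:8500/v1/status/leader
"192.168.1.30:8300"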

Congratulations! You now have your very own consul cluster. If you would like to access your cluster from your development machine (and you should), then read on.

Development Machine Install

To be able to access and manage your cluster from your dev machine, download the binary and move it to /usr/local/bin just like we did on the servers. Next add the following line to your ~/.bash_profile file:

export CONSUL_HTTP_ADDR="192.168.1.30:8500"

Source the file (source ~/.bash_profile) and run consul members. You should see the same output as on the servers.
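As a quick end-to-end smoke test, you can write and read a key in the key/value store (the key and value here are arbitrary):

$ consul kv put foo bar
Success! Data written to: foo
$ consul kv get foo
bar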

You can also view the UI by going to the following URL:

http://192.168.1.30:8500

If you have DNS set up, you can use the DNS name instead of the IP.

I hope you enjoyed this post. If it was helpful or if it was way off then please comment and let me know.
