Deploying a Consul cluster using Terraform and Puppet

By Bill Ward | June 1, 2017

After reading The DevOps Handbook, it became obvious to me that the current trend in IT is defining your infrastructure as code. This gives you infrastructure that you can build and destroy at will. In this article we define a Consul cluster in code using HashiCorp’s Terraform.

Cluster Design

Terraform is great at provisioning your infrastructure but leaves configuration management to other solutions. The ultimate goal of this solution is to deploy a single Consul server and multiple Consul agents. To do this I also provision a Puppet server from scratch, so that it can configure our Consul server and agents automatically after Terraform provisions them. If you would like to know more about Terraform, may I suggest The Terraform Book by James Turnbull, who also happens to be a noted Puppet book author.

Diagram

The diagram below shows what we will be configuring:

consul cluster diagram

We have the Terraform server, where Terraform is installed; I actually run this as an OpenStack instance. There is also a GitLab server where I commit all the Terraform files, which gives us version control and lets us quickly roll back changes. For the cluster that gets provisioned by Terraform we have the Puppet server, a Consul server, and one or more Consul clients.
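To give a feel for that workflow, here is a rough sketch (the file name main.tf is my naming convention, not something the cluster requires); rolling back a bad change is just a git revert followed by another apply:

git add main.tf site.pp puppet-install.sh puppet-bootstrap.sh
git commit -m "provision consul cluster"
git push origin master

# undo the last change and converge the infrastructure back
git revert HEAD
terraform apply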

Terraform Code

Here is the code I used to deploy the cluster:

# puppet server
#
resource "openstack_compute_instance_v2" "puppet" {
  name = "puppet"
  image_name = "ubuntu-16-04"
  availability_zone = "nova"
  flavor_id = "cfe2cfe1-0731-4fbc-99b7-076b8a2d4724"
  key_pair = "oskey"
  security_groups = ["default","puppet"]
  network {
    name = "provider"
  }

  connection {
    user        = "ubuntu"
    private_key = "${file("file/oskey.pem")}"
    agent       = false
  }

  provisioner "file" {
    source = "site.pp"
    destination = "/tmp/site.pp"

    connection {
      user        = "ubuntu"
      private_key = "${file("file/oskey.pem")}"
      agent       = false
    }
  }

  provisioner "file" {
    source = "puppet-install.sh"
    destination = "/tmp/puppet-install.sh"

    connection {
      user        = "ubuntu"
      private_key = "${file("file/oskey.pem")}"
      agent       = false
    }
  }

  provisioner "remote-exec" {
    inline = [
      "chmod a+x /tmp/puppet-install.sh",
      "/tmp/puppet-install.sh ${openstack_compute_instance_v2.puppet.network.0.fixed_ip_v4}",
    ]
  }  

}

# consul servers
#
variable "count" {
  default = 1
}

resource "openstack_compute_instance_v2" "consul" {
  count = "${var.count}"
  depends_on = ["openstack_compute_instance_v2.puppet"]
  name = "${format("consul-%02d", count.index+1)}"
  image_name = "centos-7"
  availability_zone = "nova"
  flavor_id = "6605094c-a5ee-454b-beec-3c4ba723c85d"
  key_pair = "oskey"
  security_groups = ["default","consul","nomad"]
  network {
    name = "provider"
  }

  connection {
    user        = "centos"
    private_key = "${file("file/oskey.pem")}"
    agent       = false
  }

  provisioner "file" {
    source = "puppet-bootstrap.sh"
    destination = "/tmp/puppet-bootstrap.sh"

    connection {
      user        = "centos"
      private_key = "${file("file/oskey.pem")}"
      agent       = false
    }
  }

  provisioner "remote-exec" {
    inline = [
      "chmod a+x /tmp/puppet-bootstrap.sh",
      "/tmp/puppet-bootstrap.sh ${openstack_compute_instance_v2.puppet.network.0.fixed_ip_v4}",
    ]
  }

}


# provision consul clients
#
variable "client_count" {
  default = 3
}

resource "openstack_compute_instance_v2" "consul_client" {
  count = "${var.client_count}"
  depends_on = ["openstack_compute_instance_v2.consul"]
  name = "${format("consul-client-%02d", count.index+1)}"
  image_name = "centos-7"
  availability_zone = "nova"
  flavor_id = "e80482ce-8dfe-4727-86e2-a09a6bc53994"
  key_pair = "oskey"
  security_groups = ["default","consul","nomad"]
  network {
    name = "provider"
  }

  connection {
    user        = "centos"
    private_key = "${file("file/oskey.pem")}"
    agent       = false
  }

  provisioner "file" {
    source = "puppet-bootstrap.sh"
    destination = "/tmp/puppet-bootstrap.sh"

    connection {
      user        = "centos"
      private_key = "${file("file/oskey.pem")}"
      agent       = false
    }
  }

  provisioner "remote-exec" {
    inline = [
      "chmod a+x /tmp/puppet-bootstrap.sh",
      "/tmp/puppet-bootstrap.sh ${openstack_compute_instance_v2.puppet.network.0.fixed_ip_v4}",
    ]
  }
  
}

output "puppet-server-address" {
  value = "${openstack_compute_instance_v2.puppet.network.0.fixed_ip_v4}"
}

output "consul-server-addresses" {
  value = "${openstack_compute_instance_v2.consul.*.network.0.fixed_ip_v4}"
}

output "consule-agent-addresses" {
  value = "${openstack_compute_instance_v2.consul_client.*.network.0.fixed_ip_v4}"
}

output "consul-ui-url" {
  value = "http://${openstack_compute_instance_v2.consul.network.0.fixed_ip_v4}:8500/ui"
}

The first block configures the Puppet server on an Ubuntu instance. It creates an openstack_compute_instance_v2 running Ubuntu 16.04 using a large flavor (I used the flavor GUID from my OpenStack cluster), which is a 2x4 (two vCPUs, 4 GB RAM) instance. I also set the key_pair and security_groups. You can use a Terraform provider other than OpenStack; just know that your options will vary slightly here.
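If you are following along on your own OpenStack cluster your flavor GUIDs will differ; the openstack CLI will list them, and from there it is the standard Terraform workflow:

# find the flavor GUIDs for your cluster
openstack flavor list

# preview and then build the cluster
terraform plan
terraform apply

# print any of the outputs defined at the bottom of the file
terraform output puppet-server-address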

File Provisioners

I have several file provisioners that copy a bootstrap script, puppet-install.sh, to the server; that script installs Puppet along with some modules we will need. We also copy over the Puppet code (site.pp) so that our Consul servers are configured automatically.
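The puppet-install.sh script itself is not shown in this post, but a minimal sketch of what it needs to do might look like this; the repo package, module names, and autosigning setup are my assumptions, so adjust them to your environment:

#!/bin/bash
# $1 is the puppet server's own IP, passed in by the remote-exec provisioner

sudo su <<EOF
echo "$1 puppet" >> /etc/hosts
wget https://apt.puppetlabs.com/puppetlabs-release-pc1-xenial.deb
dpkg -i puppetlabs-release-pc1-xenial.deb
apt-get update && apt-get install -y puppetserver
# autosign agent certificates so nodes can check in unattended
echo "*" > /etc/puppetlabs/puppet/autosign.conf
# modules used by site.pp (an exportfact module is needed as well)
/opt/puppetlabs/bin/puppet module install puppetlabs-puppetdb
/opt/puppetlabs/bin/puppet module install KyleAnderson-consul
cp /tmp/site.pp /etc/puppetlabs/code/environments/production/manifests/
systemctl enable puppetserver && systemctl start puppetserver
EOF

The site.pp we copy over defines the node configuration that Puppet applies: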

node 'puppet.openstacklocal' {
  include puppetdb
  include puppetdb::master::config
}

node 'consul-01.openstacklocal' {
  include exportfact

  class { '::consul':
    config_hash => {
      'bootstrap_expect' => 1,
      'client_addr'      => '0.0.0.0',
      'data_dir'         => '/opt/consul',
      'datacenter'       => 'us-central-1',
      'log_level'        => 'INFO',
      'node_name'        => 'server',
      'server'           => true,
      'ui_dir'           => '/opt/consul/ui',
    }
  }

  exportfact::export { 'consul_server_ip':
    value => $ipaddress,
    category => "consul"
  }
}

node /^consul-client-\d{2}\.openstacklocal$/ {
  include exportfact

  exportfact::import { 'consul': }

  class { '::consul':
    config_hash => {
      'data_dir'   => '/opt/consul',
      'datacenter' => 'us-central-1',
      'log_level'  => 'INFO',
      'node_name'  => "${fqdn}",
      'retry_join' => [$consul_server_ip],
    }
  }
}

This uses the exportfact module to save the IP address of the Consul server as a fact on every node that imports it.
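If you want to confirm the fact actually made it across, you can query Facter on a client after its first Puppet run (this assumes exportfact drops the imported fact somewhere Facter resolves, which is how the module is meant to work):

/opt/puppetlabs/bin/facter consul_server_ip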

After our Puppet server is fully configured, Terraform will then begin to configure our Consul server.

Terraform Dependencies

When this cluster is built we need to make sure that the servers are created in a certain order.

puppet server -> consul server -> consul client(s)

To do this we utilize Terraform's depends_on argument.

resource "openstack_compute_instance_v2" "consul" {
  count = "${var.count}"
  depends_on = ["openstack_compute_instance_v2.puppet"]

Here we are telling Terraform that the Puppet server must be built and provisioned before the Consul server, because Puppet needs to be up and running so it can automatically configure the Consul servers. The Consul clients have a similar dependency on the Consul server, because we have to know the Consul server's IP address before we can configure the clients to join it.
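You can sanity-check the ordering Terraform derives from these dependencies with its built-in graph command (Graphviz's dot is assumed to be installed):

terraform graph | dot -Tpng > cluster-graph.png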

Consul Server and Clients

The Consul server and clients have similar provisioning code. For the clients we have a bash script, run by a Terraform provisioner, that installs Puppet and runs it:

#!/bin/bash

sudo su <<EOF
echo "$1	puppet" >> /etc/hosts
rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
yum install puppet-agent unzip -y
/opt/puppetlabs/bin/puppet agent --test
echo "First Puppet Run Complete"
/opt/puppetlabs/bin/puppet agent --test
echo "Second Puppet Run Complete"
EOF

Since we are using the exportfact module, we have to run Puppet twice. The first run retrieves the exported fact and saves it on the local node; the second run uses that fact to run the Consul Puppet module and configure the Consul client.
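When the second run has completed on every node, you can verify that the cluster actually formed; both of these work against a stock Consul install:

# from any node in the cluster
consul members

# or ask the server's HTTP API for the node catalog
curl http://consul-01:8500/v1/catalog/nodes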

Why go through all this?

So I know there is a Terraform module for Consul, but I decided to go this route in order to have a little more control over my cluster. This also allows me to use Puppet to configure Consul services, watches, and more. The biggest benefit is the ability to have my infrastructure coded and checked into source control. If I were to use this in a production environment, it would be a simple matter to stand up a production-like development environment from the same code, and I could do it in minutes rather than provisioning and configuring every server by hand. Sure, there is other automation out there that handles this kind of thing (like OpenStack Heat), but it is not provider agnostic. With Terraform I can build the same cluster on AWS or GCE with a minimum of effort.

I hope you enjoyed this post. If it was helpful, or if it was way off, please comment and let me know.
