Puppet in Docker as a Service

By Bill Ward | May 25, 2017

Puppet is a great tool for configuration management. The biggest problem I have found with Puppet is scaling it to a large (1000+ node) production environment. With Puppet Enterprise you have to move from a single monolithic server to a model that splits the Puppet services out onto different servers. This adds a great deal of complexity to your Puppet environment and makes it error prone and, frankly, pretty fragile. Not the most ideal solution for a production environment. Luckily, Puppet has caught on to the DevOps wave and started a project called Puppet-in-Docker, which packages the Puppet services into Docker containers. This is just what we need to create a Puppet-as-a-Service deployment model where we can quickly and repeatably create production-like Puppet environments.

Overview

The goal of this project is to automatically provision and configure a Puppet-in-Docker environment consisting of multiple Puppet servers running in Docker containers. These containers will run on a Consul/Nomad cluster, which provides elasticity as demand grows. Nomad will spin up multiple Puppet server containers, each reachable at the IP address of the Nomad agent it runs on and a random port per container. We then load balance these with something like HAProxy: it listens on its public IP on port 8140 (the Puppet port) and routes each request to one of the Puppet server containers on our Nomad cluster.
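To make that concrete, here is a minimal sketch of what the HAProxy piece might look like. The backend addresses and ports below are made-up placeholders; in practice they would be the Nomad agent IPs and the dynamic ports Nomad assigns to each container (or be populated automatically by something like consul-template). Note the TCP mode: Puppet agents authenticate with client SSL certificates, so the proxy has to pass the TLS connection straight through rather than terminate it.

# Sketch only: append a TCP passthrough frontend/backend to haproxy.cfg.
# The server addresses below are example values, not real allocations.
sudo tee -a /etc/haproxy/haproxy.cfg <<'EOF' > /dev/null
frontend puppet
    bind *:8140
    mode tcp
    default_backend puppet_servers

backend puppet_servers
    mode tcp
    balance roundrobin
    server puppet1 10.0.0.11:25001 check
    server puppet2 10.0.0.12:25002 check
EOF

sudo systemctl reload haproxy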

Storage

The problem here is that the multiple containers need a way of sharing data. Puppet Server stores the SSL certificates of the Puppet agents you approve in the /etc/puppetlabs/puppet/ssl directory. With multiple containers we need to put this directory on storage that any Puppet server container can reach from any Nomad agent server. The solution I went with was Ceph because I wanted this solution to be production ready. This gives us distributed storage and ensures that our data will be safe.

The same approach extends to the /etc/puppetlabs/code directory, so that each container has access to the configuration data for our environment. In Ceph you simply create another RBD image and mount it on all the containers. We could mount and share other directories as well, but to keep this series short we will configure just these two shares.
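As a rough sketch of the mechanics (the image names, sizes, and mount points here are just examples, and this assumes a working Ceph cluster with the default rbd pool):

# Create one RBD image per shared directory (sizes in MB)
rbd create puppet-ssl  --size 1024
rbd create puppet-code --size 10240

# On a Nomad agent: map the images to block devices, create a
# filesystem (first time only), and mount them for the containers
sudo rbd map puppet-ssl     # returns a device such as /dev/rbd0
sudo rbd map puppet-code    # e.g. /dev/rbd1
sudo mkfs.ext4 /dev/rbd0
sudo mkfs.ext4 /dev/rbd1
sudo mkdir -p /mnt/puppet-ssl /mnt/puppet-code
sudo mount /dev/rbd0 /mnt/puppet-ssl
sudo mount /dev/rbd1 /mnt/puppet-code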

Puppet-in-Docker

Puppet-in-Docker provides many different Docker images that let us set up whichever Puppet topology best fits our needs. You can see a list of these containers here. For this initial series of articles I will be using the puppetserver-standalone container to keep things simple (well, as simple as they can get for this series).
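If you want to poke at the image by hand first, a quick test run looks something like this. The host paths are the Ceph mounts from the storage section above, and under Nomad the host port would be dynamically assigned rather than a fixed 8140:

# Run a standalone puppet server, bind-mounting the shared Ceph storage
docker run -d --name puppet --hostname puppet \
  -p 8140:8140 \
  -v /mnt/puppet-ssl:/etc/puppetlabs/puppet/ssl \
  -v /mnt/puppet-code:/etc/puppetlabs/code \
  puppet/puppetserver-standalone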

Infrastructure as Code

Spinning up a Docker container is stupid simple, so I won’t be covering it here. What we need is a way to orchestrate the process and make it more cloud-like. The basis of this environment will be built with Terraform, which gives us infrastructure as code. Terraform will configure our Consul/Nomad cluster on the fly, as well as our Ceph cluster.

I use OpenStack to house all my infrastructure servers, so I configured a Terraform provider that points at my OpenStack installation. Here is the code for the Ceph OSD servers.

resource "openstack_compute_instance_v2" "ceph-osd" {
  count = "${var.osdcount}"
  name = "${format("ceph-osd-%02d", count.index+1)}"
  image_name = "ubuntu-14-04"
  availability_zone = "nova"
  flavor_id = "e80482ce-8dfe-4727-86e2-a09a6bc53994"
  key_pair = "oskey"
  security_groups = ["default","web","ceph"]
  network {
    name = "provider"
  }

  connection {
    user        = "ubuntu"
    private_key = "${file("file/oskey.pem")}"
    agent       = false
  }

  provisioner "file" {
    source = "salt-bootstrap.sh"
    destination = "/tmp/salt-bootstrap.sh"

    connection {
      user        = "ubuntu"
      private_key = "${file("file/oskey.pem")}"
      agent       = false
    }
  }

  provisioner "remote-exec" {
    inline = [
      "chmod a+x /tmp/salt-bootstrap.sh",
      "/tmp/salt-bootstrap.sh ${format("ceph-osd-%02d", count.index+1)}",
    ]
  }

}
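With that in place, standing up the OSD servers is just the usual Terraform workflow; osdcount is the variable the count attribute reads above:

# Initialize, preview, and apply the plan for a three-OSD cluster
terraform init
terraform plan  -var 'osdcount=3'
terraform apply -var 'osdcount=3'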

Benefits

We will use similar code to stand up our Consul/Nomad cluster. This has many benefits, including the ability to build and destroy these clusters in a matter of minutes. If a cluster becomes unstable, simply destroy it and rebuild it in minutes. Since our configuration is stored on distributed shared storage, everything will be there when the cluster comes back up. Your Puppet agents will most likely not even notice the hiccup. In addition, this makes upgrades a snap: simply run blue/green deployments of the two versions and switch your load balancer to upgrade. Compare this to the weeks or months it could take to properly update a distributed Puppet Enterprise environment.
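As a hypothetical example of that cutover, assuming you have defined separate blue and green HAProxy backends (the puppet_blue and puppet_green names here are placeholders) for the old and new Puppet server versions:

# Point the frontend at the green (new) backend and reload; HAProxy
# reloads gracefully, so in-flight agent connections finish cleanly.
sudo sed -i 's/default_backend puppet_blue/default_backend puppet_green/' /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy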

Conclusion

This article was just a 10,000-foot overview of what we will be doing. Look for the articles that go into the weeds on how to configure this solution step by step over the next few days.

Sign up for my newsletter to see when I post these articles.

I hope you enjoyed this post. If it was helpful or if it was way off then please comment and let me know.
