Consul is a service discovery solution that also provides failure detection and key/value storage, and is datacenter-aware out of the box. I use it in many projects over older solutions like ZooKeeper because, in my humble opinion, it is much more robust. In this post we will walk through installing a basic Consul cluster on Ubuntu.
Jenkins doesn’t come with high availability out of the box. Using Nomad and Consul, we can configure Jenkins to be highly available. Furthermore, we can use Terraform to make deployment a breeze.
This post is the next in a series that shows how to integrate a complete production Puppet environment with Serf. In this post we learn how to use Serf to fire off r10k updates on your compile masters. This takes the place of running mco r10k synchronize.
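As a rough sketch of the idea (the handler path, the "deploy" event name, and the r10k flags are my assumptions, not taken from this post), each compile master runs a Serf event handler that shells out to r10k when a user event fires. Serf exports `SERF_EVENT` (and `SERF_USER_EVENT` for user events) to its handlers:

```shell
# deploy.sh -- hypothetical Serf user-event handler for a compile master.
# Fire it cluster-wide with: serf event deploy
handle_serf_event() {
  case "${SERF_EVENT}:${SERF_USER_EVENT}" in
    user:deploy)
      if command -v r10k >/dev/null 2>&1; then
        # Deploy all environments plus their Puppetfile modules.
        r10k deploy environment --puppetfile
      else
        echo "r10k not installed; skipping deploy" >&2
        return 1
      fi
      ;;
    *)
      # Ignore member-join/member-leave and other events.
      :
      ;;
  esac
}

handle_serf_event
```

The agent on each compile master would register it with something like `serf agent -event-handler=user:deploy=/usr/local/bin/deploy.sh`.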
Serf is a very powerful tool for managing and orchestrating your clusters. The power comes from the ability to write custom queries and events to handle almost any situation. In this post we will configure a Serf cluster to handle a sample custom query that gets the average CPU utilization from each of our cluster members in a matter of seconds.
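A minimal sketch of what such a query handler could look like (the script name and the choice to sample `/proc/stat` are my assumptions): the agent would be started with something like `-event-handler=query:cpu=/usr/local/bin/cpu-util.sh`, and whatever the handler prints becomes that node's query response.

```shell
#!/bin/sh
# cpu-util.sh -- hypothetical handler for a custom "cpu" Serf query.
# Samples the aggregate cpu line from /proc/stat twice, one second apart,
# and prints the busy percentage over that interval.
read -r _ u1 n1 s1 i1 _rest < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 _rest < /proc/stat

busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
idle=$(( i2 - i1 ))
total=$(( busy + idle ))
[ "$total" -gt 0 ] || total=1  # guard against a zero-tick interval
pct=$(( 100 * busy / total ))
echo "${pct}%"
```

Running `serf query cpu` from any member then collects one such response per node.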
This post will walk you through setting up a quick Serf cluster on Ubuntu 16.10 servers. Serf is an awesome clustering application that manages cluster membership, is decentralized, and recovers from downed nodes quickly.
If you have used Puppet in a production environment before, then you have probably used MCollective, the orchestration system for Puppet Enterprise. Most of the time this works out great. In my shop, though, we have had many problems with MCollective. This post gives you a way to replace MCollective with HashiCorp’s Serf and explains why this is a much better solution.
After reading The DevOps Handbook it became obvious that the current trend in IT is defining your infrastructure as code. This gives you infrastructure that you can build and destroy at will. In this article we define a Consul cluster in code using HashiCorp’s Terraform.
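To give a flavor of what that looks like, here is a hypothetical fragment (the provider, AMI, server count, and retry-join address are placeholders I chose, not values from this post):

```hcl
provider "aws" {
  region = "us-east-1"
}

# Three Consul server nodes; the AMI and join address are placeholders.
resource "aws_instance" "consul_server" {
  count         = 3
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"

  user_data = <<-EOF
    #!/bin/bash
    consul agent -server -bootstrap-expect=3 \
      -data-dir=/opt/consul -retry-join=10.0.1.10 &
  EOF

  tags = {
    Name = "consul-server-${count.index}"
  }
}
```

A `terraform apply` builds the cluster, and a `terraform destroy` tears it down, which is exactly the build-and-destroy-at-will property described above.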
Puppet is a great tool for configuration management. The biggest problem I have found with Puppet is scaling it to large (1,000+ node) production environments. With Puppet Enterprise, you have to move from a single large monolithic server to a model that splits the Puppet services across different servers. This adds a great deal of complexity to your Puppet environment and makes it error-prone and, frankly, pretty fragile: not the most ideal solution for a production environment. Luckily, Puppet has caught on to the DevOps wave and started a project called Puppet-in-Docker, which packages the Puppet services as containers. This is just what we need to create a Puppet-as-a-Service deployment model where we can quickly and repeatedly create production-like Puppet environments.
Previously, I walked you through the hard way of deploying an ELK stack using packages. That approach can be error-prone and really doesn’t ring well with the current DevOps/Infrastructure-as-Code mantra. So this post will help you accomplish the same setup, but using Puppet for configuration management.
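As a hypothetical sketch of where this ends up (assuming the public elastic modules from the Puppet Forge; the node name, instance name, and module choices are my assumptions, not from this post), the whole stack can collapse into a short manifest:

```puppet
# Manage the ELK stack with the elastic Forge modules
# (elastic/elasticsearch, elastic/logstash, elastic/kibana).
node 'elk.example.com' {
  class { 'elasticsearch': }
  elasticsearch::instance { 'es-01': }

  class { 'logstash': }
  class { 'kibana': }
}
```

Compared with hand-installing packages, the manifest is repeatable: rebuilding the box is a single Puppet run instead of a checklist.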