Deploying Elasticsearch, Logstash, and Kibana with Puppet

By Bill Ward | April 19, 2017

In an earlier post I walked you through the hard way of deploying an ELK stack using packages. That approach is error-prone and doesn’t sit well with the current DevOps/Infrastructure-as-Code mantra. So this post will help you accomplish the same setup, but using Puppet for configuration management.

We will be using the same configuration as my earlier post ELK STACK IN UNDER TEN MINUTES, but this time we are going to do it with Puppet. This will be a short post because, frankly, there isn’t much to getting this up and running.

Requirements

I assume you already have the following items taken care of:

  • Puppet running in your environment, either open source or enterprise. I don’t cover roles or profiles in this post, but suffice it to say that this would all go into a single role called something like ‘elk_server’. You would then split the different services (Elasticsearch, Logstash, and Kibana) into separate profiles that are included in that role.
  • At least 4 GB of memory on the server you want to configure as an ELK server.
  • An ELK server OS that is supported by Puppet’s Java module. For details, see the instructions for the Java Puppet module. In this post I use Ubuntu 16.04.
  • Some working knowledge of Puppet. This article also doesn’t go into using r10k, MCollective, or Hiera data to manage this code, but if you have a good understanding of Puppet it should translate well.
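As a sketch of the role/profile split mentioned above (the class and file names here are hypothetical, not part of any module installed below):

```puppet
# site-modules/role/manifests/elk_server.pp (hypothetical layout)
class role::elk_server {
  include profile::elasticsearch
  include profile::logstash
  include profile::kibana
}

# Each profile class (e.g. profile::elasticsearch) would then wrap the
# corresponding class and instance declarations shown in the node
# definition later in this post.
```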

Install the Modules

The first thing we need to do is install the Puppet modules that deploy our ELK stack. Run the following commands as root on your Puppet master.

# puppet module install elastic-elasticsearch --version 5.1.1
# puppet module install elastic-logstash --version 5.1.0
# puppet module install elastic-kibana --version 0.2.1

The node definition below also uses the filebeat class and the filebeat::prospector defined type, which come from the pcfens-filebeat module, so install that one as well:

# puppet module install pcfens-filebeat
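If you manage your modules with r10k (mentioned in the requirements), the same version pins can live in a Puppetfile instead. This is a sketch; the pcfens-filebeat entry is an assumption, added to cover the filebeat class used in the node definition below:

```ruby
# Puppetfile — pin the ELK modules to the versions used in this post
mod 'elastic-elasticsearch', '5.1.1'
mod 'elastic-logstash',      '5.1.0'
mod 'elastic-kibana',        '0.2.1'
mod 'pcfens-filebeat'        # provides the filebeat class; version left unpinned
```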

Naturally, this will also download any dependencies that the modules need.

Puppet Node Configuration

After you have the Puppet agent installed on the client system, add the following node definition to your site.pp file on the master.

/etc/puppetlabs/code/environments/production/manifests/site.pp

node 'logging.openstacklocal' {

  $myconfig =  @("MYCONFIG"/L)
input {
  beats {
    port => 5043
  }
}
output {
  elasticsearch {
    hosts => "192.168.1.133:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout { codec => rubydebug }
}
| MYCONFIG

  class { 'elasticsearch':
    java_install      => true,
    manage_repo       => true,
    repo_version      => '5.x',
    restart_on_change => true,
  }

  elasticsearch::instance { 'es-01':
    config => {
      'network.host' => '192.168.1.133',
    },
  }

  class { 'logstash':
    settings => {
      'http.host' => '192.168.1.133',
    }
  }

  logstash::configfile { '02-beats-input.conf':
    content => $myconfig,
  }

  logstash::plugin { 'logstash-input-beats': }

  class { 'kibana':
    config => {
      'server.host'       =>  '192.168.1.133',
      'elasticsearch.url' => 'http://192.168.1.133:9200',
      'server.port'       => '5601',
    }
  }

  class { 'filebeat':
    outputs => {
      'logstash' => {
        'hosts' => [
          '192.168.1.133:5043',
        ],
        'index' => 'filebeat',
      },
    },
  }

  filebeat::prospector { 'syslogs':
    paths    => [
      '/var/log/auth.log',
      '/var/log/syslog',
    ],
    doc_type => 'syslog-beat',
  }

}

There are a couple of differences from my earlier post. The Elasticsearch Puppet module requires that Elasticsearch run as a named instance. Also, we configure the stack to use the Filebeat template, so the index will be filebeat-* instead of logstash-*.

Run Puppet

Run Puppet on your client system and everything will be configured for you.

NOTE: Depending on your ELK server’s hardware, it can take a considerable amount of time for your index to show up in Kibana. Make sure that you are indeed pushing logs to it with Filebeat. For examples, you can follow some of my other posts on the subject:

SENDING OPENSTACK LOGS TO ELASTICSEARCH LOGSTASH KIBANA

SENDING MESOS LOGS TO ELASTICSEARCH LOGSTASH KIBANA
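One way to check whether logs are actually arriving: the Logstash output above writes daily indices using the `%{+YYYY.MM.dd}` pattern. A quick sketch of checking for today’s index (the host IP is the one used in the manifest above; run the curl on a machine that can reach the ELK server):

```shell
# Build today's expected index name from the pattern in the Logstash output
INDEX="filebeat-$(date +%Y.%m.%d)"
echo "$INDEX"

# Then ask Elasticsearch whether it exists (uncomment on the ELK server):
# curl "http://192.168.1.133:9200/_cat/indices/${INDEX}?v"
```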

Conclusion

Those setups will work with this Puppet-based deployment as well.

I hope you enjoyed this post. If it was helpful or if it was way off then please comment and let me know.
