Sending Mesos logs to Elasticsearch, Logstash, and Kibana (the ELK Stack)

By Bill Ward | April 8, 2017

In this post we enable more robust monitoring of our Mesos cluster by sending its logs to Logstash. This is very similar to my earlier post about sending logs from OpenStack.

This article assumes you already have a running Elastic Stack. If you don't, go through my post ELK Stack in under ten minutes and come back in 11 minutes. Mesos is made up of many components, all of which essentially install and log the same way. For this post we will limit our discussion to sending Mesos logs to our ELK Stack. The other components work similarly, so don't fret.

Preparing Logstash

We are going to configure Logstash to receive logs from our Mesos nodes. To do this, we will install the Beats input plugin for Logstash, which lets Logstash accept connections from the Filebeat shipper we will install on each Mesos node later. Logstash then turns the log lines into documents that Elasticsearch can index, making it easier for you to search the data.

Install the Beats input plugin

You can skip this step if you have already installed the Beats input plugin on your Logstash server. Run the following commands on your ELK Stack server:

# cd /usr/share/logstash/
# bin/logstash-plugin install logstash-input-beats
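If you want to confirm the plugin installed correctly, you can list Logstash's installed plugins; the exact output varies by version, but you should see logstash-input-beats listed:

# bin/logstash-plugin list | grep beats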

Configure Logstash to receive Filebeat logs

Create a new file named beats.conf in /etc/logstash/conf.d and add the following contents:

input {
  beats {
    port => "5043"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.133:9200"]
  }
  stdout { codec => rubydebug }
}

Be sure to replace the IP in hosts with the IP address of your own ELK Stack server.
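The configuration above simply forwards each log line to Elasticsearch as-is. If you want Logstash to break the Mesos log lines into searchable fields, you can optionally add a filter block to beats.conf. This is only a sketch: it assumes the standard glog-style lines that the Mesos master and slave write (for example, I0408 10:13:32.508756  2905 master.cpp:380] ...), so adjust the pattern if your log format differs.

filter {
  grok {
    # Split a glog-style Mesos line into severity, date, time, thread id,
    # source location, and the actual log message.
    match => {
      "message" => "^(?<severity>[IWEF])(?<log_date>\d{4}) %{TIME:log_time}\s+%{NUMBER:thread_id} (?<source_file>[^:]+):%{NUMBER:source_line}\] %{GREEDYDATA:log_message}"
    }
  }
}

Whether or not you add the filter, it is a good idea to validate the file before restarting Logstash (run from /usr/share/logstash as before):

# bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/beats.conf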

Configure Mesos nodes to send logs to Logstash

Install the Filebeat client by running these commands on each of your masters and slaves.

$ curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.2.0-amd64.deb
$ sudo dpkg -i filebeat-5.2.0-amd64.deb
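If dpkg finishes without errors Filebeat is installed, but it is easy to double-check the package before moving on:

$ dpkg -s filebeat | grep -i version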

Configure the Filebeat client

Edit the Filebeat configuration to send the logs to our ELK Stack:

$ sudo vim /etc/filebeat/filebeat.yml

Make sure you have the following configuration enabled on all your Mesos masters:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/mesos/mesos-master.*  

output.logstash:
  hosts: ["192.168.1.133:5043"]

For each of your slaves use this configuration:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/mesos/mesos-slave.*  

output.logstash:
  hosts: ["192.168.1.133:5043"]

Again, make sure you use the correct IP for your ELK Stack. Notice that the paths setting for the log input uses a wildcard, so it picks up every mesos-master.* (or mesos-slave.*) file in the /var/log/mesos directory.
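If you run a mix of masters and slaves, it can also help to tag each event with its role so you can filter on it later in Kibana. Filebeat's optional fields setting does this; here is a sketch for a master (the mesos_role field name is just an example I made up, so pick whatever fits your setup):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/mesos/mesos-master.*
  fields:
    mesos_role: master

output.logstash:
  hosts: ["192.168.1.133:5043"]

On a slave you would set mesos_role: slave. The value shows up in Elasticsearch under fields.mesos_role unless you also set fields_under_root: true.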

Start and enable the Filebeat service.

$ sudo systemctl enable filebeat.service
$ sudo systemctl start filebeat.service
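At this point Filebeat should be shipping logs. A quick way to check on each node is to ask systemd for the service status and, if anything looks off, skim Filebeat's own log (on a .deb install it normally writes to /var/log/filebeat/):

$ sudo systemctl status filebeat.service
$ sudo tail /var/log/filebeat/filebeat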

Load the Filebeat index template

Next we need to load the Filebeat index template into Elasticsearch. You only have to do this once, even if you are setting up logging for multiple servers with Filebeat.

curl -XPUT 'http://192.168.1.133:9200/_template/filebeat' -d@/etc/filebeat/filebeat.template.json

Make sure to use the correct IP address and port for your Elasticsearch server.
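You can confirm that Elasticsearch accepted the template by asking for it back; you should get a JSON document describing the filebeat mappings:

curl 'http://192.168.1.133:9200/_template/filebeat?pretty'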

Viewing Mesos logs in Kibana

Go back to your Kibana page and you will see that your logs are available for searching! They should show up under the 'logstash-*' index pattern. Click on Discover and search for 'mesos-master' and you should see all of your Mesos master logs.

Kibana showing Mesos logs

Make your life easier by creating a saved search using the following search string (this assumes your Mesos hosts are named mesos-*):

beat.hostname:mesos-*
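If nothing shows up in Discover, you can take Kibana out of the picture and query Elasticsearch directly to confirm that documents are arriving. This is just a troubleshooting aid using the standard search API; adjust the IP and the search term to match your setup:

curl 'http://192.168.1.133:9200/logstash-*/_search?q=mesos&size=1&pretty'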

See, I told you that would be easy. If you liked this post, please sign up for my mailing list below to get an update every Friday about my latest posts for the week.
