Streaming Spring Boot Application Logs to ELK Stack - Part 1
What is the ELK Stack?
Logstash is a tool for managing logs. It supports virtually any type of log, including system logs, error logs, and custom application logs. It can receive logs from numerous sources, including syslog, messaging systems (for example, RabbitMQ), and JMX, and it can output data in a variety of ways, including email, WebSockets, and Elasticsearch.
Elasticsearch is a full-text, real-time search and analytics engine that stores the log data indexed by Logstash. It is built on the Apache Lucene search engine library and exposes data through REST and Java APIs. Elasticsearch is scalable and is built to be used by distributed systems.
Kibana is a web-based graphical interface for searching, analyzing, and visualizing log data stored in the Elasticsearch indices. It utilizes the REST interface of Elasticsearch to retrieve the data, and not only enables users to create customized dashboard views of their data, but also allows them to query and filter the data in an ad hoc manner.
Log data flows from the application into Logstash, which processes it and stores it in Elasticsearch; Kibana then queries Elasticsearch to visualize the data.
Install & Configure Logstash
Install
Download Logstash zip from https://www.elastic.co/downloads/logstash
Extract it (unzip it)
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.1.1.zip
unzip logstash-5.1.1.zip
Configuration
A typical Logstash config file consists of three main sections: input, filter, and output. Each section contains plugins that perform the relevant part of the processing.
Create a logstash.conf file in the root directory of the Logstash installation and copy the following code into it:
input {
  tcp {
    port => 9600
    type => syslog
  }
  udp {
    port => 9600
    type => syslog
  }
}
filter {
  grok {
    match => [ "message", "%{GREEDYDATA}" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    workers => 1
  }
}
Input section
The input section defines where Logstash reads its input data from. In our case Logstash listens for syslog events on port 9600, over both TCP and UDP; this is the port to which the Cloud Foundry log drain, created later in this article, will send the application logs.
input {
  tcp {
    port => 9600
    type => syslog
  }
  udp {
    port => 9600
    type => syslog
  }
}
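If Logstash runs on the same machine as the application, you can also read a log file directly instead of receiving syslog events. A sketch of such an alternative input, assuming a hypothetical log path and that one log entry may span multiple lines (for example, stacktraces), which is what the multiline codec handles:

```
input {
  file {
    # Hypothetical path; point this at your actual Spring Boot log file
    path => "/var/log/app/spring-boot.log"
    codec => multiline {
      # Lines that do NOT start with a timestamp belong to the previous entry
      pattern => "^%{TIMESTAMP_ISO8601}"
      negate => true
      what => "previous"
    }
  }
}
```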
Filter section
The filter section contains plugins that perform intermediary processing on a log event. In our case, each event is a single syslog line forwarded by the log drain. The grok filter shown here is intentionally minimal: %{GREEDYDATA} matches the entire line, so events are indexed as-is. With a more specific pattern you could additionally:
Tag a log event if it contains a stacktrace. This will be useful when searching for exceptions later on.
Parse out (or grok, in Logstash terminology) the timestamp, log level, pid, thread, class name (logger, actually) and log message.
Specify the timestamp field and format, which Kibana will use later for time-based searches.
The minimal filter section looks like this:
filter {
  grok {
    match => [ "message", "%{GREEDYDATA}" ]
  }
}
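If you want the structured fields described above (timestamp, level, pid, thread, logger), the minimal %{GREEDYDATA} pattern can be replaced with one tailored to Spring Boot's default console log format. A sketch, not tested against your exact layout, so treat the pattern as a starting point:

```
filter {
  grok {
    # Matches lines like:
    # 2016-12-01 12:00:00.123  INFO 1234 --- [main] com.example.Demo : Started Demo
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:level}\s+%{NUMBER:pid}\s+---\s+\[\s*%{DATA:thread}\]\s+%{DATA:class}\s*:\s+%{GREEDYDATA:logmessage}" ]
  }
  date {
    # Use the parsed timestamp as the event time instead of the ingest time
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
}
```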
Output section
The output section contains output plugins that send event data to a particular destination. Outputs are the final stage in the event pipeline. We will send our log events to Elasticsearch; for debugging, you can also add a stdout output with the rubydebug codec.
Compared to the filter section, the output section is rather straightforward:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    workers => 1
  }
}
Finally, the three parts (input, filter and output) need to be combined and saved into the logstash.conf config file. Once the config file is in place and Elasticsearch is running, we can run Logstash:
bin/logstash -f logstash.conf
If everything went well, Logstash is now shipping log events to Elasticsearch. You can confirm that a daily index has been created with curl -XGET http://localhost:9200/_cat/indices
Install Elasticsearch
Download elasticsearch zip file from https://www.elastic.co/downloads/elasticsearch
Extract it to a directory (unzip it)
Run it (bin/elasticsearch or bin/elasticsearch.bat on Windows)
Check that it runs using curl -XGET http://localhost:9200
Here’s how to do it (steps are written for OS X but should be similar on other systems):
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.1.1.zip
unzip elasticsearch-5.1.1.zip
cd elasticsearch-5.1.1
bin/elasticsearch
Elasticsearch should be running now. You can verify with:
curl -XGET http://localhost:9200
If all is well, you should get the following result:
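The exact values depend on your node name and cluster id, but the response should look similar to this:

```
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "xxxxxxxxxxxxxxxxxxxxxx",
  "version" : {
    "number" : "5.1.1",
    "lucene_version" : "6.3.0"
  },
  "tagline" : "You Know, for Search"
}
```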
Install Kibana
Download Kibana archive from https://www.elastic.co/downloads/kibana
Please note that you need to download the appropriate distribution for your OS; the URL in the example below is for OS X
Extract the archive
Run it (bin/kibana)
Check that it runs by pointing the browser to the Kibana’s WebUI
Here’s how to do it:
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.1.1-darwin-x86_64.tar.gz
tar xvzf kibana-5.1.1-darwin-x86_64.tar.gz
cd kibana-5.1.1-darwin-x86_64
bin/kibana
Kibana should now be running. Point your browser to:
http://localhost:5601
First, you need to point Kibana to the Elasticsearch indices of your choice. Logstash creates indices with the name pattern logstash-YYYY.MM.dd. In Kibana 5, go to Management → Index Patterns and configure the pattern:
Index name or pattern: logstash-*
Time-field name: @timestamp
Click on “Create”
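Once the index pattern is configured, you can explore the log events in Kibana's Discover tab. The search bar accepts Lucene query syntax; a few example queries, using the message and type fields produced by the configuration above:

```
type:syslog                  # events tagged with the syslog type by our input
message:*Exception*          # lines mentioning an exception
message:"Started Application"  # a specific phrase (hypothetical app startup line)
```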
Create log-drain service in PCF
Create a user-provided log draining service and bind the service to an application. The configuration above tells logstash to listen on port 9600, so the user-provided service creation and binding might look something like this:
$ cf cups logstash-drain -l syslog://[logstashserver]:9600
$ cf bind-service [app-name] logstash-drain
$ cf restart [app-name]
where [logstashserver] is the name or IP address of the server where Logstash is running and [app-name] is the name of an application running on Cloud Foundry. After the restart, new log lines from the application should start appearing in the logstash-* indices and in Kibana.
About the Creator
Karthikeyan Sadayamuthu
Lead Developer @Comcast, Founder of devxchange.io