Bootstrapping an ELK stack

This setup for providing an ELK stack is intended for small applications. If you expect a large volume of logging, set up a dedicated ElasticSearch cluster yourself.

You can use the docker-compose.yml file in the test-project for bootstrapping an ELK stack.

To start it, use the following command:

docker-compose up -d

This will bootstrap three Docker containers:

  • ElasticSearch: the official ElasticSearch image from Elastic

  • Kibana: the official Kibana image from Elastic. Available at http://localhost:5601/

  • Logstash: a custom Logstash image. It extends the official Logstash image with a custom logstash.conf that uses different input ports to distinguish between environments. The Dockerfile also allows additional Logstash plugins to be enabled.

Index patterns

If desired, index patterns can be created at startup. This is done by setting the 'logging.kibana.server' property to a valid Kibana URL.
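For example, to point the installer at the Kibana instance from the docker-compose setup above (the URL below is an assumption based on the default Kibana port; adjust it to your own environment):

logging.kibana.server=http://localhost:5601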

After doing so the IndexPatternsInstaller will run and create index-patterns for the hikari, requests, error and long-methods appenders. This will be done for every Spring profile that is active.

Index-patterns will have the following pattern: environment-logs-application_name-appender*.

Example index pattern:
prod-logs-logging_module-requests

Send logging to ELK stack

The logging module provides an easy way to send your logging to Logstash. This is done by adding appenders to your loggers in the logback configuration file. These appenders ensure that the right format is used, so all fields are sent to Logstash correctly.

The logging module provides the following appenders that you can use to send your data to ElasticSearch:

  • elk-methods: This appender uses MethodLoggerProvider to provide method, methodLevel and duration as fields for ElasticSearch.

  • elk-requests: This appender uses RequestLoggerJsonProvider to provide additional fields like remoteAddress, url and status for ElasticSearch.

  • elk-hikari: This appender uses HikariLoggerJsonProvider to provide the state of the Hikari Connection pool as fields for ElasticSearch. These fields include: total, active, idle and waiting connections.

  • elk-errors: This appender is configured with a ThresholdFilter so only logs of level ERROR are allowed.

Configuring the Logstash appenders

Before you can send your logs to Logstash, you need to set the Logstash server and application name in your application properties:

logging.logstash.server
logging.logstash.application
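
For example (the values below are placeholders; use the host and port your Logstash instance listens on and your own application name):

logging.logstash.server=localhost:4560
logging.logstash.application=my-application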

Once these properties are set, you can start using the provided appenders by including them in your logback.xml:

<include resource="elk-appenders.xml" />

You can then add these appenders to your configured loggers to forward their output to Logstash. An example configuration for the method, request, Hikari and error loggers:

<logger name="com.foreach.across.modules.logging.method.MethodLogger" level="INFO" additivity="false">
    <appender-ref ref="elk-methods"/>
</logger>
<logger name="com.foreach.across.modules.logging.request.RequestLogger" level="OFF" additivity="false">
    <appender-ref ref="elk-requests"/>
</logger>
<logger name="com.zaxxer.hikari.pool.HikariPool" level="DEBUG" additivity="false" >
    <appender-ref ref="elk-hikari"/>
</logger>
<root level="INFO">
    <appender-ref ref="elk-errors"/>
</root>