
Former Netflix architect Allen Wang posted on SlideShare back in 2015: “Netflix is a logging company that occasionally streams video.”

Five years ago, Netflix was already generating about 400 billion events per day across different event types. Today, organizations can’t afford slow application performance or downtime. To prevent both, engineers must rely on the data generated by their applications and infrastructure.


Centralized log management and analytics solutions such as Elastic have become a popular way to monitor applications and environments, so that failures can be detected and resolved proactively. In this post, you’ll learn how to use Elastic with MuleSoft. We’ll cover the key components of the Elastic Stack (Logstash, Beats, Elasticsearch, and Kibana) as well as four different options for externalizing MuleSoft logs to the Elastic Stack.

Elastic Stack overview

ELK stands for the three core Elastic products: Elasticsearch, Logstash, and Kibana.

To understand what the core Elastic products do, we will use a simple architecture:

  1. The logs will be created by an application.
  2. Logstash aggregates the logs from different sources and processes them.
  3. Elasticsearch stores and indexes the data in order to search it.
  4. Kibana is the visualization tool that makes sense of the data.

What is Logstash?

Logstash is a data collection tool. Its pipeline consists of three elements: inputs, filters, and outputs.
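As a quick illustration (a minimal sketch, assuming Logstash is installed locally), a pipeline can even be defined inline with the -e flag; here stdin is the input, there are no filters, and stdout is the output:

# Run from the Logstash installation directory; prints each event it reads from stdin
./bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'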

What is Elasticsearch?

Elasticsearch (ES) is a NoSQL database based on the Lucene search engine. ES provides RESTful APIs to search and analyze the data. It can store different data types, such as numbers, text, and geo data, whether structured or unstructured.
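For example (a sketch, assuming a local Elasticsearch instance on the default port 9200 and an index called mule-logs), a simple search over the REST API looks like this:

# Find documents in the mule-logs index whose message field contains "ERROR"
curl -X GET "localhost:9200/mule-logs/_search?pretty" \
  -H "Content-Type: application/json" \
  -d '{ "query": { "match": { "message": "ERROR" } } }'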

What is Kibana?

Kibana is a data visualization tool. It helps you quickly gain insight into the data and offers capabilities such as charts and dashboards. Kibana works on the data stored in Elasticsearch.

What is Beats?

Beats is a platform of lightweight data shippers that run on edge machines and sit between the data source and Logstash or Elasticsearch.

Beats provides different shippers to ingest different types of data, for example Filebeat for log files and Metricbeat for metrics; see the official documentation for descriptions of the available Beats.
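As a quick example (a sketch, assuming Filebeat is installed locally and a filebeat.yml already points at your log files and at Logstash or Elasticsearch):

# Validate the configuration, then run Filebeat in the foreground
./filebeat test config -c filebeat.yml
./filebeat -e -c filebeat.yml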

Read the official documentation on how to set up the Elastic Stack. 

4 options to externalize CloudHub logs to Elastic Stack

Option #1: Log4j – Elastic

  1. Add the HTTP Log4j appender to your Mule application (snippet below).
  2. Replace the URL and specify the index you want to use (e.g. mule-logs).
  3. Use the _doc or _create endpoint in the URL so the documents are indexed and the index shows up in Kibana.
  4. Add the Elastic authorization key.
  5. If you don’t want the token to expire, use an ApiKey.

Note: The credentials are the base64 encoding of the API key ID and the API key joined by a colon.

<Appenders>
    <Http name="ELK"
          url="https://d597bb4cc8214ec999beff51c1d97879.eu-central-1.aws.cloud.es.io:9243/mule-logs/_doc">
        <JsonLayout compact="true" eventEol="true" properties="true" />
        <Property name="kbn-xsrf" value="true" />
        <Property name="Content-Type" value="application/json" />
        <Property name="Authorization" value="ApiKey replaceWithKey" />
    </Http>
</Appenders>
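The ApiKey credential itself is just the base64 encoding described in the note above. For illustration (placeholder values, not a real key):

# Encode "<API key ID>:<API key>" and paste the result after "ApiKey " in the Authorization property
printf 'myKeyId:myKeySecret' | base64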

Option #2: Anypoint Platform APIs – Elastic

  1. Follow the “Observability-of-MuleSoft-Cloudhub” tutorial developed by Elastic Engineering.
  2. Four Logstash pipelines will be used to retrieve and forward the logs.
    1. Anypoint Login
    2. Get Cloudhub Logs
    3. Get API Events
    4. Get Worker Stats
  3. The four pipelines retrieve the data via the Anypoint Platform APIs and forward it to Elasticsearch.
  4. Example of retrieving the organizationId with Logstash:
# 2. Get organization.id
  http {
    url => "https://anypoint.mulesoft.com/accounts/api/me"
    verb => "GET"
    headers => {
      "Authorization" => "Bearer %{access_token}"
    }
  }
  mutate {
    add_field => {
      "organization_id" => "%{[body][user][organization][id]}"
    }
  }
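For reference (a sketch, assuming you already hold an access token from the Anypoint login pipeline, here in the placeholder variable ACCESS_TOKEN), the same lookup can be reproduced with the underlying REST call:

# Returns the user profile; the organization id is at body.user.organization.id
curl -H "Authorization: Bearer $ACCESS_TOKEN" https://anypoint.mulesoft.com/accounts/api/me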

Option #3: Log4j – AWS SQS – Elastic

  1. Clone the SQS-log4j appender.
  2. Build the SQS-log4j appender with Maven:
mvn clean install
  3. Add the dependency to your Mule application’s pom.xml file:
<dependency>
    <groupId>com.avioconsulting</groupId>
    <artifactId>log4j2-sqs-appender</artifactId>
    <version>1.0.0</version>
</dependency>
  4. Add the SQS appender to the Mule log4j2.xml file and replace the following values (see the note after the snippet for how the ${sys:...} placeholders are resolved):
    ${sys:awsAccessKey}
    ${sys:awsSecretKey}
    mule-elk (only if the queue name is different)
<Appenders>
    <SQS name="SQS"
         awsAccessKey="${sys:awsAccessKey}"
         awsRegion="eu-central-1"
         awsSecretKey="${sys:awsSecretKey}"
         maxBatchOpenMs="10000"
         maxBatchSize="5"
         maxInflightOutboundBatches="5"
         queueName="mule-elk">
        <PatternLayout pattern="%-5p %d [%t] %c: ##MESSAGE## %m%n"/>
    </SQS>
</Appenders>
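The ${sys:awsAccessKey} and ${sys:awsSecretKey} placeholders are resolved from Java system properties. As a sketch (placeholder values only), when testing locally you can pass them as VM arguments in the Anypoint Studio run configuration; on CloudHub you would supply them as application properties in Runtime Manager instead:

# Hypothetical VM arguments for a local run; never hard-code real credentials
-DawsAccessKey=yourAccessKey -DawsSecretKey=yourSecretKey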
  5. Go to the folder where Logstash is installed (e.g. cd ~/logstash-7.0.0).
  6. Create a new config file in the config folder and call it logstash-sqs.conf (you can also choose a different name).
  7. Insert the following code into your logstash-sqs.conf file and replace the SQS values:
    eu-central-1 (only if your region is different)
    mule-elk (only if your queue name is different)
    enterYourSQSAccessKey
    enterYourSQSSecretKey
input {
  sqs {
    region => "eu-central-1"
    queue => "mule-elk"
    access_key_id => "enterYourSQSAccessKey"
    secret_access_key => "enterYourSQSSecretKey"
  }
}

filter {
  json {
      # Parses the incoming JSON message into fields.
      source => "message"
    }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    codec => "json"
    index => "mule-sqs"
    #user => "elastic"
    #password => "changeme"
  }
}
  8. Create an AWS account (if you don’t already have one) and search the Management Console for “SQS.”
  9. Create a new queue on AWS; the queue name will be used afterward.
  10. Make sure the right permissions are granted for the queue. You might need to create a new IAM user with the right role.
  11. Run Logstash:
Open your terminal and change to the Logstash directory, e.g.:
cd ~/logstash-7.0.0

Run Logstash with the following command (make sure you tell Logstash which .conf file to use; in our case it is logstash-sqs.conf):
./bin/logstash -f config/logstash-sqs.conf 
  12. Open Kibana on your localhost and check the logs.
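If you prefer to verify outside Kibana (a sketch, assuming Elasticsearch runs locally on the default port), you can also query the mule-sqs index directly:

# Show one indexed log event from the mule-sqs index
curl "localhost:9200/mule-sqs/_search?size=1&pretty"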

Option #4: JSON Logger – Anypoint MQ – Elastic

  1. Clone the JSON Logger Plugin repository and follow the instructions. For additional information, follow the tutorial by MuleSoft Professional Services.
  2. Create an Anypoint MQ queue in Anypoint Platform.
  3. Copy the queue ID and URL.
  4. In Anypoint Platform, create a Client App ID and a Client Secret (from the menu on the left).
  5. Open the JSON Logger config in Anypoint Studio and go to the “Destinations” tab.
  6. Replace the URL and enter the queue ID, Client App ID, and Client Secret.
  7. Trigger your Mule application and check in Anypoint Platform that the messages arrive on the queue.
  8. Add the Anypoint MQ connector from Exchange.
  9. Add an Anypoint MQ Subscriber, a Transform Message, and an HTTP Request to your Mule application.
  10. Configure the Anypoint MQ Subscriber with the queue URL, Client App ID, and Client Secret. Add the queue ID (mule-elastic-queue) to the configuration.
  11. Configure the HTTP Request to call the Elasticsearch REST API. In our case, we configure the call to our localhost with the index “mule-logs” (an example call is shown after this list).
    Important: Make sure you use the right port for the Elasticsearch instance.
  12. Add a Transform Message with the payload you want to send to Elasticsearch:
%dw 2.0
output application/json
---
{
    "user": "MaxTheMule",
    "post_date": "2020-11-15T14:12:12",
    "message": payload
}
  13. Go to Kibana and select Index Patterns in the left menu. Create a new index pattern for mule-logs*.
  14. As soon as you trigger the Mule app, you will see the logs in your local Kibana.
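For reference (a sketch, assuming Elasticsearch runs locally on the default port 9200 and a placeholder message body), the HTTP Request in step 11 effectively performs a call like this:

# Index one log document into the mule-logs index
curl -X POST "localhost:9200/mule-logs/_doc?pretty" \
  -H "Content-Type: application/json" \
  -d '{ "user": "MaxTheMule", "post_date": "2020-11-15T14:12:12", "message": "sample log message" }'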

For more best practices on how to use Anypoint Platform, check out our developer tutorials.