JSON logging in Mule 4: Getting the most out of your logs

JSON Logger Mule 4 logo

This is a sequel to my previous blog post about JSON logging for Mule 3. In this post, I’ll touch upon the re-architected version of the JSON logger for our awesome Mule 4 release, built while leveraging the (just as awesome) SDK!

A word on the new SDK

Before diving into the new JSON logger for Mule 4, I’d like to acknowledge the impact of the new SDK on some of the core features offered in this release of the connector. If you ever had the need to write a custom connector with DevKit in Mule 3, then think of the SDK as your “DevKit wishlist” come true. I really have no other way to explain how cool and powerful our new SDK is (and it had to be!), as it powers every single connector in Mule 4. The ability to leverage the same framework for all components resulted in standardization and the much more strongly typed system that Mule 4 is today.

So what’s new with JSON logger for Mule 4?

If you are here, I’m assuming two things:

  1. You already know about Mule 4’s new architecture and paradigms.
  2. You already read part 1 of this blog series.

If you did, then you will understand that the JSON logger connector had to be re-architected to keep its main tenet of customization through editing of JSON schemas (rather than having to fully understand the SDK’s inner workings) while leveraging all the new capabilities offered by the new SDK.

If you ever used JSON logger in Mule 3, then these are some of the coolest changes in this release:

1. Location Info (huge!!)

location icon

Ok, we are set on the right path to logging our way to a better (DevOps) life, but then an issue arises and we need to figure out where inside the Mule application certain things are happening. How do we figure out where in our extensive, beautiful code the specific log entry we are looking for lies? After all, we have tons of logger components throughout the application.

This was one of my top wish list items from the DevKit era and something I was desperate to incorporate in the Mule 3 version of the JSON logger – and guess what? In Mule 4, with the SDK, it is now a reality! As per the docs, the SDK now offers the ability to obtain context information for our operations, giving us access, for instance, to the Location object, which basically tells you everything you want to know about where you are in your application. Given that an image (or JSON) says more than a thousand words, this is what it looks like:

location info code screenshot

As you can see, critical troubleshooting information such as the name of the flow (rootContainer), name of the xml configuration file (fileName), and the exact line number where the logger is located (lineInFile) can now be part of the log metadata and configured through the global config element:

log location info screenshot
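To make this concrete, here is a sketch of what a log entry with location info enabled might look like (the field values are illustrative; everything other than the locationInfo field names mentioned above is assumed):

```json
{
  "timestamp": "2020-06-04T06:58:01.730Z",
  "priority": "INFO",
  "message": "Request received from Customer with ID: 1234",
  "locationInfo": {
    "rootContainer": "my-api-main-flow",
    "fileName": "my-api.xml",
    "lineInFile": "42"
  }
}
```

With this metadata indexed in your log aggregation platform, a search for a given fileName and lineInFile takes you straight to the logger component that emitted the entry.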

2. Disabled fields

checklist icon

A common request I’ve had ever since I published the first JSON logger was how to filter “sensitive” data, particularly in staging and production environments. After thinking about it for a long time, my answer is a combination of technology and development practices.

What I propose is decoupling how we log functional data (data that provides meaningful application state information without including sensitive data) vs. troubleshooting data (typically raw data coming in and out of different components). For this purpose, JSON logger provides two very distinct fields out-of-the-box:

  • message: Field meant for meaningful non-sensitive functional messages such as “Request received from Customer with ID: 1234”
  • content: Field meant for raw data typically useful during development and test phases (e.g. payload, attributes, vars, etc.)

If somehow we are able to point our development team in the right direction and have them log valuable, non-sensitive data in the message field (keeping PII and PCI data out of it), and any raw troubleshooting data in the content field, then we can leverage the “disabled fields” feature to keep certain log fields (e.g. content) from being printed:

disabled fields screenshot

The above example effectively tells the JSON logger to skip any field defined here (you can add multiple comma-separated fields, e.g. content, message, etc.). Taking this a step further, we could assign an environment variable (e.g. ${logger.disabled.fields}), which in lower environments could be empty, but on stage and production could be set to “content.”
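For instance, assuming a property-file-per-environment setup (the property name follows the ${logger.disabled.fields} example above), the values could look like:

```properties
# dev.properties - lower environments print everything
logger.disabled.fields=

# prod.properties - hide raw payloads in production
logger.disabled.fields=content
```

This way the same application binary behaves differently per environment without any code changes.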

As I mentioned, the approach is not bulletproof and requires having conscious developers adhere to certain best practices. But hey, better than nothing, I hope!

How to use it?

Just like its predecessor, the JSON logger for Mule 4 has been published to my own Exchange:

JSON logger for Mule 4 icon

So for a quick test drive, you can get it using the same guest account as before:

User: guest-mule
Pass: Mulesoft1

However, if you truly want to leverage and/or customize the component, you should get the source code and install it on your own Exchange.

To make things much easier, in this release I’m also providing a deployment script as well as a template settings.xml maven configuration file.

1. Clone the repo

Make sure you switch to the mule-4.x branch.

2. Configure your settings.xml

Inside the folder template-files/ there is a file called settings.xml which can be used as a reference to configure your maven settings.

Inside this file you need to replace the following values:

  • CUSTOMER_EE_REPO_USER and CUSTOMER_EE_REPO_PASS are the credentials provided by MuleSoft Support for customers to access our private Nexus repositories
  • ANYPOINT_PLATFORM_USER and ANYPOINT_PLATFORM_PASS are the credentials you normally use to log into our Anypoint Platform. Just make sure you have the Exchange Contributor role granted.

Note: This assumes you already have Maven 3.5+ installed.
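For reference, the relevant <servers> section of settings.xml could look like the sketch below. The server ids are assumptions on my part and must match the repository ids declared in the project’s pom.xml (an Exchange2 repository id does show up in the deploy logs quoted in the comments further down this page):

```xml
<settings>
  <servers>
    <!-- MuleSoft private EE Nexus repository -->
    <server>
      <id>mule-ee-releases</id> <!-- hypothetical id; must match the pom -->
      <username>CUSTOMER_EE_REPO_USER</username>
      <password>CUSTOMER_EE_REPO_PASS</password>
    </server>
    <!-- Your Anypoint Exchange -->
    <server>
      <id>Exchange2</id>
      <username>ANYPOINT_PLATFORM_USER</username>
      <password>ANYPOINT_PLATFORM_PASS</password>
    </server>
  </servers>
</settings>
```

A 401 during deployment almost always means these credentials (or the id mapping) are off, as several readers discovered in the comments.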

3. Run the script: deploy.sh

Once you’ve done all the pre-work, all that is left is running the script under the base folder called deploy.sh and passing your Organization Id as an argument:

./deploy.sh <YOUR_ORG_ID>

The script will basically run the command mvn clean deploy to deploy the following projects into your own Exchange:

  • jsonschema2pojo-mule-annotations → this project generates all the SDK-specific annotations when you build the JSON logger connector
  • json-logger → the actual JSON logger

4. Add the connector to your project from Exchange

Once deployed, you can add the connector to your Mule project directly from Exchange (e.g. via the Mule Palette in Anypoint Studio).

5. Best practices for the win!

Once you start playing with the connector, you will see the following configuration by default:

JSON Logger Config screenshot

The default properties assumed for those are ${json.logger.application.name}, ${json.logger.application.version}, and ${mule.env}.

The rationale behind this is:

a. Your Maven pom.xml file already defines:

  • artifactId: equivalent to your application name
  • version: equivalent to your application version

Assuming you are following good SDLC practices, chances are you are already maintaining your pom.xml; thus, hard-coding or defining those exact same values elsewhere makes no sense. The recommended way is to have your properties point to the pom.xml values generated at build time by leveraging a Maven feature called “resource filtering.” In order to enable this, the <build> snippet inside your pom.xml should look like:
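The original post showed this snippet as a screenshot; as a sketch (the exact layout may differ from the author’s, but this is the standard way to enable Maven resource filtering):

```xml
<build>
  <resources>
    <!-- Replace ${...} placeholders in resource files at build time -->
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
</build>
```

With filtering enabled, a properties file under src/main/resources can then map the logger properties to the pom values, e.g. json.logger.application.name=${project.artifactId} and json.logger.application.version=${project.version}.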

b. While disabledFields doesn’t have a default value pointing to an environment variable, it might be a good idea to have it do so (e.g. json.logger.disabledFields) so that you can alter the logging behavior at runtime.

c. mule.env is an environment variable you should always aim to have by default (configured at the runtime level) so that you can easily tell which environment your application is running on.

Ok but how do I really, really USE it?

As promised in my previous article, in this post I will also show some basic reporting use cases leveraging Splunk because, let’s admit it, everyone likes nicely colored charts. You can find the source code for all the dashboards below under: /template-files/splunk.

Note: A huge thanks to one of our brilliant Solutions Architects, Rahul Dureja, for giving me a bunch of dashboard ideas and a crash course on Splunk SPL 🙂

Search: Out-of-the-box indexing

Well… this is it! The very reason why we are doing this in the first place. Brilliant data aggregation platforms like Splunk can easily understand our JSON data structures, index the fields within, and ultimately empower us to create extremely useful dashboards as well as provide advanced search capabilities.

JSON logger configuration code screenshot

As you can see, without any customizations, Splunk is already smart enough not only to index the main fields (shown in red) but also to pretty-print the content field, which contained a “stringified” JSON.

Dashboard: Data visualization

One of the most common requirements, particularly for DevOps, is to be able to monitor your API response times and number of transactions. The following dashboards can help with that:

  • Response times



Logger Response Times screenshot
  • Number of transactions



API Charts Dashboard screenshot

Dashboard: API Summary

Another great way to leverage the rich metadata we now have available, such as tracePoints, correlationIds, and locationInfo, is by creating advanced summary dashboards that can provide in-depth analytics such as:

  • API Calls per Resource
  • Recent 5 Successful Transactions
  • Recent 5 Failed Transactions
  • Unique Messages Received
  • Number of Successful Calls
  • Number of Errors
  • All API Transactions
API dashboard screenshot
API transactions screenshot

Dashboard: Record Trace

Another common requirement might be to see all the events related to a specific transaction end-to-end. This dashboard provides a table visualization of all the events associated with a specific correlationId.



record trace screenshot

But how do I get data into Splunk?

I’m by no means a Splunk expert (not even a novice, for that matter), but just to get you started, these are the two typical scenarios, depending on your Mule deployment architecture:

On-premises runtimes

When you have control over your servers, Splunk recommends the use of forwarders. Basically, it only takes a one-time configuration on the servers for the logs to be forwarded to Splunk, and aside from the server configuration, the operation is non-intrusive from the Mule application’s perspective.


CloudHub workers

This scenario is a bit trickier, as we don’t have control over the cloud workers, which renders the forwarder useless. However, another common mechanism for sending logs to Splunk (for Java applications) is the Splunk Log4j appender (which internally leverages Splunk HEC).

As soon as we explore this option, we hit the next roadblock: by default, CloudHub overrides the application’s specific Log4j configuration and replaces it with its own.

Luckily, for such scenarios there is a feature called “Custom Log Appender” (disabled by default) that can be enabled by opening a support ticket on the customer portal.

Once we have the feature enabled, we need to mark the checkbox:

disable cloudhub logs screenshot

This tells CloudHub to leverage the log4j2.xml configuration provided with the app instead of replacing it with its own CloudHub configuration.

Note: Even though the message says “CloudHub logs have been disabled,” we still have the ability not only to publish log events to our own Splunk instance but also keep sending them to CloudHub default logging infrastructure (either as a backup or until we fully transition to Splunk).

Lastly, to help you get a jump start, I’m also providing a very basic log4j2.xml example that forwards logs to both CloudHub and Splunk. In order for this to work, we have to:

1. Provide splunk.server environment variable with the Splunk server information.

2. Replace YOUR_SPLUNK_TOKEN with your own provisioned Splunk HTTP Event collector token (granted this should probably be an environment variable as well, but you get the idea).

3. Add the Splunk Log4j appender dependency to your Mule application:

4. Add the Splunk Repository:
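Putting steps 1–4 together, a minimal sketch could look like this. The SplunkHttp appender and the splunk-library-javalogging coordinates come from Splunk’s Log4j 2 HEC integration; the version, index name, and repository id below are assumptions you should adapt to your setup:

```xml
<!-- log4j2.xml: Splunk HEC appender, to be used alongside the default CloudHub appender -->
<SplunkHttp name="SPLUNK"
            url="${sys:splunk.server}"
            token="YOUR_SPLUNK_TOKEN"
            index="main" />

<!-- pom.xml: Splunk Log4j appender dependency -->
<dependency>
  <groupId>com.splunk.logging</groupId>
  <artifactId>splunk-library-javalogging</artifactId>
  <version>1.7.3</version>
</dependency>

<!-- pom.xml: Splunk repository hosting the appender artifact -->
<repository>
  <id>splunk-artifactory</id>
  <url>https://splunk.jfrog.io/splunk/ext-releases-local</url>
</repository>
```

Remember to reference the SPLUNK appender from your loggers section as well, so events actually flow to both destinations.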

How to customize it?

Like its predecessor in Mule 3, the whole rationale behind the JSON logger is to be a component customizable through metadata changes in the provided JSON schemas, without having to really know much about the SDK itself.

In order to customize the JSON output data structure, we pretty much follow the same concepts as described here. However, a big change introduced in this version is that, for global expressions, we no longer need to define the field in both loggerConfig.json and loggerProcessor.json. Instead, using the annotations described below, everything defined at the config level that we want printed in the output logs needs to be part of the nested globalSettings object inside loggerConfig.json.

Gotcha: If you define expressions inside the global config, make sure the results of these expressions are fairly static throughout the lifecycle of the Mule app at runtime (e.g. appName, appVersion, environment). If you define something dynamic like correlationId (which, in theory, changes per request), then the SDK will create a new instance of the global config for every new value, which will end up causing memory leaks.
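As an illustration, a config-level field inside loggerConfig.json could be declared like the sketch below. The globalSettings object, the sdk annotation, and ${mule.env} come from this post; the surrounding field names are hypothetical:

```json
{
  "globalSettings": {
    "type": "object",
    "properties": {
      "environment": {
        "type": "string",
        "sdk": {
          "default": "${mule.env}",
          "summary": "Environment the application is running on"
        }
      }
    }
  }
}
```

Because environment resolves to the same value for the whole lifecycle of the app, it is a safe candidate for a global expression per the gotcha above.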

Supported configurations

In order to tell the JSON logger how each field should be treated, we need to use the “sdk” object. Inside this object we can use the following attributes:

  • default → As the name implies, it allows you to define a default value. It also implicitly makes the field optional, so the user isn’t required to input a value.



priority screenshot
  • required + example → Unless specified otherwise, all fields are considered “mandatory.” You can also explicitly mark a field as required == true. When a field is required, it’s very helpful to provide an example text that points developers to the expected data.



add log entry screenshot
  • displayName → Specifies the name to be displayed in the Studio/flow designer UI.


message log entry screenshot
  • summary → Provides information about how to populate/use a specific field.



trace point screenshot
  • isContent → For scenarios where we need to log more complex data (e.g. based on expressions, payloads, attributes, etc.), we can set the isContent attribute to indicate to the SDK that the input will be a full-fledged DataWeave expression. The JSON logger will then internally attempt to “stringify” the result of the expression and log it as part of the JSON object.



content output screenshot
log start screenshot
  • isPrimaryContent → This option only exists for scenarios where you need more than one content field, as the SDK needs to be told which field is the primary one.
  • expressionSupport (NOT_SUPPORTED / SUPPORTED / REQUIRED) → This field controls how the UI is generated for a specific field or object, e.g. if we want fields to be shown explicitly in the UI, we need to set the value to NOT_SUPPORTED.



global settings screenshot
  • placement → In SDK, there are two ways to organize where things are displayed:
    • order → Indicates the order in which fields will be displayed



general global setting and JSON output screenshot
  • tab → Allows you to display fields in different tabs.



correlation ID screenshot
  • parameterGroup → Allows you to visually group a set of related fields.



global setting screenshot
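Tying the attributes together, a single field definition in loggerProcessor.json could combine several of them. This is a sketch; only the sdk attribute names come from the list above, and the message field mirrors the example used earlier in this post:

```json
{
  "message": {
    "type": "string",
    "sdk": {
      "required": true,
      "example": "Request received from Customer with ID: 1234",
      "displayName": "Message",
      "summary": "Meaningful, non-sensitive functional message",
      "placement": {
        "order": 1,
        "tab": "General"
      }
    }
  }
}
```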

The end?

See the latest updates on the JSON Logger by checking out this blog.

I always welcome feedback and feature requests, but I hope the current state will give you a jump start on the wonders of Mule 4 and JSON logging!

GitHub Repository

Check out the GitHub repository for more information.


67 Responses to “JSON logging in Mule 4: Getting the most out of your logs”

  1. What a great detailed post!
    The additions are great and this provides so much more feat.

    Main questions, why isn’t the JSON logger in the Mulesoft public exchange?
    Any plans on doing that?
    Can you provide some insights regarding support?

    Again, great work!

    • Hi Joost, so glad that you liked it!

      To your point, hosting assets in Mulesoft public exchange requires a much more formal (and time-consuming) process and support as you can imagine. Given I tend to use this component quite a bit I try to stay on top of bugs but besides that and having the code open-sourced, thats as much support as you will get today 🙂

    • very helpful comment….thanks a lot you saved lot of my time.

  2. I have trying to follow the steps but it fails with this message
    I have verified my role and it is Exchange Contributor .

    I get the same error from the bash script or when I follow the readme file.

    • Did you configure your settings.xml properly?

      You should have an entry similar to:


      • Hi Andres,

        I got the same unauthorized error when running the deploy script. I have the repository settings configured correctly. I opened a support case (00212833), and they were also able to recreate the same error. I’m wondering if there was a fix for the previous post, or I will have to wait for support to figure out the issue?


      • Hi Andres,

        I had the same “unauthorized” issue with the install script. I have the correct repository configuration. I was wondering if there was an answer to get it working for the original post. I opened a support case (00212833), and they were able to duplicate the exception.


  3. Hey Andres, Loved the article. Can we push logs to ELK using this ? If so can you try to give me a idea. How to achieve this.

  4. This is a great addition to the Mule. Splunk is an extremely expensive tool – I second the comment above – we need more support and examples of using the ELK stack to monitor Mule logs?

    • Hi Drew,

      One of my peers (thanks Biswa Mohanty!) provided the following log4j2 snippets for ELK

      Add the following under <Appenders> (closing tags included for valid XML):

      <Socket name="Socket" host="ELK_HOST_IP" port="ELK_PORT">
        <JsonLayout compact="true" eventEol="true" />
      </Socket>

      Add the following under <Loggers>:

      <AsyncRoot level="INFO">
        <AppenderRef ref="Socket" />
      </AsyncRoot>

  5. Good morning, i have been developing a custom connector in java to treat logs and i was wondering how you managed to get the rootContainer.
    Thank you very much

  6. Readme says: “Mule supported versions Mule 3.4.x, 3.5.x Mule 3.4.1”
    Any plans to have Mule 4 version?

    Mule 4 does not have DevKit, only SDK, I see dependencies in code on a lot of version 3 stuff.

    Anyway, I tried to “mvn clean install” in only Mule 4 environment and it doesn’t compile. Too many dependencies to Mule 3 modules.

  7. Thanks Andres,

    This is an extremely useful addition. Wondering it supports the Updates to Log level from Runtime Manager ?

    I tried updating the log level to ‘DEBUG’ for the package ‘org.mule.extension.jsonlogger.JsonLogger’ but it doesn’t seem to work.

    However, it does work in the studio when we update the log level in log4j2 file for the same package.

  8. Hi Andres,

    If I change the priority type to DEBUG or ERROR its not logging those details configured in Could you please assist.

    I’m not sure if i missed out anything.

    Prabhu S

  9. Hi Andres,

    Our Splunk doesn’t like the format of the logging events (doesn’t recognize the output as json, doesn’t pretty print the stringified fields etc).

    Was there any changes needed to the log4j config to make this more easy for Splunk to understand that you are aware of? Right now we are just using the out-of-the-box log4j that comes with a new Mule project.


  10. Hi Andres,

    Please let me know how to work with DEBUG priority type and i’m not sure if I missed anything.

    Changed the runtime property in DEBUG
    Changed the json logger priority property to DEBUG
    but it is not working.


    • That’s odd… I just tried adding this in cloudhub > logging

      DEBUG > org.mule.extension.jsonlogger.JsonLogger

      and if you do it in studio, just edit log4j2.xml to:

      <AsyncLogger name="org.mule.extension.jsonlogger.JsonLogger" level="DEBUG"/>

  11. Hi there! thanks for the tool… I may be missing something but when I log into the exchange (from studio 7) using the creds you mention I see the connector but get a 401 when trying to add it… can you share some light?

    • Probably better if you just install it in your own exchange with the provided deploy.sh script

  12. Do you have any solution to this? I am facing same issues

  13. Hi Andres,
    I loved this article. I am using Mule 4. I am getting content in a string format elapsed with double quotes (” “). I want payload in a JSON format. Please help me.

    • Hi Phani,

      The problem is that JSON logger allows you to define a basic JSON structure but this structure needs to be defined at design time. Anything dynamic (e.g. the payload/attributes) you add to a content field for instance, will be treated as a string. It will be up to your log aggregation platform (e.g. Splunk) to parse and expand the strigified JSON for visualization

  14. Is there any ETA for adding Anypoint MQ forwarding to the Mule-4 version of the connector?

  15. Hi Andrew,

    I love this artical and thank you. In content section I am getting payload & attributes in a string format elapsed with double quotes but I want in a json formate. Can you please help me.

  16. Hello,
    I downloaded your json logger from exchange, but when I run my munit tests, it gives an error on the group id.
    Can you tell me how to fix it please ?

    • Would be better if you deploy to your local exchange by using the deploy.sh script provided

  17. Hello,
    I’m new to mulesoft, After finishing step 1 and 2.
    Can you tell me how to Run the script: deploy.sh ? where?

    • Hi Patricia,

      Basically just checkout the entire repo, go into the root of the repo and just run the script as indicated in the blog (passing your orgId and creds)

  18. Hi Andres –

    When I run the deploy script I get the following error:
    [INFO] ————————————————————————
    [INFO] Total time: 3.735 s
    [INFO] Finished at: 2019-08-16T01:48:33Z
    [INFO] ————————————————————————
    [ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project jsonschema2pojo-mule-annotations: Failed to deploy artifacts: Could not transfer artifact xxxxxxxxxxxxxx:jsonschema2pojo-mule-annotations:jar:1.0.0 from/to Exchange2 (https://maven.anypoint.mulesoft.com/api/v1/organizations/xxxxxxxxxxx/maven): Failed to transfer file https://maven.anypoint.mulesoft.com/api/v1/organizations/xxxxxxxxxxx/maven/xxxxxxxxxxxxxxxxx/jsonschema2pojo-mule-annotations/1.0.0/jsonschema2pojo-mule-annotations-1.0.0.jar with status code 401 -> [Help 1]

    I suspect my credentials are incorrect for:


    It keeps looking for Exchange2. I changed the shown above and still have the problem. What should the value be for ?

    Any suggestions appreciated.

  19. I seem to be having the same problems. Suggestions appreciated for a fix. Thanks!

  20. Hope this helps others who may have struggled with Exchange authentication problems … be sure and do the following:

    My server settings in ~/.m2 needed to match the server settings in /json-logger/template-files/settings.xml

    Once I updated ~/.m2/settings.xml it worked.

  21. Hello Andrew,

    I am trying to publish this asset to my exchange using the bash script provided unfortunately getting below error message. Can you please provide your input.

    Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project jsonschema2pojo-mule-annotations: Failed to deploy artifacts: Could not transfer artifact 86dea295-617f-467d-9be2-58d704258829:jsonschema2pojo-mule-annotations:jar:1.0.0 from/to Exchange2 (https://maven.anypoint.mulesoft.com/api/v1/organizations/86dea295-617f-467d-9be2-58d704258829/maven): Failed to transfer file https://maven.anypoint.mulesoft.com/api/v1/organizations/86dea295-617f-467d-9be2-58d704258829/maven/86dea295-617f-467d-9be2-58d704258829/jsonschema2pojo-mule-annotations/1.0.0/jsonschema2pojo-mule-annotations-1.0.0.jar with status code 401 -> [Help 1]

  22. I do not have these below details because i am learning on installing this json logger on my training account.

    CUSTOMER_EE_REPO_USER and CUSTOMER_EE_REPO_PASS are the credentials provided by Mulesoft Support for customers to access our private Nexus repositories

  23. Hi Andres,
    Just curious…why did you have to use the apis provided by Google?


  24. Hi Andres,

    The content is printing with “\” and “\n”. Can you provide some solution or insight?

    “content” : “{\n \”payload\”: {\n \”component\”: \”json-logger:logger\”,\n \”fileName\

  25. Hi,

    Great Post.

    I am getting the following error. Caused by: org.mule.runtime.api.exception.MuleRuntimeException: org.mule.runtime.deployment.model.api.DeploymentInitException: PropertyNotFoundException: Couldn’t find configuration property value for key ${json.logger.application.version}

    Could you advise what the issue is ?


    • Ideally you can populate those values from maven pom.xml

      Take a look at the maven section in the blogpost and make sure you are using the maven feature called “resource filtering”.

  26. I keep getting a JXB error when trying to publish to Exchange:

    Failed to execute goal org.bsc.maven:maven-processor-plugin:2.2.4:process (process) on project json-logger: Error executing: java.lang.NoClassDefFoundError: javax/xml/bind/JAXBContext: javax.xml.bind.JAXBContext -> [Help 1]

    • You are probably using a later version of java that no longer comes with jaxb libraries. I was able to fix it and the project probably should be updated also to the latest parent dependancy.


  27. For 4x Version: Have you tried ‘json.logger.disabledFields=content’? I set it in my props file but doesnt seem to work. content field is still rendered in the json. Also, on the config my Disabled Fields seems to want an expression. Yours above looks to take a String. I can crack open the source but just wanted make sure it wasnt user error.

    • Hi there, in order to work with disabledFields you need to either:

      – Set the fields as a DW expression: e.g. for content field it would be “content”
      – Or, set the fields as a property: e.g. p(‘json.logger.disabledFields’) which would read a property such as this json.logger.disabledFields=content

      Ideally you want to use properties so that you can disable the fields per environment (e.g. Production)



  28. Great Job Andres Ramirez !!!!!

  29. Hi Andres,
    I want to disable filed “queueName” which is part of content in log message. for example message structure that dispalys in log is
    “content”: {
    “attributes”: {
    “queueName”: “abc”

    I tried the properties
    logger.disabled.fields= “content.attributes.queueName” or logger.disabled.fields= “queueName”
    but still queueName is not getting disable please suggest how can disable particular field in complex structure.

    • Hi Mukesh,
      Disabled fields is a feature meant to disable root level fields defined in the loggerProcessor.json schema. e.g. content field (or any other custom field you may have added) but not for data inside the content field. However, I’m about to release a new version (2.0.0) which will include a new feature called DataMasking for obfuscating sensitive (JSON only) data inside content fields (e.g. you will pass the field names or json paths of data you want masked) so stay tuned!

      • Hi Sir ,

        How to use masking .
        what will be the syntax for it ??

        Will you publish a document also for using the latest json logger ?

  30. Great tool – I’m curious on the Pretty Print dropdown for Mule 4, expression is one of the options, and I’m wondering what that would be used for? To define our “own” pretty print?

    • Hi Mike,
      The rationale for the expression field would be if you want to pass the value through a property file (e.g. ${logger.prettyprint})

  31. Hi Andres,

    Is there any new version for this which supports logs forwarding to AnypointMQ with some additional configuration to logger ? I have seen in some other blog of yours, probably for Mule3 it’s there.

    • Hi Dileep,
      I’m about to release a new version (2.0.0) which will include a feature called “External destinations” which will allow you to publish log events to either: Anypoint MQ, JMS or AMQP (RabbitMQ). Stay tuned!

  32. Hey Andres, this is a great component. One query – What is purpose of “Logger Scope” ?

    • Thanks Hanubindh!

      The scope component is meant to calculate elapsed times specific to a particular part of your flow logic or connector (e.g. calling another API) so it will print a before and after log entry with a calculated “scopeElapsed”. I’m working on a new blogpost to highlight all the new features in v2 🙂

  33. Andres,
    Thank you for this detailed explanation. This adds a lot of value to the Flows that I am working on. Is it possible for you to give us some insights on “scopes” to calculate elapsed times Etc. ?

  34. Hello Andres

    Great article and I am successfully configured JSON Logger in my application and am getting the logs as well in Splunk. But I dont see the full log. For example I have 11 records from a database coming in logs but in Splunk am seeing only first few lines. What could be the issue. Please revert. See the sample data below. I am getting only this much. Not the full information from console.

    “timestamp” : “2020-06-04T06:58:01.730Z”,
    “content” : {
    “payload” : [ {
    “planeType” : “Boeing 787”,
    “code2” : “0001”,



    • It’s easier for Splunk to digest the logs if you send them in one-line as opposed to “pretty print”. Give it a shot and let me know!

  35. Thanks, this is a great article, thank you for publishing.

    A few questions:
    • Anypoint Supported Runtimes – Are there specific runtimes that would only be supported by the Mule 4 logging connector?
    • On-Premise / Cloud-Hub / Runtime-Fabric (RTF) Compatibility – has the Mule 4 logging connector been tested across all potential Anypoint deployment environments?
    • Anypoint Public Exchange – Are there plans to publish the Mule 4 logging connector to the Public Anypoint Exchange?

  36. Hi Andres : Can I use this JSON logger to write the logs to the MULE_HOME/logs/.log of my server if I don’t want to forward my logs to anypoint MQ or a JMS destination in the “Destination” configuration of the connector ? Assume that I am not using any appenders in log4j2.xml and its a on premise installation of Mule runtime on my physical servers.

  37. Something I’m noticing when upgrading from json-logger 1.0.7 to 2.0.1.

    Previously, it was logging the correlation id to both correlationId and contextMap.correlationId fields. Now it only logs to the correlationId field. The issue is that mule components internally log the correlation id to contextMap.correlationId. Previously we were able to still group them together in DataDog by contextMap.correlationId but now the two log sources are fragmented.

    We have a workaround by making a code change to the json-logger source code, but it would be ideal to have a solution that’s more adaptable to future releases.

    Is there currently a way to control how the output of the correlation id is mapped that I’m just missing? Or is there anything planned for future releases that would reintroduce that behavior from the earlier version? Thanks.