
Collectively, we all have a part to play in saving our planet. Even as developers, architects, and product owners, we have the power to design, build, and operate applications with sustainability interwoven into every stage of the SDLC.

My goal is to show you how you can adopt sustainable engineering practices with MuleSoft by breaking it down into two elements: 1) how Mule applications themselves can be optimized, and 2) how the underlying infrastructure can be better utilized to reduce wasted energy. But first, let’s look at some of the wider benefits.

Benefits of sustainable engineering


Reframing sustainability as one of the key priorities in technology is not only a step in the right direction towards reducing our environmental impact, but it also fosters a whole spectrum of positive outcomes, such as: 

  • Reduction of IT operational costs: Designing applications to be more performant and optimizing the utilization of your underlying infrastructure naturally yields cost savings.
  • Improved company image: Be recognized as a green titan driving positive environmental impact. 
  • Attracting the best talent in the market: Younger generations who are entering the job market tend to have strong environmental values and opt for employers that are committed to sustainability.

Sustainable architecture and application development

Now that we’ve highlighted why adopting sustainable engineering practices is important, the first question is: “how can we get started?” When it comes to integration architecture, application design, and the iterative process of implementation and testing, you can reap tangible results by incorporating some of these suggested green best practices.

Switch to Mule 4

Mule 4 Runtime is built upon reactive programming, which not only improves application performance but also makes development easier. Reactive programming combines asynchronous data streams with non-blocking operations, which means that your Mule application does not have to wait for IO-intensive operations to complete before doing other tasks, as mentioned in this whitepaper.

In Mule 3, you have to manually assign a processing strategy to each flow; in Mule 4 this is automatically self-tuned to optimize flow execution and thread switching. Threads are no longer left idle, so your application can make the most of the resources available. The net effect is greater concurrency and throughput with reduced overall system overhead.
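A minimal sketch of the difference (flow names and contents are illustrative): in Mule 3 each flow carried an explicit processing strategy, while in Mule 4 the attribute is gone and the runtime tunes threading itself.

```xml
<!-- Mule 3: a processing strategy had to be chosen per flow -->
<flow name="orderFlow" processingStrategy="asynchronous">
    <!-- ... message processors ... -->
</flow>

<!-- Mule 4: no processingStrategy attribute; the runtime self-tunes.
     maxConcurrency is only an optional cap, not a tuning requirement. -->
<flow name="orderFlow" maxConcurrency="4">
    <!-- ... message processors ... -->
</flow>
```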


Implement a caching strategy

Implement a caching strategy in your application to reduce load on the Mule instance and increase message processing speed. This is particularly powerful when the app receives repeated requests for the same information or needs to process large repeatable streams. MuleSoft offers the Cache scope and HTTP Caching policy, which allow you to easily configure and implement caching in Mule apps.
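As a sketch of what the Cache scope looks like in practice (the store, flow, and endpoint names here are hypothetical), repeated requests within the TTL are served from the object store instead of hitting the backend again:

```xml
<!-- Object store with a 5-minute TTL so cached entries expire -->
<os:object-store name="productStore" entryTtl="5" entryTtlUnit="MINUTES"/>

<ee:object-store-caching-strategy name="productCache" objectStore="productStore"/>

<flow name="getProductsFlow">
    <http:listener config-ref="httpListenerConfig" path="/products"/>
    <!-- While a cached entry is valid, the expensive backend call is skipped -->
    <ee:cache cachingStrategy-ref="productCache">
        <http:request config-ref="backendConfig" method="GET" path="/products"/>
    </ee:cache>
</flow>
```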

Slim down your Mule applications

Compress large files or images in your Mule apps by leveraging the Compression module and selecting either the Zip or GZip compressor strategy. Avoid sequences of set-variable/set-payload components, as each one generates a new event; instead, combine them in a single ee:transform operation. Moreover, remove useless metadata or variables during flow execution once they are no longer needed, for example by limiting variable scope or using the Remove Variable transformer to reduce memory consumption.
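The set-variable/set-payload consolidation can be sketched like this (field and variable names are invented for illustration):

```xml
<!-- Instead of separate components, each creating a new event ... -->
<set-variable variableName="customerId" value="#[payload.id]"/>
<set-payload value="#[payload.order]"/>

<!-- ... a single Transform Message sets both in one pass -->
<ee:transform>
    <ee:message>
        <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
payload.order]]></ee:set-payload>
    </ee:message>
    <ee:variables>
        <ee:set-variable variableName="customerId"><![CDATA[%dw 2.0
output application/java
---
payload.id]]></ee:set-variable>
    </ee:variables>
</ee:transform>

<!-- Drop the variable as soon as it is no longer needed -->
<remove-variable variableName="customerId"/>
```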

Next, take advantage of connectors or APIs that allow you to specify exactly what data you need to retrieve (e.g., if you only need five of the possible 25 fields, request those five fields only). As an example, the Salesforce and Database connectors (among many others) allow you to build queries that narrow down the desired dataset by filtering and specifying fields. Similarly, OData and GraphQL APIs give you the ability to request the data you need — nothing more — and consequently slim down the payload size.
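Assuming hypothetical connector configurations (dbConfig, sfConfig) and schemas, scoped queries might look like this:

```xml
<!-- Database connector: request only the fields you need, never SELECT * -->
<db:select config-ref="dbConfig">
    <db:sql>SELECT id, name, email, city, status FROM customers WHERE status = :status</db:sql>
    <db:input-parameters><![CDATA[#[{ status: "ACTIVE" }]]]></db:input-parameters>
</db:select>

<!-- Salesforce connector: a SOQL query scoped to four fields -->
<salesforce:query config-ref="sfConfig">
    <salesforce:salesforce-query>SELECT Id, Name, Phone, Industry FROM Account WHERE Industry = 'Energy'</salesforce:salesforce-query>
</salesforce:query>
```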

On the other side of the coin, when designing APIs it is best practice to ensure the query and URI parameters allow your consumers to filter data and request only what they need. Implementing pagination is particularly important if a single request could return thousands of records, causing a network traffic spike and consuming significant processing power on both the client and the server. Learn more about how to implement pagination patterns with MuleSoft.
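One way pagination can surface in a flow (table, flow, and parameter names are illustrative): read limit and offset query parameters, apply sensible defaults, and cap the page size server-side so a single call can never pull the whole table.

```xml
<flow name="listOrdersFlow">
    <http:listener config-ref="httpListenerConfig" path="/orders"/>
    <db:select config-ref="dbConfig">
        <db:sql>SELECT id, status, total FROM orders ORDER BY id LIMIT :limit OFFSET :offset</db:sql>
        <db:input-parameters><![CDATA[#[{
            // default to 50 records, never more than 200 per page
            limit:  min([(attributes.queryParams.limit default 50) as Number, 200]),
            offset: (attributes.queryParams.offset default 0) as Number
        }]]]></db:input-parameters>
    </db:select>
</flow>
```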

Get insight into your application’s consumption

During the development stage, incorporate dynamic code analysis or use performance monitoring tools to understand how the Mule app consumes resources, analyze energy profiles, and identify which parts of the code are least efficient. To get started, you can complement performance test best practices by monitoring the CPU, memory, disk, and network resources consumed by the Mule runtime engine via sysstat. There are a whole host of other tools for application performance monitoring as well, like VisualVM or YourKit Java Profiler.

Deployment and operation

Once your application has evolved through the early stages of the SDLC, where you deploy it and how you choose to operate it will impact its carbon footprint. Creating a mechanism to report back on application health and server utilization is critical, as it means you can fine-tune application performance and server efficiency while better serving your end users.

Whether you choose to deploy to the cloud or host the application yourself, there are several factors you should take into consideration.


Deploy to the cloud

In general, choosing PaaS and iPaaS is a greener option due to the virtualization of servers and markedly improved energy efficiency through the sheer scale of the infrastructure. MuleSoft’s CloudHub leverages AWS’ data centers, which are 3.6x more efficient than the average enterprise data center. Furthermore, when companies manage their own infrastructure, server utilization is in most cases as low as 15-20%, and the unused capacity ends up as wasted energy. Opting to deploy to the cloud means you can benefit from optimized IT resource allocation and reduced overall operational costs.

In relation to operation, CloudHub makes it easy to monitor your applications: you can track a rich set of metrics via out-of-the-box functionality or define your own custom metrics. You can visualize your application’s performance with the Anypoint Runtime Manager dashboard, or use Anypoint Monitoring for more in-depth metrics and create alerts when a certain threshold is exceeded or a specific event occurs. The Runtime Manager dashboard is limited to CPU, memory usage, and number of messages, but via Anypoint Monitoring you can track metrics specific to the JVM (e.g., thread count, committed virtual memory, JVM uptime) or infrastructure (e.g., JVM CPU % utilization, system CPU % utilization, system memory).

With this data shining a light on your application performance and virtual server utilization, you can fine-tune the vCore size and number of workers to optimize resource allocation. For example, if your Mule application at its peak is only consuming a few percent of the vCore’s CPU, consider using a smaller vCore.
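If you deploy with the mule-maven-plugin, right-sizing can be captured in the CloudHub section of your pom.xml — a sketch with illustrative application and environment names:

```xml
<plugin>
    <groupId>org.mule.tools.maven</groupId>
    <artifactId>mule-maven-plugin</artifactId>
    <configuration>
        <cloudHubDeployment>
            <muleVersion>4.4.0</muleVersion>
            <applicationName>orders-api</applicationName>
            <environment>Production</environment>
            <!-- a 0.1 vCore worker is often enough for low-CPU apps -->
            <workerType>Micro</workerType>
            <workers>1</workers>
        </cloudHubDeployment>
    </configuration>
</plugin>
```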


Manage your own infrastructure

If you have chosen to manage the infrastructure yourself, the operation and maintenance of your Mule applications can still benefit from resource optimization and reduced energy waste through a hybrid or Anypoint Runtime Fabric (RTF) deployment.

With a hybrid deployment, your Mule servers are registered with the Runtime Manager agent, so you are still able to set application alerts and monitor your servers, for example when they reach a certain CPU usage threshold. You can also deploy more than one application on a given server (which is not possible on CloudHub) and therefore better utilize your underlying infrastructure, especially if you have installed the Monitoring Agent to collect richer data.

When selecting RTF, you can deploy multiple containers on one server while enjoying application separation at the operating-system level. This gives you more flexibility and means you can make the most of the server’s resources. Containers running at higher utilization use much less energy than virtual machines at lower utilization for the same workload. MuleSoft provides two options for an RTF deployment:

  • Self-Managed Kubernetes: Install on an existing Kubernetes environment that you operate and manage (Amazon EKS, Azure Kubernetes Service, and Google Kubernetes Engine).
  • VMs/Bare Metal: MuleSoft provides the required software infrastructure components, including Docker and Kubernetes, to install on virtual machines that you operate and manage.
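On RTF, the same right-sizing idea applies at the container level: reserve only the CPU and memory each replica needs. A sketch using the mule-maven-plugin’s Runtime Fabric section (the target, application, and exact values are illustrative):

```xml
<runtimeFabricDeployment>
    <muleVersion>4.4.0</muleVersion>
    <applicationName>orders-api</applicationName>
    <target>rtf-cluster-prod</target>
    <environment>Production</environment>
    <replicas>2</replicas>
    <deploymentSettings>
        <!-- reserve only what the app needs; unused reservation is wasted capacity -->
        <cpuReserved>100m</cpuReserved>
        <memoryReserved>700Mi</memoryReserved>
    </deploymentSettings>
</runtimeFabricDeployment>
```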


By reading this blog, I hope you feel empowered with knowledge and tools to design, build, deploy, and operate Mule applications with sustainable engineering principles in mind. This is just the beginning of my journey to explore how MuleSoft can shape sustainable behaviors in technology and identify strategies to reduce our carbon footprint. 

If you want to continue learning and understand how else you can improve your Mule applications, check out these recommendations on how to maximize application process performance.