Hello from MuleSoft’s performance team!
This post describes a real-world tuning example in which we worked with a customer to optimize their Mule ESB application.
A customer presented us with an application intended to act as a proxy to several endpoints. As such, it needed to be very lightweight, since the endpoints introduced their own latency; the application itself had to provide high throughput with minimal latency.
This real-world example shows how we helped the customer tune their application from a number of angles.
Performance always matters
An application and its environment should be designed with performance in mind. The application, the server it runs on, the database it communicates with, the network it communicates over: all of these elements should be performant. Creating an efficient, elegant mechanism is not only important for a business, but a matter of skill and pride for engineers.
Although true, perhaps that is not the answer one is looking for.
If you’ve ever worked on performance issues with an I/O-intensive app, there is a good chance you already know that application performance degrades when the disks are stressed. This fact is usually well known, but the reasons behind it aren’t always clear. I’d like to clarify what’s going on behind the scenes.
In a typical scenario, when data is written to a file, it is first written to a memory area reserved as the page cache.
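To make that write path concrete, here is a minimal Python sketch (the filename is just an illustration). A `write()` call moves data from the user-space buffer into the kernel's page cache; the data only reaches disk later, when the kernel flushes dirty pages, unless we force the flush ourselves with `fsync`:

```python
import os

# Write some data: it first lands in the kernel's page cache.
with open("example.txt", "w") as f:
    f.write("hello, page cache\n")  # user space -> page cache (not yet on disk)
    f.flush()                       # drain Python's user-space buffer
    os.fsync(f.fileno())            # ask the kernel to push dirty pages to disk
```

Without the `fsync`, the kernel is free to keep the dirty pages in memory and write them back at its own pace, which is exactly why heavy writers can stress the disks long after the application thinks the write is done.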
Today I will introduce our performance test of the Batch Module introduced in Mule’s December 2013 release. I will guide you through the test scenario and explain all the data collected.
But first, if you don’t know what batch is, please read the great Batch Blog from our star developer Mariano Gonzalez; for anything else, there is also the documentation.
It’s Saturday night. You realize you don’t have your cell phone and won’t be able to check on your fulfillment system. Chuckling, you remember without nostalgia the electric panic that used to set in over such a conundrum. Now you don’t give it a moment’s thought. After the weekend you arrive to work and try to avoid that blinking little red light, always in the periphery, nagging from the telephone set. Twenty-three more irate complaints demanding tedious,
Do you want to share properties between Mule instances, or just between different flows within Mule? Then Mule session properties are what you are looking for.
Once you have a working Mule ESB application you may be wondering how fast it can run. Here we will discuss a simple method for measuring the throughput of your application using Apache JMeter.
Bear in mind there are many ways to improve performance (simple changes can yield great performance boosts). We will explore them in greater detail in a follow-up blog post covering Mule application tuning.
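As a taste of what such a measurement produces, here is a hedged sketch of computing throughput from a JMeter CSV results file (a `.jtl`). The sample data and the `throughput` helper are illustrative, not part of JMeter itself; the idea is simply samples divided by the elapsed time between the first and last request timestamps (which JMeter records in milliseconds):

```python
import csv
import io

# Hypothetical excerpt of a JMeter CSV results file (.jtl): one row per
# sampled request, with 'timeStamp' in milliseconds since the epoch.
sample_jtl = """timeStamp,elapsed,label,success
1000,12,HTTP Request,true
1250,10,HTTP Request,true
1500,11,HTTP Request,true
2000,9,HTTP Request,true
"""

def throughput(jtl_text):
    """Requests per second over the span covered by the results."""
    rows = list(csv.DictReader(io.StringIO(jtl_text)))
    stamps = [int(r["timeStamp"]) for r in rows]
    elapsed_s = (max(stamps) - min(stamps)) / 1000.0
    return len(rows) / elapsed_s if elapsed_s else float(len(rows))

print(throughput(sample_jtl))  # 4 requests over 1 second -> 4.0
```

JMeter’s own summary report gives you the same number, but computing it yourself from the raw log makes it easy to slice by label or time window.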
Unlike us humans, who get slower over the years (at least that’s my case), the new version of Mule 3 shows improved performance compared to previous Mule ESB versions. In general, plenty of effort was put into profiling and optimizing Mule for high-concurrency scenarios, which improved the way messages are handled in transports and transformers.
Mule 3.1 performance is on average 10% better than that of its predecessor, version 2.2.7,
Our RESTx project – a platform for the rapid and easy creation of RESTful web services and resources – is largely written in Python. Python is a dynamic, duck-typed programming language, which puts very few obstacles between your idea and working code. At least that’s the feeling I had when I started to work with Python several years ago: never before was I able to be so productive, so quickly, with so few lines of code.
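For readers unfamiliar with the term, a minimal illustration of duck typing (the class and function names here are made up for the example): a function cares only that its argument responds to the method it calls, not what type it is or what base class it inherits from.

```python
# No shared base class, no interface declaration: both types simply
# happen to provide a quack() method, so both work.
class Duck:
    def quack(self):
        return "quack"

class Person:
    def quack(self):
        return "I'm quacking!"

def make_it_quack(thing):
    # "If it quacks like a duck..." - any object with quack() is accepted.
    return thing.quack()

print(make_it_quack(Duck()))    # quack
print(make_it_quack(Person()))  # I'm quacking!
```

This is what makes Python feel so frictionless: you write the behavior you need and any object that supports it just works.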
As an integration framework and broker, Mule ESB is the platform of choice for implementing and deploying enterprise integration solutions and related services. In this series of posts, I’m going to look at situations beyond the traditional integration scenarios where using Mule ESB has enabled the implementation of effective and elegant solutions.
In this first installment,