Performance always matters
An application and its environment should be designed with performance in mind. The application, the server it runs on, the database it communicates with, the network it communicates over: all of these elements should be performant. Creating an efficient, elegant mechanism is not only important for a business, but also a matter of skill and pride for engineers.
Although true, that is perhaps not the answer one is looking for. "When does performance matter?" is too broad a question. The following discussion helps clarify the question, how to answer it, and what to do about it.
If you’ve ever worked on performance issues with an IO-intensive application, there is a good chance you already know that application performance degrades when the disks are under stress. This fact is usually well known, but the reasons behind it aren’t always clear. I’d like to clarify what’s going on behind the scenes.
In a typical scenario, when data is written to a file, it is first written to a memory area reserved as the page cache. The page holding the newly written data is marked dirty. After a period of time determined by the kernel's IO policy, the kernel flushes the dirty data to the device queue to be persisted to disk. Once the data reaches the queue, the rest is mechanical: the device driver reads the IO requests, then spins, seeks, and writes to the physical disk block where the file resides. If journaling is enabled, the journal is written first, then the actual file.
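This write path can be observed from user space. In the Python sketch below (a temporary file stands in for a real workload), `write()` returns as soon as the data lands in the page cache, while `os.fsync()` blocks until the kernel has flushed the dirty pages toward the device:

```python
import os
import tempfile

# A plain write() returns as soon as the data lands in the kernel's page
# cache; the page holding it is then "dirty". os.fsync() blocks until the
# kernel has pushed those dirty pages to the device queue.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello, page cache\n")
    f.flush()             # drain Python's userspace buffer into the kernel
    os.fsync(f.fileno())  # force dirty pages out to the storage device

with open(path, "rb") as f:
    data = f.read()       # reads are served from the (now clean) page cache
os.remove(path)
```

The gap between `write()` returning and `fsync()` completing is exactly the window where the data lives only in dirty pages, which is why stressed disks slow applications down: once the dirty-page limits are hit, writers are forced to wait for the flush.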
In a recent discussion with a few other engineers, the idea of disabling file system journaling came up as a way to improve disk write latency. While disabling it does save one disk operation per write, the actual time gained is negligible because the journal is located in the same region of the disk as the file being written. The benefit of having a journal to recover the disk after a crash far outweighs the little latency saved.
Today I will introduce our performance test of the Batch Module, introduced in Mule's December 2013 release. I will guide you through the test scenario and explain all the data collected.
But first, if you don't know what batch is, please read the great Batch Blog from our star developer Mariano Gonzalez; for anything else, there is also the documentation.
Excited? Great! Now we can start with the details. This performance test was run on a CloudHub Double worker, using the default threading profile of 16 threads. We will compare on-premise vs. cloud performance; henceforth we will talk about HD vs. CQS performance. Why? By default, both on-premise and CloudHub users use the hard disk for temporary storage and resilience. On CloudHub, however, this is not very useful: if for any reason the worker is restarted, the current job will lose all its messages. If Persistent Queues are enabled, the Batch module will automatically store all the data with CQS (Cloud Queue Storage) to achieve the expected resilience.
It’s Saturday night. You realize you don’t have your cell phone and won’t be able to check on your fulfillment system. Chuckling, you remember without nostalgia the electric panic that used to set in over such a conundrum. Now you don’t give it a moment’s thought. After the weekend you arrive at work and try to avoid that blinking little red light, always in the periphery, nagging from the telephone set. Twenty-three more irate complaints demanding tedious, manual data forensics? Hardly, just a happy customer congratulating you on the vast improvement in your support center.
A dream? No, Mule 3.2 is here! The Mulesoft team is overjoyed to announce Mule 3.2, chock full of goodies for the enterprise and the Mule community.
Do you want to share properties between Mule instances, or just between different flows within Mule? Then Mule session properties are what you are looking for.
Once you have a working Mule ESB application you may be wondering how fast it can run. Here we will discuss a simple method for measuring the throughput of your application using Apache JMeter.
Bear in mind there are many ways to improve performance (simple changes can yield great performance boosts). We will explore them in greater detail in a follow-up blog post covering Mule application tuning.
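Before wiring up JMeter itself, the core idea of a throughput measurement can be sketched in a few lines of Python: fire a fixed number of requests and divide by the elapsed time. The request function here is a placeholder; against a real application it would be an HTTP call to your Mule endpoint, and JMeter adds concurrency, ramp-up, and reporting on top of this:

```python
import time

def measure_throughput(send_request, n=1000):
    """Call send_request() n times and return requests per second."""
    start = time.perf_counter()
    for _ in range(n):
        send_request()
    elapsed = time.perf_counter() - start
    return n / elapsed

# Stand-in for a real request; in practice this would be something like
# urllib.request.urlopen("http://localhost:8081/echo") against your app.
def fake_request():
    pass

rps = measure_throughput(fake_request, n=100)
```

A single-threaded loop like this measures latency-bound throughput only; a real load test needs many concurrent clients, which is exactly what JMeter's thread groups provide.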
Unlike us humans, who get slower with the years (at least that's my case), the new version of Mule 3 shows a performance improvement over previous Mule ESB versions. In general, plenty of effort was put into profiling and optimizing Mule for high-concurrency scenarios, which improved the way messages are handled in transports and transformers.
Mule 3.1 performance is on average 10% better than its predecessor, version 2.2.7, performing better as the number of concurrent consumers grows and much better when dealing with XSLT transformations (around 15% better).
The test cases and setup were similar to the ones used some time ago to benchmark Mule 2.0.2. For more details on these benchmarks, please refer to the whitepaper Perf Test Results.
Our RESTx project – a platform for the rapid and easy creation of RESTful web services and resources – is largely written in Python. Python is a dynamic, duck-typed programming language that puts very few obstacles between your idea and working code. At least that’s the feeling I had when I started working with Python several years ago: never before was I able to be so productive, so quickly, with so few lines of code. I was hooked. No wonder some people describe it as ‘executable pseudo code’: just write down what you want and, for the most part, it will actually work.
There is a price to pay for this increased developer productivity, in which the interpreter figures out at run time what it is you actually want to do and how to let your code deal with all sorts of different types: Python is interpreted and dynamic, which means that for the most part it is slower than compiled languages.
In this article, I will talk about an interesting optimization technique for Python, one that will come as a surprise to many Python developers.
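The article goes on to detail its own technique. As a flavor of the kind of micro-optimization that Python's run-time lookup invites (not necessarily the one discussed in the article), consider hoisting an attribute lookup out of a hot loop: `out.append` is re-resolved on every iteration unless you bind it to a local name first.

```python
def slow(n=100_000):
    out = []
    for i in range(n):
        out.append(i)      # attribute lookup repeated every iteration
    return out

def fast(n=100_000):
    out = []
    append = out.append    # bind the bound method to a local name once
    for i in range(n):
        append(i)          # local-variable lookup is cheaper
    return out
```

Both functions produce identical results; the second simply trades repeated dynamic lookups for a single one, a pattern you will find throughout performance-sensitive Python code.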
As an integration framework and broker, Mule ESB is the platform of choice for implementing and deploying enterprise integration solutions and related services. In this series of posts, I’m going to look at situations beyond the traditional integration scenarios where using Mule ESB has enabled the implementation of effective and elegant solutions.
In this first installment, I will talk about using Mule ESB as a frontal load-throttling middleware for applications that are not able to handle sudden peaks of requests, as illustrated in the following diagram.
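The throttling idea can be sketched as a token bucket in Python. This is illustrative only – in Mule the equivalent would be built from queues and threading profiles rather than a class like this – but it shows the contract the middleware offers the backend: a bounded admission rate with room for short bursts.

```python
import time

class TokenBucket:
    """Admit at most `rate` requests per second, absorbing bursts
    up to `capacity`. Requests arriving with no token available
    must be queued or rejected by the caller."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: queue or reject
```

Placed in front of a fragile backend, a throttle like this converts a sudden spike of requests into a steady, survivable trickle, with the excess buffered rather than dropped on the floor.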
I’m proud to announce that we have released the new Mule ESB Management Console (MMC) — this is an important step forward for Mule ESB.
We built MMC based on significant feedback from our customers, and we put the product through two early access pre-releases to incorporate feedback from real users. I must say that I’m pretty pleased with the end result — I hope that you find it as useful as we do.