Reading Time: 6 minutes

We often get asked to help tune applications running on Mule for optimal performance. Over time, we have developed a methodology that helps us deliver on that request, and we want to share it with you.

To-Do Before Tuning

Here are a few questions to ask before tuning. Performance tuning requires affirmative answers to (1) and (2), plus a concise answer to (3).

  1. Does the application function as expected?
  2. Is the testing environment stable?
  3. How does the application need to be tuned?

Donald Knuth maintained that “premature optimization is the root of all evil”. Make sure the application runs properly before tuning it.

Performance Tuning Process Overview

Design Phase Tuning

  • Tune Mule's flows.
  • Tune Mule's configuration settings.

Runtime Environment Tuning

  • Tune the Java Virtual Machine (JVM).
  • Tune the garbage collection (GC) mechanism (a brief sketch follows this list).
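
As a hypothetical starting point (the sizes below are placeholders, not recommendations), the heap is usually pinned to a fixed size so runs are comparable, and GC logging is enabled so each tuning iteration can be measured. On a Mule standalone runtime these options would typically go into the wrapper configuration (e.g., wrapper.conf) rather than on a raw java command line:

  # Illustrative JVM options only; adjust values to the workload and available memory
  -Xms2g -Xmx2g                                    # fixed heap size for repeatable runs
  -verbose:gc -Xloggc:gc.log -XX:+PrintGCDetails   # GC logging for each test run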

Operating System Tuning

  • Tune the ulimit settings.
  • Tune the TCP/IP stack (a brief sketch follows this list).
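
As an illustration (the values are placeholders, not recommendations), the open-file limit and TCP settings on Linux are typically inspected and adjusted along these lines:

  # Illustrative commands only; check current values before changing anything
  ulimit -n                              # show the current open-file limit
  ulimit -n 65535                        # raise it for the current shell session
  sysctl net.ipv4.ip_local_port_range    # inspect the ephemeral port range
  sysctl -w net.ipv4.tcp_fin_timeout=30  # example TCP adjustment; persist in /etc/sysctl.conf if it helps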

Use an iterative approach when tuning: make one change at a time, retest, and check the results. Though it may be tempting to apply several changes at once, that hasty approach makes it difficult to link causes with effects. Changing one thing at a time makes it apparent how each modification affects performance.

Performance Testing Best Practices

Use a Controlled Environment

Repeatability is crucial when running performance tests, so the testing environment must be controlled and stable. To help ensure stability, use:

  • A dedicated host to prevent other running processes from interfering with Mule and the application
  • A wired, stable network
  • Separate hosts to run other dependent services (e.g., MySQL, ActiveMQ, other backend services)
  • Separate hosts for running load client tools (e.g., Apache Bench, JMeter)

WARNING: A dedicated VM on shared hardware is not *controlled*. Either use a comparably dedicated environment or ensure the VM is the only one running on the host.

Use Representative Workloads

Representative workloads mimic real-world customer use cases. Planning a workload usually includes analyzing payloads and user behavior. Payloads can be designed to vary realistically in size, type, and complexity, and their arrival rate can be adjusted to imitate actual customer behavior by introducing think time in the load test tool. For testing proxy scenarios, artificial latency may also need to be added to the backend service.
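
As a rough sketch only (the endpoint and timings are hypothetical, and a load tool such as JMeter would normally handle this), think time can be emulated by pausing between requests:

  # Hypothetical example: one simulated user with a 2-second think time between requests
  for i in $(seq 1 100); do
    curl -s -o /dev/null -w "%{time_total}\n" http://mule-host:8081/api/orders
    sleep 2   # think time between requests
  done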

Clarify Requirements and Goals

It is important to specify the performance criteria for an application's use case. Here, a use case refers to an application running in an environment to achieve particular goals. Different types of requirements lead to different configurations. For example, use cases that emphasize throughput should use parallel mark-and-sweep (MS) garbage collection (GC), while cases that focus on response time may prefer concurrent mark-and-sweep (CMS). Those GC techniques are themselves tuned differently, so a use case cannot be tuned until its performance requirements are defined.
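
For example (the flags are illustrative and correspond to the JVM versions that shipped with older Mule runtimes), the collector is selected with a single JVM option:

  # Illustrative flags only: pick the collector that matches the requirement, then measure
  -XX:+UseParallelGC       # throughput-oriented: parallel mark-and-sweep collector
  -XX:+UseConcMarkSweepGC  # response-time-oriented: concurrent mark-and-sweep (CMS) collector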

Here are some questions that may help clarify requirements:

  • What are the expected average and peak workloads?
  • Does the use case emphasize throughput or response time?
  • What is the minimum acceptable throughput?
  • What is the maximum acceptable response time?

Want to learn more?

Download our Performance Tuning Guide to learn from our experts how to choose the right tools and set performance goals for throughput, latency, concurrency, and large payloads. The guide also explains how to design applications running on Mule for high performance.