Beyond Integration, Part 1: Peak Time Handling


As an integration framework and broker, Mule ESB is the platform of choice for implementing and deploying enterprise integration solutions and related services. In this series of posts, I’m going to look at situations beyond the traditional integration scenarios where using Mule ESB has enabled the implementation of effective and elegant solutions.

In this first installment, I will talk about using Mule ESB as a front-end load-throttling middleware for applications that cannot handle sudden peaks of requests, as illustrated in the following diagram.

[Diagram: Beyond Integration: Peak Time Handling]

In this scenario, bursts of requests are generated by clients and need to be processed by an application. These requests either do not expect any response, like TCP packets sent over a socket one after the other, or expect a simple response that is not dependent on their actual processing, like a basic acknowledgment. The application is unfortunately not designed to perform asynchronous processing: it processes messages synchronously, as they come. Hence the need for load throttling.

Implementing such a load throttler is incredibly easy with Mule because, at its very core, Mule follows the design principles of SEDA. The Staged Event-Driven Architecture, or SEDA, “refers to an approach to software design that decomposes a complex, event-driven application into a set of stages connected by queues” (Wikipedia). Such a design allows an application to degrade gracefully under load, as queues fill up and are consumed by workers whenever they regain the capacity to do so.

Let’s look at a simple example where the target application accepts messages over HTTP and takes one second to process each request. Let’s use curl and a simple loop to confirm that it behaves as expected:
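A minimal sketch of such a test, assuming the slow application listens on http://localhost:8080/slowapp (host, port, and path are illustrative):

    # Time ten sequential POSTs against the slow application
    time for i in $(seq 1 10); do
      curl -s -d "message $i" http://localhost:8080/slowapp > /dev/null
    done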

Indeed, around ten seconds are spent performing these ten HTTP POST operations on the slow application.

In Mule, let’s now configure a simple HTTP bridge service, as shown hereafter:
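Here is a sketch of what such a bridge could look like in Mule 2’s XML configuration (the service name, hosts, and ports are illustrative):

    <mule xmlns="http://www.mulesource.org/schema/mule/core/2.2"
          xmlns:http="http://www.mulesource.org/schema/mule/http/2.2"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="
            http://www.mulesource.org/schema/mule/core/2.2
            http://www.mulesource.org/schema/mule/core/2.2/mule.xsd
            http://www.mulesource.org/schema/mule/http/2.2
            http://www.mulesource.org/schema/mule/http/2.2/mule-http.xsd">

      <model name="throttlerModel">
        <service name="httpThrottler">
          <!-- Asynchronous inbound stage: accepts requests as fast as they arrive -->
          <inbound>
            <http:inbound-endpoint host="localhost" port="8081"
                                   path="throttler" synchronous="false" />
          </inbound>
          <!-- Decoupled outbound stage: dispatches to the slow application -->
          <outbound>
            <pass-through-router>
              <http:outbound-endpoint host="localhost" port="8080"
                                      path="slowapp" synchronous="false" />
            </pass-through-router>
          </outbound>
        </service>
      </model>
    </mule>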

In this bridge, the inbound and outbound stages are decoupled, as per SEDA’s design. Hence, the fact that communicating with the outbound application is a slow operation will not affect the capacity of the inbound phase to accept messages rapidly. Let’s confirm this by running the same command line operation, but this time targeting the throttler HTTP service instead of the application:
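Running the same timed loop, now pointed at the bridge’s inbound endpoint (again using the illustrative port and path from the configuration above):

    # Time the same ten POSTs, sent to the throttler instead of the application
    time for i in $(seq 1 10); do
      curl -s -d "message $i" http://localhost:8081/throttler > /dev/null
    done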

Now we’re talking! In 300 milliseconds, Mule has accepted all the incoming POST requests and started to dispatch them to the slow application in parallel. But what would happen if Mule crashed while still dispatching messages to the slow application? With the current configuration, messages would be lost. If this is not acceptable, we can add an intermediate persistent VM queue between the HTTP inbound and outbound endpoints to make the messages pending dispatch durable. Here is how we would do so:
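Here is a sketch of the revised configuration, which splits the bridge into two services around a persistent VM queue (connector, queue, and service names are illustrative; the vm namespace is assumed to be declared alongside the http one shown earlier):

    <vm:connector name="persistentVmConnector" queueEvents="true">
      <vm:queue-profile persistent="true" />
    </vm:connector>

    <model name="throttlerModel">
      <!-- Stage 1: accepts HTTP requests and stores them on a persistent VM queue -->
      <service name="httpReceiver">
        <inbound>
          <http:inbound-endpoint host="localhost" port="8081"
                                 path="throttler" synchronous="false" />
        </inbound>
        <outbound>
          <pass-through-router>
            <vm:outbound-endpoint path="pendingDispatch"
                                  connector-ref="persistentVmConnector" />
          </pass-through-router>
        </outbound>
      </service>

      <!-- Stage 2: drains the queue and dispatches to the slow application -->
      <service name="slowAppDispatcher">
        <inbound>
          <vm:inbound-endpoint path="pendingDispatch"
                               connector-ref="persistentVmConnector" />
        </inbound>
        <outbound>
          <pass-through-router>
            <http:outbound-endpoint host="localhost" port="8080"
                                    path="slowapp" synchronous="false" />
          </pass-through-router>
        </outbound>
      </service>
    </model>

With this in place, messages persisted on the VM queue survive a crash and dispatching can resume when Mule restarts.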

If you re-run the timed command line to send messages, you’ll notice that adding the VM queue intermediary doesn’t affect the overall performance of the solution.

Before we deem the mission accomplished, we must be very clear on one thing: the order of messages is not guaranteed with this solution. If you trace the requests that hit the application, you’ll notice that the overall order is respected, in the sense that request #9 generally arrives after request #1, but requests #1 and #2 may arrive out of order. In software, as in life, you can’t have it all: by loosening the coupling between accepting and dispatching requests, we gained performance but lost ordering…

Moreover, there are still load limits we can hit if the peak of incoming requests is high enough to saturate the buffer used to accumulate them. In that case, we could gain further scalability by rolling out additional Mule instances and having a network load balancer spread the load across them. Our slow application will remain oblivious to all the upfront traffic intensity, leaving to Mule the responsibility of sweating it out!

Of course, what has been demonstrated with HTTP in this example could be done with other Mule transports. So have fun and don’t be scared of peak times anymore!




12 Responses to “Beyond Integration, Part 1: Peak Time Handling”

  1. Great post David! One nice thing about that configuration is you can introduce an intermediary transport like JMS to enforce ordering, durability and retry behavior for the messages via an exception strategy. You can also introduce a wiretap router in the inbound endpoint and use a CEP solution like Esper to monitor the inbound traffic rate.

  2. Thanks for the comment John: you’re suggesting some very valuable additions here!

    One note about ordering: I think more setup will be needed to enforce it, such as configuring the number of consumer and dispatcher threads; without that, you may end up producing/consuming concurrently to/from your JMS queue, leading to a loss of ordering.

  3. Also, if the ordering is important for a group of messages, the Resequencer inbound router can be used to reorder messages for consumption.

  4. Hi David,

    Very nice article.
    However, I’d like to see a second installment where you use a JMS queue for “buffering” inbound messages: queues guarantee both processing order and message durability, and while you can sacrifice the former, you can rarely avoid the latter (unless there’s some kind of transactional behavior starting from the inbound endpoint, which is not the case in your sample). I think it would be more correct and “real world”-oriented.

    Cheers!

    Sergio B.

  5. Thanks for your comment: as I said above, using a JMS queue as an intermediate destination (or a VM one, for that matter) will not automatically give you ordering, as you will be accepting incoming messages in multiple threads, dispatching these messages to the queue in other threads, and picking from the queue with yet another set of threads. You can decide to work with single threads when dealing with the JMS queue, but you’ll still potentially be writing to it out of order because of the multiple parallel HTTP receivers. You can also, as Ross suggested, opt for the Resequencer, but it’s not applicable to all scenarios.

    And, believe it or not, this is coming from the real world 🙂

  6. Thanks for your insights David.

    There are a number of ways to deal with message ordering, such as the already cited resequencer pattern, or message groups when feasible.

    The real problem there is that the collected messages aren’t durable that way: you have told your clients that their messages were received for later processing, but you could violate that “promise” if you suddenly crashed.

    I know it can be used in real-world scenarios; I have used similar techniques myself 🙂
    But you can afford to lose messages only in limited scenarios.

    Cheers!

    Sergio B.

  7. Fair point Sergio: I’ve updated the post to show the addition of a persistent VM queue as an intermediary between the inbound and outbound HTTP endpoints. Thanks for your constructive comments, which helped make this post more valuable.

  8. Thanks to you for sharing 🙂

    Just one more question: is it a JMS queue? Or some kind of “internal” persistent queue?

  9. It is indeed a Mule-specific type of internal message queue: for more information, see http://www.mulesoft.org/documentation/display/MULE2USER/VM+Transport

  10. Hi David,

    Thanks for the article. I have an issue related to message ordering when we are dispatching JMS messages to a WMQ endpoint. We are sending two messages, say Conf1 and Conf2: we are generating and sending them in the correct order, but when they are dispatched to the WMQ listener, Conf2 is getting delivered first.

    We are using the WMQ EE connector. Validating by the Mule correlation ID, the order of dispatch is Conf1 and then Conf2, whereas somehow the JMS timestamp is overriding the order, hence in the WMQ listener Conf2 is getting read before Conf1.

    Could you please explain a bit more about message ordering, or about using the resequencer on the outbound endpoint?

    Thanks in Advance, Denison

  11. Generally speaking, I don’t think JMS providers guarantee message ordering. It seems WMQ supports the notion of logical ordering of message groups but I’m unsure if the WMQ EE Connector leverages this feature.

    Also, on the Mule side of things, bear in mind that if you have multiple concurrent producers or consumers, it is quite possible that sending or receiving JMS messages happens out of order. If that is the case, configure the transport to have only a single dispatcher and a single consumer, as sketched below.
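    As an illustrative sketch (the connector name and broker are assumptions; a WMQ EE connector would be configured analogously), limiting a Mule 2 JMS connector to a single consumer and a single dispatcher thread could look like this:

        <jms:activemq-connector name="singleThreadedJms"
                                numberOfConsumers="1">
          <!-- a single dispatcher thread avoids concurrent, out-of-order sends -->
          <dispatcher-threading-profile maxThreadsActive="1" />
        </jms:activemq-connector>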

    Finally, since you’re an EE user, I suggest you contact MuleSoft support directly.

  12. Thanks David. We were able to manage by setting the number of consumers attribute to 1 and making the outbound router synchronous. Even if performance is slower, we were able to achieve it. Thank you again for the prompt response… 🙂