In the vast majority of cases, HTTP requests are processed synchronously: the operation that the client wants to perform on the targeted resource is executed by the same thread and the result is returned right away. This is usually done by connecting the HTTP layer directly to the service layer.

This post demonstrates a slightly different approach where HTTP requests are first sent to a messaging layer, then processed by dedicated agents whose responses are eventually returned synchronously to the client that is blocked waiting.

Some typical use cases for this agent-based approach include:

  • Situations where the number of threads able to concurrently process requests is lower than the number of concurrent requests coming in from the web tier,
  • Multiple Mule application deployments where a single HTTP entry point is exposed and the processing agents live in different applications,
  • Bridging from the DMZ to your private LAN, where JMS/AMQP is used for internal communication.

The approach demonstrated in this blog is intended for requests that are quick to process. Requests that trigger long-running processes are better handled with a completely asynchronous approach (i.e., decoupling request submission from result delivery).

As you will see in the coming examples, the messaging layer we use is JMS. These examples would work the same with AMQP. We will start with a very basic single-agent setup and evolve to a more versatile one.

Single-agent service

The first example we’re going to look at consists of a simple service that performs a single operation: capitalizing all the words of a sentence. Here is the relevant configuration snippet (the full configuration is available at the end of this post):
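As a rough illustration of what such a configuration could look like, here is a sketch in Mule 3 syntax — the addresses, queue name, transformer names and the capitalizing Groovy expression are all assumptions, not the post’s actual snippet:

```xml
<!-- Sketch only: names, addresses and expressions are illustrative assumptions -->
<object-to-string-transformer name="to-string" />
<message-properties-transformer name="sanitize-headers" scope="outbound">
    <!-- drop HTTP headers whose names would violate JMS property naming rules -->
    <delete-message-property key="Content-Type" />
</message-properties-transformer>

<!-- Synchronously bridges the inbound HTTP endpoint to a JMS queue -->
<pattern:bridge name="capitalizer-bridge"
                exchange-pattern="request-response"
                transformer-refs="to-string sanitize-headers"
                inboundAddress="http://localhost:8080/capitalizer"
                outboundAddress="jms://capitalizerRequests" />

<!-- The agent: consumes from the queue and capitalizes every word -->
<flow name="capitalizer-agent">
    <jms:inbound-endpoint queue="capitalizerRequests"
                          exchange-pattern="request-response" />
    <expression-transformer evaluator="groovy"
        expression="org.apache.commons.lang.WordUtils.capitalizeFully(payload)" />
</flow>
```

With a setup along these lines, POSTing a sentence such as `hello mule` to the bridge’s HTTP address should come back as `Hello Mule`.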

Let’s try this service by POSTing a sentence over HTTP; the response comes back with every word capitalized.

This promises to be a hit! If we take a closer look at the configuration, we notice that:

  • A request-response bridge is used to synchronously connect an inbound HTTP endpoint to an outbound JMS one,
  • A pair of transformers takes care of turning the payload from its initial byte-stream form into a String, and of sanitizing all incoming HTTP headers so they don’t conflict with JMS property naming conventions,
  • Mule takes care of wiring everything together, performing synchronous requests over JMS via a temporary reply-to queue.

With this configuration in place, we can configure the number of available HTTP threads independently of the number of JMS consumers.
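For instance — the connector names and pool sizes below are illustrative assumptions — the HTTP receiver thread pool and the number of JMS consumers are tuned separately on their respective connectors:

```xml
<http:connector name="httpConnector">
    <!-- threads accepting and reading incoming HTTP requests -->
    <receiver-threading-profile maxThreadsActive="128" />
</http:connector>

<!-- only 8 concurrent consumers actually process the bridged requests -->
<jms:activemq-connector name="jmsConnector"
                        brokerURL="vm://localhost"
                        numberOfConsumers="8" />
```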

Now, what would happen if we wanted to roll out more similarly awesome services? A naive approach would consist of adding a {bridge, agent} pair for each new service. In the next section, we’ll see a better way.

Dynamic-agent service

The main idea is to use a single upfront bridge that can send requests to different JMS queues, hence different agents, based on the URI used by the client that performed the HTTP call. This can easily be achieved by using a Mule expression, as shown here:
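A sketch of such a bridge follows; the `/services` prefix, the Groovy expression and the exact path-to-queue mapping are assumptions, and the post’s actual expression may differ:

```xml
<!-- Sketch only: dispatches each request to a queue derived from its path -->
<flow name="frontal-bridge">
    <http:inbound-endpoint address="http://localhost:8080/services"
                           exchange-pattern="request-response" />
    <object-to-string-transformer />
    <!-- dynamic outbound endpoint: the queue name comes from the request path -->
    <jms:outbound-endpoint exchange-pattern="request-response"
        address="jms://#[groovy:message.getInboundProperty('http.request.path').substring('/services/'.length())]" />
</flow>
```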

The somewhat cryptic Groovy expression takes care of turning the request path into the name of the queue to dispatch to.
With this new bridge in place, the previous capitalizer agent still works as-is, but is now reachable under a path that maps to its queue name.
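The path-to-queue derivation performed by the bridge’s Groovy expression boils down to stripping the service prefix from the request path; here is a plain-Java equivalent (the `/services` prefix is an assumption):

```java
public class PathToQueue {

    // Strip the assumed "/services/" prefix to obtain the target queue name
    static String queueName(String requestPath) {
        return requestPath.substring("/services/".length());
    }

    public static void main(String[] args) {
        // A call to /services/capitalizer is dispatched to the "capitalizer" queue
        System.out.println(queueName("/services/capitalizer")); // capitalizer
    }
}
```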

We can then roll out a new and equally crucial agent:
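The post doesn’t show which service this is; as a hypothetical example, consider an agent that lowercases sentences, listening on its own queue:

```xml
<!-- Hypothetical second agent: lowercases the whole sentence -->
<flow name="decapitalizer-agent">
    <jms:inbound-endpoint queue="decapitalizer"
                          exchange-pattern="request-response" />
    <expression-transformer evaluator="groovy"
                            expression="payload.toLowerCase()" />
</flow>
```

A request arriving on the matching sub-path would be dispatched to the decapitalizer queue and answered by this flow, with no change to the frontal bridge.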

This new agent is automatically accessible as well, from its own sub-path.

This would work the same if the frontal bridge and each agent were in different Mule applications, effectively exposing all the agents behind a single HTTP endpoint. An advantage of this architecture is that each application can have its own life-cycle, allowing, for example, the hot replacement of agents.

Try it out!

The recipes in this article are intended to whet your appetite and show you what’s possible to do with a few lines of configuration in Mule.

The complete configuration, with both services in it, is available here and can be run with Mule ESB 3.2 CE.