High Availability – A walkthrough of new Mule 3.4 features


When we started working on the Mule High Availability (HA) solution, we wanted to create the simplest and most complete ESB HA solution out there. With Mule 3.4 we have further enhanced the capabilities of the Mule HA solution. In this blog post we would like to share some details about the following highlight HA features of Mule 3.4:

  • Dynamic Scale Out
  • Unicast Cluster Discovery
  • Distributed Locking
  • Concurrent File Processing

Dynamic Scale Out

The size of a Mule cluster must often vary over time to adjust to different request throughputs. Since it can be difficult to predict throughput peaks, an application’s load may grow beyond what the initial size of a Mule HA cluster can handle. This is where Mule 3.4’s dynamic scale-out capabilities come into play. With the Mule 3.4 release, the Mule Management Console (MMC) can be used to add new Mule nodes to a cluster dynamically. This means that you won’t have to disband your cluster or even shut it down. All you need to do is make a few clicks in the MMC, and you are done: you have scaled out your application in a matter of minutes.

Unicast Cluster Discovery

In order to coordinate a cluster, each node must be able to communicate with the others. By default, Mule nodes use a multicast protocol to discover new Mule cluster members. This makes the creation of a cluster really easy. However, for security and performance reasons, multicast is sometimes disabled by IT. To address this, Mule 3.4 clusters can also be created without multicast, by statically defining the URI of each Mule node within the Mule cluster’s configuration file.
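As a sketch, the static setup comes down to listing the members in each node’s mule-cluster.properties file, roughly like this (the cluster name, node id, and addresses below are made up for the example):

    # $MULE_HOME/.mule/mule-cluster.properties
    mule.clusterId=myCluster
    mule.clusterNodeId=1
    # turn multicast discovery off and enumerate every member explicitly
    mule.cluster.multicastenabled=false
    mule.cluster.nodes=192.168.1.10:5701,192.168.1.11:5701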

Distributed Locking

In order to manage concurrent access to resources within a cluster, we have built a lock factory that custom components (such as custom transformers and DevKit-built components) can access programmatically. To keep things simple, locks created through Mule’s lock factory work seamlessly on a single server as well as in a cluster, so you don’t have to worry about your deployment model.
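As a minimal sketch, any component that has its MuleContext injected can create and use such a lock as follows (the class and lock id are made up for the example):

    import java.util.concurrent.locks.Lock;

    import org.mule.api.MuleContext;

    public class InventoryUpdater
    {
        private MuleContext muleContext; // injected, e.g. via MuleContextAware

        public void updateStock()
        {
            // the same lock id resolves to the same lock on every node of the cluster
            Lock lock = muleContext.getLockFactory().createLock("inventory-update");
            lock.lock();
            try
            {
                // critical section: only one thread in the whole cluster runs this at a time
            }
            finally
            {
                lock.unlock();
            }
        }
    }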

Let’s look at a short example that illustrates how this feature can be used. Say we have an HTTP service and we want to keep count of how many times it has been executed. The service is deployed in a clustered environment, so not only can it be executed concurrently, it can also be executed on different servers at the same time.

An object store is the way to share information between cluster members, so we are going to use one to store the service execution counter. The custom component that keeps track of request calls could look like this:
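The following is a minimal sketch of such a component; the lock id is illustrative, while the object store name and the “counter” key match the ones discussed in the comments below.

    package org.mule.example;

    import java.util.concurrent.locks.Lock;

    import org.mule.api.DefaultMuleException;
    import org.mule.api.MuleContext;
    import org.mule.api.MuleEvent;
    import org.mule.api.MuleException;
    import org.mule.api.config.MuleProperties;
    import org.mule.api.context.MuleContextAware;
    import org.mule.api.lifecycle.Initialisable;
    import org.mule.api.lifecycle.InitialisationException;
    import org.mule.api.processor.MessageProcessor;
    import org.mule.api.store.ObjectStore;
    import org.mule.api.store.ObjectStoreException;
    import org.mule.api.store.ObjectStoreManager;

    public class RequestCounter implements MessageProcessor, Initialisable, MuleContextAware
    {
        private MuleContext muleContext;
        private ObjectStore<Integer> objectStore;

        @Override
        public void initialise() throws InitialisationException
        {
            // grab a cluster-aware object store: in a cluster its contents are shared by all nodes
            ObjectStoreManager objectStoreManager =
                muleContext.getRegistry().get(MuleProperties.OBJECT_STORE_MANAGER);
            objectStore = objectStoreManager.getObjectStore("request-counter-objecstore");
        }

        @Override
        public MuleEvent process(MuleEvent event) throws MuleException
        {
            // "request-counter-lock" is an illustrative id; the same id maps
            // to the same lock on every node of the cluster
            Lock lock = muleContext.getLockFactory().createLock("request-counter-lock");
            lock.lock();
            try
            {
                int counter = 0;
                if (objectStore.contains("counter"))
                {
                    counter = objectStore.retrieve("counter");
                    objectStore.remove("counter"); // store() fails if the key already exists
                }
                objectStore.store("counter", counter + 1);
            }
            catch (ObjectStoreException e)
            {
                throw new DefaultMuleException(e);
            }
            finally
            {
                lock.unlock();
            }
            return event;
        }

        @Override
        public void setMuleContext(MuleContext context)
        {
            this.muleContext = context;
        }
    }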

 

Concurrent File Processing

Prior to the release of Mule 3.4, only one node in a Mule cluster was capable of consuming files. This was so that Mule could guarantee synchronised access to the resource. With the benefit of the new Mule 3.4 distributed locking feature, we are now able to synchronise file access in a smarter way. As of Mule 3.4, all nodes in a cluster can consume files in parallel. This allows for better performance when processing large numbers of files, since each node of the cluster can work on a different file at the same time.
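For example, a flow with a plain file inbound endpoint needs no cluster-specific configuration to benefit from this (the path and polling frequency below are illustrative):

    <!-- deployed unchanged to every node; the cluster coordinates which node picks up each file -->
    <file:inbound-endpoint path="/opt/mule/input" pollingFrequency="1000"/>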

Can’t wait to try these features?

If you want to start playing around with the new features described in this blog, just download our Mule HA demo bundle. With this bundle, you can have a Mule cluster up and running in less than 15 minutes. And if there’s a particular feature you want to start using, you can refer to the documentation for further details.


We'd love to hear your opinion on this post.


5 Responses to “High Availability – A walkthrough of new Mule 3.4 features”

  1. Wonderful article, and just what I was looking for. It would be great if you could describe this with an architecture diagram and its components.

    • Hi Siva,

      It would take a full blog post or even more to describe the HA architecture and its components. I will try to write one about it in the future.

      Pablo.

  2. Hi Pablo,

    Great article.

    It would be very helpful if the RequestCounter example showed the additional configuration needed. For instance, how the object store is configured and how a flow would use the RequestCounter.

    Regards.

  3. Hi Juan,

    How the object store is configured is shown in the initialise method. You need to get a reference to an object store by using the ObjectStoreManager.

    RequestCounter is a MessageProcessor so you can use it in your mule flow configuration as follows:

    
    <!-- inside a flow, after a message source or another message processor -->
    <custom-processor class="org.mule.example.RequestCounter"/>
    

    If you want to access the “counter” variable from another MessageProcessor or any other Mule component, you can do it as follows:

    ObjectStoreManager objectStoreManager = muleContext.getRegistry().get(MuleProperties.OBJECT_STORE_MANAGER);
    ObjectStore<Integer> objectStore = objectStoreManager.getObjectStore("request-counter-objecstore"); // returns the same object store as in RequestCounter
    int counter = objectStore.retrieve("counter");

    Pablo.

  4. Hi Pablo,

    Regarding Unicast Cluster Discovery: the static configuration file could be replaced by a special central service where each Mule node registers itself on startup.
    Advantage: you don’t have to know in advance where your Mule nodes are, which is more flexible; each node learns about the others through this service.

    Ivan