When we started working on the Mule High Availability (HA) solution, we wanted to create the simplest and most complete ESB HA solution out there. With Mule 3.4 we have further enhanced the capabilities of the Mule HA solution. In this blog post we would like to share some details about the following highlighted HA features of Mule 3.4:
- Dynamic Scale Out
- Unicast Cluster Discovery
- Distributed Locking
- Concurrent File Processing
Dynamic Scale Out
The size of a Mule cluster must often vary over time to adjust to different request throughputs. Since throughput peaks can be difficult to predict, an application's load may grow beyond what the initial size of a Mule HA cluster can handle. This is where Mule 3.4's dynamic scale-out capabilities come into play. With the Mule 3.4 release, the Mule Management Console (MMC) can be used to add new Mule nodes to a cluster dynamically. This means that you won't have to disband your cluster or even shut it down. All you need to do is use the MMC console, make a few clicks, and you are done: you have scaled out your application in a matter of minutes.
Unicast Cluster Discovery
In order to coordinate a cluster, each node must be able to communicate with the others. By default, Mule nodes use a multicast protocol to discover new Mule cluster members. This makes the creation of a cluster really easy. However, for security and performance reasons, the multicast protocol is sometimes disabled by IT. To address this issue, Mule 3.4 clusters can also be created without multicast. This is done by statically defining the URI of each Mule node within a Mule cluster's configuration file.
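As a rough sketch, disabling multicast and enumerating the node addresses statically in the cluster configuration file might look like the fragment below. The property names shown here (such as mule.cluster.nodes) are illustrative assumptions; refer to the Mule HA documentation for the exact keys and file location.

```properties
# Hypothetical mule-cluster.properties sketch: turn off multicast
# discovery and list the cluster members explicitly as host:port pairs.
mule.cluster.multicastEnabled=false
mule.cluster.nodes=192.168.1.10:5701,192.168.1.11:5701
```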
Distributed Locking
In order to manage concurrent access to resources within a cluster, we have built a lock factory that custom components (such as custom transformers and DevKit-built components) can access programmatically. To keep things simple, locks created through Mule's lock factory work seamlessly on a single server as well as in a cluster, so you don't have to worry about your deployment model.
Let's see a short example that illustrates how this feature can be used. Say we have an HTTP service and we want to count how many times it has been executed. The service is deployed in a clustered environment, so not only can it be executed concurrently, it can also be executed on different servers at the same time.
An object store is the way to share information between cluster members, so we are going to use one to store the service execution counter. The custom component that keeps track of request calls would look like this:
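The following is a minimal, self-contained sketch of such a component. Locks created by Mule's lock factory implement java.util.concurrent.locks.Lock, so here a plain ReentrantLock stands in for the cluster-wide lock and a ConcurrentHashMap stands in for the object store; in a real Mule application both would be obtained from the Mule runtime, and all names used here are illustrative.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of a request-counting component. In a clustered Mule app the
// lock would come from the lock factory and the map would be a shared
// object store, making the counter consistent across all nodes.
class RequestCounter {
    private final ConcurrentHashMap<String, Integer> objectStore = new ConcurrentHashMap<>();
    private final Lock lock = new ReentrantLock(); // stand-in for a cluster-wide lock

    /** Increments and returns the execution count for the given service. */
    public int onRequest(String serviceName) {
        lock.lock(); // serialize read-modify-write across concurrent callers
        try {
            int count = objectStore.getOrDefault(serviceName, 0) + 1;
            objectStore.put(serviceName, count);
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```

Because the lock guards the whole read-modify-write sequence, two requests arriving at the same time (even on different nodes, in the clustered case) can never read the same stale count.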
Concurrent File Processing
Prior to the release of Mule 3.4, only one node in a Mule cluster was capable of consuming files. This was so that Mule could guarantee resource access synchronisation. With the benefit of the new Mule 3.4 Distributed Locking feature, we are now able to synchronise file access in a smarter way. As of Mule 3.4, all nodes in a cluster can consume files in parallel. This allows for better performance when processing large files, since each node of the cluster can work on a different file at the same time.
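The idea behind parallel file consumption can be sketched as follows: before processing a file, each node tries to claim it by name, and only the node that wins the claim processes it while the others move on to the next file. In this self-contained illustration a ConcurrentHashMap-backed set stands in for the cluster-wide lock registry that Mule's distributed locking provides; the class and method names are hypothetical.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of per-file claiming: tryClaim succeeds for exactly one caller
// per file name, which is how two nodes avoid consuming the same file.
class FileClaimRegistry {
    private final Set<String> claimed = ConcurrentHashMap.newKeySet();

    /** Returns true if this caller acquired the right to process the file. */
    public boolean tryClaim(String fileName) {
        return claimed.add(fileName); // atomic: only the first caller gets true
    }

    /** Releases the claim once the file has been fully processed. */
    public void release(String fileName) {
        claimed.remove(fileName);
    }
}
```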
Can’t wait to try these features?
If you want to start playing around with the new features described in this blog, you just have to download our Mule HA demo bundle. With this bundle, you can have a Mule cluster up and running in less than 15 minutes. And if there's a particular feature you want to start using, you can refer to the documentation for further details: