Clustering done right

October 5, 2011


In Mule 3.2, a group of stand-alone Mule instances can be configured to act as a cluster. One or more applications run in each instance – or node – and the cluster processes requests as a single unit. If a node goes down, the application keeps running; the more nodes, the more throughput. And the more nodes, the greater the headache: how many PuTTY sessions are you already running, let alone a group of new sessions to manage all those nodes?

The Mule Enterprise Management console treats the cluster as a single unit of operation for common management functions. In many respects a cluster is no different from a single instance, and the console leverages this transparency. If I want to deploy an application, I distribute it in a single operation. Below is a screen showing how an application and cluster are associated in a deployment. The application is loaded and started automatically for each node in the cluster.

Deploying an Application to a Cluster

Similarly, an application can be stopped and started in a single operation.

Cluster Application Control

And even better, Mule flows and end-points can also be controlled across all nodes as if the cluster were a single processing unit.

Application and deployment status are also monitored at the cluster level:

But there are times when deeper detail is needed. Visibility and control at the node level are also available. Nodes can be monitored…

Cluster Node Monitoring

…and restarted/stopped:

Cluster Node Control

Finally, alerts can be raised when critical events occur in a cluster, such as when a cluster node goes down.

The ability to cluster Mule 3.2 instances delivers tremendous value but potentially introduces a new dimension of management complexity. The Mule 3.2 ESB Enterprise Management console strikes the right balance between streamlining cluster management and providing the right level of control and visibility.

Download the Mule Enterprise trial version.


8 Responses to “Clustering done right”

  1. Thanks for a nice cluster overview!

     Was wondering if you could comment on the MuleSoft example app, named “widget”. In this example, used to show High Availability, you have to first start a JMS producer located in one of the nodes. Then, you deploy the widget app via MMC console to the nodes and configure each app to talk to the JMS producer.

     Questions: do you have to have a JMS producer running to enable clustering? Besides configuring a cluster of Mule nodes, what else is required to get HA clustering working?

  2. Mule clusters can be used with many kinds of message sources in addition to JMS producers. In fact, any Mule transport can be used in a clustered application. For example, you can use HTTP end-points behind a load balancer, or polling end-points such as File or FTP. In the latter case the cluster will automatically elect a node to poll the File or FTP source.
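     For the polling case, here is a minimal Mule 3 configuration sketch; the flow name, directory path, and polling interval are illustrative assumptions, not taken from the example app:

     ```xml
     <mule xmlns="http://www.mulesoft.org/schema/mule/core"
           xmlns:file="http://www.mulesoft.org/schema/mule/file"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="
             http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
             http://www.mulesoft.org/schema/mule/file http://www.mulesoft.org/schema/mule/file/current/mule-file.xsd">

         <!-- Deployed to a cluster, only one elected node polls this directory
              at a time; if that node goes down, another node takes over. -->
         <flow name="pollIncomingFiles">
             <file:inbound-endpoint path="/data/incoming" pollingFrequency="5000"/>
             <logger level="INFO" message="Picked up a file from /data/incoming"/>
         </flow>
     </mule>
     ```

     The flow itself needs no cluster-specific configuration – the election happens in the runtime, which is what makes the same application deployable unchanged to a single instance or a cluster.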

  3. How can I test to make sure the clustering is working correctly?

  4. […] Enterprise now has an out-of-the-box clustering capability that lets you configure a group of Mule instances as an active-active cluster, using the management […]

  5. […] the community edition and valuable new enterprise capabilities in the enterprise edition (such as clustering and root-cause analysis), so you might be wondering how we are planning to improve on Mule […]

  6. Hi Ray,

     I am curious to know the possibility of clustering Mule ESB Community Edition. Since it doesn't have a management console, is it possible to cluster CE using configuration files?


  7. Where can I find the clustering configuration files, the cluster name within those files, and the files in which the platform applications are deployed?