In Mule 3.2, a group of stand-alone Mule instances can be configured to act as a cluster. One or more applications run in each instance – or node – and the cluster processes requests as a single unit. If a node goes down, the application keeps running; the more nodes, the more throughput. And the more nodes, the greater the headache: how many PuTTY sessions are you already running, let alone a new batch of sessions to manage all those nodes?
The Mule Enterprise Management console treats the cluster as a single unit of operation for common management functions. In many respects a cluster is no different from a single instance, and the console takes advantage of this transparency. If I want to deploy an application, I distribute it in a single operation. Below is a screen showing how an application and a cluster are associated in a deployment. The application is loaded and started automatically on each node in the cluster.
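The same single-operation deployment can also be scripted against the management console's REST API. The sketch below is a dry run that only prints the curl calls it would make; the port, paths (`/mmc/api/repository`, `/mmc/api/deployments`), credentials, file name, and IDs are all placeholders and assumptions — verify them against the MMC documentation for your version before running anything for real.

```shell
# Sketch: deploy one application to every node in a cluster via the
# MMC REST API. Everything below is a placeholder/assumption.
MMC_URL="http://localhost:8585/mmc/api"
AUTH="admin:admin"
APP_FILE="my-app-1.0.zip"

# 1. Upload the application archive to the MMC repository.
echo "curl -u $AUTH -F file=@$APP_FILE $MMC_URL/repository"

# 2. Create a deployment that targets the cluster; the console then
#    loads and starts the app on each node for you.
echo "curl -u $AUTH -H 'Content-Type: application/json' \
  -d '{\"name\":\"my-deployment\",\"clusterIds\":[\"CLUSTER_ID\"],\"applications\":[\"VERSION_ID\"]}' \
  $MMC_URL/deployments"
```

Because the cluster is addressed as one unit, the script names a cluster ID rather than looping over individual nodes — that is the whole point of the abstraction.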
Similarly, an application can be stopped and started in a single operation.
And even better, Mule flows and endpoints can also be controlled across all nodes as if the cluster were a single processing unit.
Application and deployment status are also monitored at the cluster level:
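For dashboards or scripts, the same cluster-level status the console shows can in principle be polled over its REST API. Again a dry-run sketch that only prints the calls; the resource paths and port are assumptions to check against your MMC version.

```shell
# Sketch: poll cluster and deployment status from the MMC REST API.
# URL, port, paths, and credentials are placeholders/assumptions.
MMC_URL="http://localhost:8585/mmc/api"
AUTH="admin:admin"

# List clusters, including node membership and node status.
echo "curl -u $AUTH $MMC_URL/clusters"

# List deployments and their status across the targeted clusters.
echo "curl -u $AUTH $MMC_URL/deployments"
```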
But there are times when deeper detail is needed, so visibility and control are also available at the node level. Nodes can be monitored…
Finally, alerts can be raised when critical events occur in a cluster, such as when a cluster node goes down.
The ability to cluster Mule 3.2 instances delivers tremendous value but potentially introduces a new dimension of management complexity. The Mule 3.2 ESB Enterprise Management console strikes the right balance between streamlining cluster management and providing the right level of control and visibility.
Download the Mule Enterprise trial version.