Fast and Slow through the Air


Handling endpoints with disparate speeds when the integration platform is in the cloud

A fairly common integration requirement is to accumulate data arriving in real time or near real time, hold and consolidate the records, and then send the transformed messages to another system on a fixed schedule (e.g., daily) for business reasons, especially when the endpoints are legacy systems. For on-premises integration platforms, this use case is straightforward to implement. For cloud-based integration platforms, however, which are generally geared toward real-time processing and lack access to local file storage, the requirement does pose some technical challenges. Fortunately, with CloudHub's built-in persistent queue feature and the Mule Requester Module, the implementation is as easy as it is on legacy on-premises platforms.


Fast and Slow can play nice together


Only three simple steps are needed to enable an application on CloudHub to handle this kind of requirement. First, turn on persistent queues for the application via the “Advanced” section of the “Settings” page:

Persistent queuing is important for ensuring zero message loss (ZML), since the messages may be held for an extended period before the scheduled time to send the data downstream arrives.

Then, in the application, queue up the messages that need to be processed on a schedule by sending them to a VM outbound endpoint with a one-way exchange pattern:
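As a rough sketch, the receiving flow might end with something like the following Mule 3.x XML (the mule root element, namespace declarations, and global configuration are omitted for brevity; the flow name and the queue name "pending-records" are illustrative assumptions, not taken from an actual application):

<!-- Illustrative flow: whatever fast inbound endpoint receives the records,
     the last step parks each message on a persistent VM queue. -->
<flow name="accumulate-records-flow">
    <!-- ... inbound endpoint and any transformation for the fast source go here ... -->
    <!-- One-way exchange pattern: the message is queued and the flow returns immediately. -->
    <!-- No VM inbound endpoint exists anywhere in the app; the queue is drained only
         by the scheduled flow shown further below. -->
    <vm:outbound-endpoint path="pending-records" exchange-pattern="one-way" doc:name="Queue record"/>
</flow>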

Note that there should not be a corresponding VM inbound endpoint in the application.

Finally, retrieve the messages and process them on a fixed schedule by wrapping a Mule Requester inside a poll scope with a cron expression for scheduling (for example, daily at midnight), with the VM endpoint URL as the resource value in the configuration:
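A sketch of that scheduled flow might look like the following (again Mule 3.x XML with namespaces and the Mule Requester global configuration omitted; the flow name, cron expression, queue name, and logger are illustrative assumptions):

<!-- Illustrative flow: fire once a day at midnight, pull queued messages from the
     VM queue via the Mule Requester, then process them downstream. -->
<flow name="scheduled-dispatch-flow">
    <poll doc:name="Poll">
        <!-- Quartz cron expression: 00:00 every day. -->
        <schedulers:cron-scheduler expression="0 0 0 * * ?"/>
        <!-- The VM URL of the queue populated above is the resource value. -->
        <mulerequester:request config-ref="Mule_Requester" resource="vm://pending-records" doc:name="Mule Requester"/>
    </poll>
    <!-- Consolidation and the outbound call to the slow endpoint would follow here. -->
    <logger level="INFO" message="#[payload]" doc:name="Log retrieved message"/>
</flow>

In practice, a single request call retrieves one message per poll, so a complete flow would typically loop until no more messages are returned (or use the module's request-collection operation) before consolidating the data and sending it to the downstream system.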


These simple steps will enable a CloudHub application to integrate endpoints with vastly different message flow rates.

Fast meets Slow


We'd love to hear your opinion on this post