In yesterday's post, we gave a brief introduction to the migration pattern. Today we are going to do a similar overview of the broadcast pattern, which you can think of as a kinetic version of the migration pattern: the same one-way movement of data, but running continuously rather than once.

Pattern 2: Broadcast

What is it?

Broadcast can also be called “one-way sync from one to many,” and it is the act of moving data from a single source system to many destination systems on an ongoing, near-real-time or real-time basis. Typically “one-way sync” implies a 1:1 relationship; to us it is just an instantiation of the broadcast pattern, which is a 1:many relationship. Hence we chose the name broadcast, even though it will manifest itself as 1:1 in many integration applications, like our Salesforce to Salesforce templates that we recently made available.

Whenever there is a need to keep data up to date between multiple systems over time, you will need either a broadcast, bi-directional sync, or correlation pattern. The distinction here is that the broadcast pattern, like the migration pattern, only moves data in one direction, from the source to the destination. Now, I know what you are going to ask next: “What is the difference between the broadcast pattern and a migration pattern that is set to run automatically every few seconds?” The main distinction to keep in mind is that the broadcast pattern is transactional, meaning that it does not execute the logic of the message processors for all items in scope; rather, it executes only for those items that have recently changed. So you can think of broadcast as a sliding window that only captures the items whose field values have changed since the last time the broadcast ran. Another major difference is how the implementation of the pattern is designed. Migration will be tuned to handle large volumes of data, to process many records in parallel, and to fail gracefully. Broadcast patterns are optimized to process records as quickly as possible and to be highly reliable, so that critical data is not lost in transit, since they are usually employed with low human oversight in mission-critical applications.
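
To make the sliding-window idea concrete, here is a minimal Python sketch contrasting the two scopes. The record shape and the `last_modified` field are hypothetical stand-ins for whatever change metadata your source system exposes (e.g. Salesforce's LastModifiedDate):

```python
from datetime import datetime, timezone

def migration_scope(records):
    """Migration: every record in scope is processed on each run."""
    return list(records)

def broadcast_scope(records, last_run):
    """Broadcast: only the sliding window of records changed since
    the previous run is processed."""
    return [r for r in records if r["last_modified"] > last_run]

# Example: only the second record falls inside the broadcast window.
records = [
    {"id": 1, "last_modified": datetime(2014, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "last_modified": datetime(2014, 1, 2, tzinfo=timezone.utc)},
]
last_run = datetime(2014, 1, 1, 12, 0, tzinfo=timezone.utc)
assert [r["id"] for r in broadcast_scope(records, last_run)] == [2]
```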

Why is it valuable?

The broadcast pattern is extremely valuable in any generic situation where system B needs to know, in near real time, some information that originates or resides in system A. For example, you may want to create a real-time reporting dashboard that is the destination of multiple broadcast applications, receiving updates so that you know in real time what is going on across multiple systems. You may want to immediately start fulfilment of orders that come from your CRM, online e-shop, or internal tool, where the fulfilment processing system is centralized regardless of which channel the order comes from. You may be a ticket booking system whose sales channels broadcast the bookings made on their sites to your booking system. You may have video game servers that need to publish the results of a game to a player's account profile management system. You may want to send the temperature of your steam turbine to a monitoring system every 100 ms. You may want to broadcast to a general practitioner's patient management system when one of their regular patients is checked into an emergency room. There are countless examples of wanting to take an important piece of information from an originating system and broadcast it to one or more receiving systems as soon as possible after the event happens.

When is it useful?

The need for the broadcast pattern can easily be identified by the following criteria:

  • Does system B need to know as soon as the event happens? – Yes
  • Does data need to flow from A to B automatically, without human involvement? – Yes
  • Does system A need to know what happens with the object in system B? – No

The first question will help you decide whether you should use the migration pattern or broadcast, based on how real-time the data needs to be. Anything that needs to move more frequently than approximately once an hour will tend to be a broadcast pattern, although there are always exceptions based on data volumes. The second question generally rules out “on demand” applications; broadcast patterns are initiated by either a push notification or a scheduled job, and hence have no human involvement. The last question will tell you whether you need to union the two data sets so that they are synchronized across the two systems, which is what we call bi-directional sync and will cover in the next blog post. Different needs call for different patterns, but in general the broadcast pattern is much more flexible in how you can couple the applications, and we would recommend using two broadcast applications over one bi-directional sync application. However, sometimes you need to union two datasets across two systems and make them feel like one, in which case you would use a bi-directional sync.
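
These three questions can be read as a small decision procedure. The sketch below encodes them in Python; it is an illustrative simplification of the criteria above, not a complete rule set:

```python
def recommend_pattern(b_needs_it_immediately: bool,
                      flows_without_humans: bool,
                      a_needs_feedback: bool) -> str:
    """Map the three criteria above to a candidate pattern."""
    if a_needs_feedback:
        # System A must see what happened in system B: union the data sets.
        return "bi-directional sync"
    if b_needs_it_immediately and flows_without_humans:
        return "broadcast"
    # Less time-critical, or human-initiated, data movement fits migration.
    return "migration"

assert recommend_pattern(True, True, False) == "broadcast"
```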

What are the key things to keep in mind when building applications using the broadcast pattern?

Input Flow:

You generally have two ways to trigger the broadcast application, and two ways to pass the data between the source system and the integration application. Below I walk through the pros and cons of each of the three combinations that make logical sense.

Trigger via a notification and pass the data via the payload of the notification message.

  • Pro: minimal number of API calls, since the integration application is not asking the source system about changes; rather, it waits to be notified.
  • Con: the code and/or feature that sends the notification needs to be configured or developed differently, with unique knowledge and skills, for each system, and is managed from a different place for each source system in your environment.
  • Con: if the integration application is not running properly or is down, you could lose data, so a high-availability or queue-based approach is needed to make sure you don't lose unprocessed events (see the sketch below).
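
One way to address that last con is to persist each notification before acknowledging it, so that a restart of the integration application does not drop unprocessed events. Here is a minimal sketch using SQLite as a durable buffer; the table layout and function names are my own illustration, not part of any template:

```python
import json
import sqlite3

db = sqlite3.connect("inbound_events.db")
db.execute("""CREATE TABLE IF NOT EXISTS events
              (id INTEGER PRIMARY KEY, payload TEXT, processed INTEGER DEFAULT 0)""")

def on_notification(payload: dict) -> None:
    """Called by the HTTP endpoint that receives the push notification.
    Only acknowledge the notification after this commit succeeds."""
    db.execute("INSERT INTO events (payload) VALUES (?)", (json.dumps(payload),))
    db.commit()

def drain(send_to_destination) -> None:
    """Deliver buffered events; mark them processed only after success,
    so a crash mid-delivery leads to a retry rather than data loss."""
    rows = db.execute("SELECT id, payload FROM events WHERE processed = 0").fetchall()
    for row_id, payload in rows:
        send_to_destination(json.loads(payload))
        db.execute("UPDATE events SET processed = 1 WHERE id = ?", (row_id,))
        db.commit()
```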

Trigger via a notification message, but have the integration application pull the data from the source system.

  • Pro: much easier to achieve a highly reliable system than with the option above, since the pull only runs when the application is healthy, and the watermark is only updated once the items are successfully processed. In other words, you get auto retry.
  • Pro: requires less coding; essentially just an HTTP call to initiate the pull, which does the job of grabbing and moving the data.
  • Con: can become really chatty when the source system has a lot of changes.
  • Con: still requires the coding or configuration of the outbound message notifying the integration application that there is data waiting to be pulled.
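
A sketch of this variant is below; `pull_changes_since` and `deliver` are hypothetical stand-ins for the source query and the destination write. Because the watermark only advances after every record has been processed successfully, a failed run is simply retried on the next notification:

```python
from datetime import datetime, timezone

watermark = datetime.min.replace(tzinfo=timezone.utc)

def pull_changes_since(since: datetime) -> list:
    """Stand-in for the source-system query (e.g. a modified-since filter)."""
    return []  # replace with a real API call

def deliver(record: dict) -> None:
    """Stand-in for the destination-system write."""

def on_change_notification() -> None:
    """The notification carries no data; it only tells us to pull."""
    global watermark
    records = pull_changes_since(watermark)
    for record in records:
        deliver(record)
    if records:  # advance only after all records succeeded: auto retry
        watermark = max(r["last_modified"] for r in records)
```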

Use a scheduled job or poll element that runs the application every X units of time, and have the integration app pull the data from the source system.

  • Pro: easy to create a highly reliable system.
  • Pro: only need the skill of using the integration platform and the API or connector. For example, we provide Anypoint Studio and our connector suite to make this even easier.
  • Pro: in very dynamic systems, where changes are on the order of seconds, you can use a polling approach and save API calls since you can batch process many records at a time.
  • Con: in low-frequency systems, you will use a lot of API calls just to find out that nothing has changed, or, if your polling interval is too large, risk data sitting for a while before being synchronized.
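
In sketch form, the scheduled variant is just a loop. In a Mule application the poll element plays this role; `pull_batch` and `push_batch` below are hypothetical stand-ins for the source query and destination write:

```python
import time

POLL_INTERVAL_SECONDS = 10  # the "X units of time"; tune per source system

def run_scheduled_broadcast(pull_batch, push_batch):
    """Poll on a fixed schedule and batch-process whatever has changed.
    One pull per interval costs one API call no matter how many records
    changed, which is why polling saves calls in very dynamic systems."""
    while True:
        batch = pull_batch()
        if batch:
            push_batch(batch)
        time.sleep(POLL_INTERVAL_SECONDS)
```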

We designed and built all of our templates using the last approach, because everything needed to accomplish the use case is then contained inside the Mule application and does not require any specific code on the source systems. This keeps a much cleaner divide between the systems, while still providing the processing logic for anyone who wants to change to a push model by adding an HTTP endpoint that captures the payload and feeds it into our business logic flow. That said, whether a pull or a push can be used effectively also depends on the source systems. In our first set of templates, the systems are all Salesforce, which gives you both options. I am certain that future systems will require us to use a push notification model to trigger and/or receive the payload when building templates based on best practices.

Objects and Fields:

One key concept to keep in mind when designing and using broadcast applications is that objects are what get moved, but the value lies in the data in the fields. Usually a flow or process in an application is built around a specific object. Our templates are designed and built per object to maintain their atomic nature, and they can be compounded to create larger applications. Since a flow handles an object, it's the object that is being moved. This is the only practical way to build the integration, because trying to make an application that moves individual field values as payloads would be very expensive in terms of development cost and performance, given the nature of the system APIs used to get this data. So you are generally stuck moving objects, but the values and changes occur at the field level. This means that you have to synchronize whole objects even when only one field has changed, which exposes you to a problem when you have multiple broadcasts feeding into the same system.

For example, if you have CRM A and CRM B both broadcasting contacts into CRM C, you will have to handle cases where both systems change the same or different fields of the same contact within the time interval of the poll, or before the two messages are processed. The elegant solution here is to detect that there are two messages in flight for the same object and merge the values of the two payloads, with the later one winning where the same field was affected. This is something we have not yet incorporated into our templates, which risks potential data loss when the same object is modified in two systems within a polling interval (usually on the order of a few seconds), but it is something we are looking to address. Another mitigation is to define a master in the integration application's configuration properties, at the system, object, or field level. With a clear master you may still have minor data loss (or inconsistency between the origin and destination systems) for those field values, but the behavior will always favor one of the systems, so at least you will have consistency, which you can correct periodically or via an MDM solution. Again, this is only a problem at small intervals, when you have multiple highly dynamic systems broadcasting into one common system, or a bi-directional synchronization between two systems. For example, the chance that two of your sales reps update the same account with two different values within seconds is very low.
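
To illustrate the merge idea, here is a minimal field-level merge in Python. It assumes each change message carries only the fields it modified; the contact fields are hypothetical:

```python
def merge_change_messages(current: dict, earlier: dict, later: dict) -> dict:
    """Apply two overlapping change messages to the same object. Fields
    touched by only one message survive; where both touched the same
    field, the later message wins."""
    merged = dict(current)
    merged.update(earlier)
    merged.update(later)  # the later message wins on any shared field
    return merged

# CRM A and CRM B both changed the same contact within one poll interval.
contact_in_c = {"id": 42, "phone": "555-0100", "title": "Rep"}
from_crm_a = {"phone": "555-0199"}                  # earlier message
from_crm_b = {"phone": "555-0150", "title": "VP"}   # later message
assert merge_change_messages(contact_in_c, from_crm_a, from_crm_b) == {
    "id": 42, "phone": "555-0150", "title": "VP"}
```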

Exception Handling:

Ideally, the broadcast integration application would never encounter problems: all data would be consistently formatted, no null values would be provided for required fields, APIs would not change, and the APIs on both sides would be at 100% availability. Unfortunately, that is not the case. One of the really nice things about our batch module is that it has built-in error handling, meaning that it will tell you which messages errored out and will even pass them to an exception management flow where you can further process them to either fix them or appropriately categorize and report on them. All of this without locking up: it will continue processing the other records. Our templates do not yet capitalize on this feature, but it is something that you should definitely extend before taking a template to production. A simple exception management strategy is to log all the issues to a file where someone can review them, or to email them to someone to take a look at. A more sophisticated approach looks for common, known problems, like a null value in a required field, replaces the value with dummy data like “Replace me” for a string, and resubmits the message. This lets the message pass and be processed correctly, avoiding the loss of the value generated from processing the rest of the fields. Similarly, you can create a notification workflow based on the values in the payload, so that different people are notified of different errors, for those exceptions that cannot be fixed by a prescriptive design-time solution.
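
A sketch of that progression is below, with a hypothetical `REQUIRED_STRING_FIELDS` schema and a `submit` callback standing in for the resubmission; records that still fail fall through to a log that someone can review:

```python
import logging

logger = logging.getLogger("broadcast.exceptions")
REQUIRED_STRING_FIELDS = ("first_name", "last_name")  # hypothetical schema

def handle_failed_record(record: dict, submit) -> None:
    """Repair known problems and resubmit; log whatever can't be fixed."""
    for field in REQUIRED_STRING_FIELDS:
        if record.get(field) is None:
            record[field] = "Replace me"  # dummy value a human corrects later
    try:
        submit(record)  # resubmit so the rest of the fields are not lost
    except Exception:
        # Not fixable by a design-time rule: route to a person for review.
        logger.exception("record %s needs manual review", record.get("id"))
```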

Watermarking:

For our broadcast templates we leverage our watermarking feature, which keeps track of when the last poll was done so that we grab only the newest set of objects that were modified in that time window. This is only needed when the polling mechanism is employed, not when processing a notification that was pushed. In general, using a last-modified timestamp as the query parameter, in systems that provide it, is the cleanest way to pull only the latest content. In systems that don't have something like this, you may have to process all the records, or first scan a transaction record from which you can derive the objects that need to be updated, or come up with a clever way to capture only those items that need to be processed.
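
A minimal sketch of the mechanics, assuming a timestamp file as the persistent store (Mule's watermark feature manages this for you; the file here is purely illustrative):

```python
from datetime import datetime, timezone
from pathlib import Path

WATERMARK_FILE = Path("last_poll_watermark.txt")  # hypothetical store

def load_watermark() -> datetime:
    if WATERMARK_FILE.exists():
        return datetime.fromisoformat(WATERMARK_FILE.read_text().strip())
    return datetime.min.replace(tzinfo=timezone.utc)  # first run: take everything

def save_watermark(ts: datetime) -> None:
    WATERMARK_FILE.write_text(ts.isoformat())

def poll_once(fetch_modified_since, process) -> None:
    """Query only records modified since the stored watermark; persist the
    new watermark only once the records are processed successfully."""
    since = load_watermark()
    records = fetch_modified_since(since)
    if records:
        process(records)
        save_watermark(max(r["last_modified"] for r in records))
```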

Scoping, Transforming the Dataset, and Insert vs. Update:

Given that we covered these topics in the migration pattern post, and that the considerations here are effectively the same, please refer to that post for additional comments on those themes.

In summary, broadcast is another critical integration pattern, defined as the act of moving data from one system to one or more other systems in as near a real-time fashion as you need. It is valuable when you have data in one system and would like to have that data in other systems as soon as possible. We also walked through some things to keep in mind when building or using an application based on the broadcast pattern. Check out our integration templates that use the broadcast pattern. In the next post I will walk you through a similar set of information for the bi-directional sync pattern – stay tuned!