Messaging, aka the Not So Enterprisey Integration Patterns

Picture cool kids in startups, cranking out code as if their lives depend on it, focusing on the proverbial MVP above all else. At this stage, who cares if technical debt accumulates as fast as code gets written? It would be a waste of time and focus to try to keep the field as green as it was initially. Then the worst happens: the cool kids have it right, people love their new app and traffic starts to surge. Though strong, the duct tape that holds the application together starts to show signs of fatigue. Maintenance becomes painful, and adding new features is excruciating. The blood of the architecture that was sacrificed on the altar of time-to-market is calling for revenge.

One of the most typical architectural mishaps that come back to haunt startups is tight coupling: the whole system is a monolith where coupling manifests itself both in a temporal manner (everything is synchronous) and in a lack of abstraction in the interactions between subsystems (everything knows the intimate details of everything else).

The good news is that there is hope: the giants of times past, upon whose shoulders everything is built, have fought these problems and won. Take Hohpe and Woolf’s Enterprise Integration Patterns, for example: they discuss how messaging can be used to alleviate coupling issues. Sure enough, the “enterprise” term in the name reads as “run away!” to our startups’ cool kids. So in this post we’ll look at a few of these patterns and how they can be used beneficially in modern applications. And hopefully they will end up feeling more lovely than enterprisey!

There are 65 patterns in the EIP catalog. We’ll cover just a handful while exploring different scenarios that typical web applications face.

Publish Subscribe

With the advent of streaming APIs, whether based on long polling or web sockets, applications need to be able to reach established connections and deliver messages to them. Suppose you have multiple web servers holding established web sockets with client browsers: how do you send a message to a single web socket? To a group of them? To all of them?

Using the publish/subscribe messaging pattern makes it possible to implement such behavior: when a web socket gets established, the corresponding server-side entity can subscribe to different topics in order to receive global, group or personal messages and route them back to the socket. Whether you use a message broker (for example with JMS or AMQP), a broker-less approach (like ØMQ) or a distributed actor or process model (like Akka or Erlang), the publish-subscribe pattern allows reaching out to decoupled and distributed systems.
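To make the routing idea concrete, here is a minimal in-memory publish/subscribe sketch in Python. The topic names (`broadcast`, `group:admins`, `user:42`) and the `PubSub` class are illustrative assumptions, not part of any of the brokers mentioned above; a real setup would delegate all of this to JMS, AMQP, ØMQ or an actor system.

```python
from collections import defaultdict

class PubSub:
    """Minimal in-memory publish/subscribe broker (illustrative sketch only)."""
    def __init__(self):
        self._topics = defaultdict(list)  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._topics[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic.
        for callback in self._topics[topic]:
            callback(message)

# One server-side entity per web socket, listening on global, group and
# personal topics; the callback would route messages back to the socket.
broker = PubSub()
received = []
for topic in ("broadcast", "group:admins", "user:42"):
    broker.subscribe(topic, received.append)

broker.publish("user:42", "just for you")     # reaches a single socket
broker.publish("broadcast", "hello everyone") # reaches all sockets
```

The three granularities of delivery (single socket, group, everyone) fall out of the topic naming scheme rather than from any routing logic in the subscribers.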

Inbound Insulator

Not all inbound requests are born equal. Some need to be processed synchronously but some can be processed later. Deciding to decouple the acceptance of a request from its processing can be the single difference between a system that scales and one that doesn’t. Moreover, introducing such decoupling allows scaling the processing components independently from the request-accepting ones.

The channel adapter pattern is what we’re after here: with it, we can accept HTTP requests and adapt them into messages ready to be sent to a messaging channel. Another pattern we rely on is guaranteed delivery: once a message has been accepted and sent to a messaging channel, we don’t want to lose it! Having a strong guarantee that it will eventually be delivered to its intended recipient is paramount.
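The decoupling can be sketched with an in-process queue standing in for the messaging channel. The `accept_request` adapter and the `"202 Accepted"` response are hypothetical names; note that `queue.Queue` is in-memory only, so a real implementation would need a durable broker to honor guaranteed delivery.

```python
import queue
import threading

inbound = queue.Queue()  # the messaging channel; a real system would use a durable broker

def accept_request(payload):
    """Channel adapter: turn an accepted HTTP request into a channel message."""
    inbound.put({"type": "signup", "payload": payload})
    return "202 Accepted"  # respond immediately; processing happens later

processed = []

def worker():
    # The processing side can be scaled independently of the accepting side.
    while True:
        message = inbound.get()
        if message is None:  # sentinel to stop the worker in this sketch
            break
        processed.append(message["payload"])  # stand-in for the real processing
        inbound.task_done()

t = threading.Thread(target=worker)
t.start()
status = accept_request({"email": "ada@example.com"})
inbound.put(None)
t.join()
```

The requester gets its acknowledgement as soon as the message is on the channel, regardless of how long the actual processing takes.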

Graceful Retries

No web application is an island anymore: with the explosion of the API economy, a typical application has to integrate several external APIs. This is good news, as it means one can focus on core features while using existing services for all the ancillary tasks. The not-so-good news is that APIs can be flimsy: the provider can be down, the interface may have been carelessly changed, and so on. Therefore, an application has to account for potential failures of a remote API and the related need to retry calls to it.

Thus, instead of directly interacting with a remote API, an application can decide to interact with it via a messaging channel and rely on both the already mentioned guaranteed delivery pattern and the transactional client pattern. These two patterns combined ensure that the message representing the desired interaction with the remote API will be delivered and redelivered until the remote call succeeds.

Of course, limiting the number of retries and dealing with undeliverable messages (for example when a provider breaks its API) is essential to avoid thrashing on a request that will simply never go through. This is when the dead letter channel pattern comes into play, as the dump-truck destination of all messages that could not be delivered and need a, potentially manual, analysis of the failure cause.
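Put together, redelivery plus a dead letter channel can be sketched as follows. The `MAX_ATTEMPTS` limit and the always-failing `call_remote_api` stub are assumptions for illustration; a real transactional client would rely on the broker's acknowledgement and redelivery machinery instead of manually requeueing.

```python
import queue

work, dead_letters = queue.Queue(), queue.Queue()
MAX_ATTEMPTS = 3

def call_remote_api(message):
    # Stand-in for a flimsy provider: fails on every call in this sketch.
    raise ConnectionError("provider is down")

def consume():
    while not work.empty():
        message = work.get()
        try:
            call_remote_api(message)
        except ConnectionError:
            message["attempts"] += 1
            if message["attempts"] >= MAX_ATTEMPTS:
                dead_letters.put(message)  # dead letter channel: park it for analysis
            else:
                work.put(message)          # redeliver until the call succeeds

work.put({"body": "charge card #123", "attempts": 0})
consume()
```

After three failed attempts the message ends up on the dead letter channel, where a human (or a monitoring tool) can inspect why it could not go through.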

Messaging Backbone

Using a messaging backbone can be a powerful tool to decouple the sub-systems of an application. One advantage of such an approach is that it acts as a natural circuit breaker: if one component of the application is in pain, the rest of it can still function well. For example, if the statistics part of the application is down, the rest of the system will keep working. When the statistics system comes back online, it will catch up at its own pace without disturbing anything.

But not all interactions are one-way. What about request-reply interactions? Does it make sense to use a messaging system for them when an HTTP request would do? Yes, it does: the handling of the response can be delegated to a closure (a callback) invoked by a different thread. Instead of blocking the requester until the response is ready, as is typically done with HTTP requests (keeping in mind that there are asynchronous HTTP clients out there), the response-handling code gets executed only when the response arrives. The request-reply and return address messaging patterns are both at play in that case.
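The interplay of the two patterns can be sketched with a pair of queues: the correlation id ties a reply back to its callback, and the `reply_to` field is the return address. All names here (`send_request`, `responder`, the uppercasing "service") are hypothetical, chosen only to keep the sketch self-contained.

```python
import queue
import threading
import uuid

requests, replies = queue.Queue(), queue.Queue()
callbacks = {}  # correlation id -> closure to run when the reply arrives

def send_request(body, on_reply):
    correlation_id = str(uuid.uuid4())
    callbacks[correlation_id] = on_reply
    # Return address: tell the responder which channel to send the reply to.
    requests.put({"id": correlation_id, "body": body, "reply_to": replies})

def responder():
    # A remote service: reads a request, replies to the given return address.
    request = requests.get()
    request["reply_to"].put({"id": request["id"], "body": request["body"].upper()})

def reply_dispatcher():
    # Runs on its own thread: matches the reply to its callback via the id.
    reply = replies.get()
    callbacks.pop(reply["id"])(reply["body"])

results = []
send_request("hello", results.append)  # returns immediately, no blocking
r = threading.Thread(target=responder)
d = threading.Thread(target=reply_dispatcher)
r.start(); d.start()
r.join(); d.join()
```

The requester thread never blocks waiting for the reply; the callback fires on the dispatcher thread whenever the response shows up.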


Mouth watering yet? So is everything rosy in the messaging world? As always, there are caveats to consider. Let’s list a few and provide potential remediation strategies at the same time:

  • Running highly available message brokers in-house can be complex: this can be alleviated by using hosted messaging services like Amazon SNS/SQS, IronMQ or CloudAMQP, or by going broker-less with ØMQ.
  • Tracing what’s happening in a message-oriented distributed architecture can be daunting: the correlation identifier and message history patterns are handy if this is a concern.
  • Some message brokers don’t guarantee the delivery ordering or can deliver the same message more than once: to deal with these situations, the resequencer and idempotent receiver patterns are respective solutions to consider.
  • Message sizes are typically limited to several kilobytes: if you need to carry heavy payloads, store them in an external storage and refer to them in messages (for example with a URI).
  • The transient nature of messages can be scary (what if everything crashes?) or can even hinder auditing or statistics gathering: in that case the message store pattern will save the day!
  • It is typically impossible or non-trivial to alter in-flight messages once they’ve entered a messaging broker: if you need to perform such operations, it’s better to consider using a shared database for storing messages and polling consumers to retrieve them.
  • Testing message-oriented systems can be difficult: mine this white paper for ideas on how to achieve this.
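The duplicate-delivery caveat above is worth a quick sketch: an idempotent receiver simply remembers the ids of messages it has already handled and drops repeats. The `IdempotentReceiver` class is an illustrative assumption; in practice the seen-ids set would live in durable storage shared by all consumer instances.

```python
class IdempotentReceiver:
    """Drop duplicate deliveries by remembering already-seen message ids (sketch)."""
    def __init__(self):
        self._seen = set()
        self.handled = []

    def receive(self, message):
        if message["id"] in self._seen:
            return  # duplicate delivery: ignore it
        self._seen.add(message["id"])
        self.handled.append(message["body"])  # stand-in for the real handling

receiver = IdempotentReceiver()
# The broker delivers the first message twice; only two are actually handled.
for message in [{"id": 1, "body": "a"}, {"id": 1, "body": "a"}, {"id": 2, "body": "b"}]:
    receiver.receive(message)
```

With this in place, an at-least-once broker effectively behaves as exactly-once from the application's point of view.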

Cool Enterprise

So if you’re a cool kid coding your guts out in a hot startup or maybe a less-cool-less-kid developer dealing with architectural challenges, don’t be put off by the enterprisey sound of EIPs: take a look at them, it’s a treasure trove of architectural goodness!



2 Responses to “Messaging, aka the Not So Enterprisey Integration Patterns”

  1. This type of behavior plagues start-ups and the corporate world alike. To be more specific about the ‘behavior’ that I’m referencing, let’s call it ‘delayed knowledge’. This happens when you have someone focused on rapid delivery of a particular product or feature at the expense of long term debt.

    I’ve seen this a lot when it comes to unit testing, and I’m no exception to the rule. Initially the return on investment is low and you feel compelled to ‘cut corners’. Since the investment in unit tests pays off quickly, we’ve witnessed a fairly large change in start-ups and corporations when it comes to unit testing, but messaging is another beast, especially in the start-up world.

    When it comes to MOM related technologies, it’s harder for start-ups to really know when they should make the required changes in their architecture to introduce messaging. Applying fundamental concepts like abstractions, isolation of responsibility, etc. is key to making the transition to such a change in your platform. Sadly, these core principles are neglected “because you’re building a startup”. I think the idea of building a start-up is often viewed as ‘cut corners, be sloppy and go live!’


  2. Enjoyed the blog. In my past experience I had written some single-threaded approaches to running multi-step integration processes, and while it was simple to debug and maintain the shared log and payload, it definitely suffered when there were failures (either systemic, like the server crashing, or individual task failures), and restarts were messy.

    On the flip side I’ve also inherited the ownership of a home grown message platform and it had so many issues with stability and scalability.

    I’m looking forward to my next opportunity where I can recommend a commercial ESB platform and try to get things “more right” from the start.

    I also think it would be great for these ESB hosts to build in a more robust Exception Handling Management module. Ideally in cases where you could distribute exception resolution and have built in tracking and auditing and reprocessing functionality. This could put more power into business users for resolving data exceptions that are not systemic vs. having development/IT dealing with production support related to data cleansing.