
The API-led connectivity approach to enterprise integration has simplified how enterprises interact with their customers. It removes the bottlenecks and complexity of integration, whether point-to-point or SOA-based ESB, increasing agility, speed, and productivity while decentralizing and democratizing access to enterprise data. 

But how can we make sure that the data we receive is up to date, not stale? Event-driven architecture (EDA) can complement API-led architecture to guarantee that we are looking at a near real-time data set, ensuring that any further action is based on the latest and greatest information available at that time.  

We’ll uncover how EDA can ease API workloads and help enterprise companies be more efficient and consistent in real time. Before we do that, let’s dive into what event-driven architecture is and the different types of events there are. 

What is event-driven architecture? 

Event-driven architecture is an architectural style that bases integration on the detection of events occurring in an enterprise organization and acts on them in near real time. Multiple applications can receive the same event, which may trigger different actions depending on the recipient application (the consumer). 

The architecture uses an asynchronous communication model. The originating application (the producer) sends event information to an event broker, which holds the event either temporarily or permanently before passing it on to the recipient applications. 
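As a minimal, in-process sketch of the producer, broker, and consumer roles (a real broker decouples producer and consumer in time and space; this deliberately simplified version only shows the fan-out):

```python
# Minimal sketch of the producer -> broker -> consumer model.
# The broker accepts events and fans each one out to every subscriber.
from collections import defaultdict

class EventBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Each consumer receives the same event and may act differently.
        for callback in self._subscribers[topic]:
            callback(event)

broker = EventBroker()
received = []

# Two consumer applications subscribe to the same topic.
broker.subscribe("customer.updated", lambda e: received.append(("crm-sync", e)))
broker.subscribe("customer.updated", lambda e: received.append(("audit-log", e)))

# The producer emits one event; both consumers are notified.
broker.publish("customer.updated", {"id": 42, "email": "new@example.com"})
```

The topic name and event payload here are illustrative; the point is the 1:many relationship between one published event and its consumers.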

3 types of events in event-driven architecture 

Events are records of activity in a system. An event can capture a change of state or simply the current state of the system. Events are immutable and are ordered in their sequence of creation. Events can be time-sensitive, meaning they are relevant for only a certain duration, which can be defined using a time-to-live (TTL). The most common way to classify events in EDA is by how they are processed. Based on processing, events can be classified as simple, complex, and streaming events. 

  • Simple events: These events are processed independently, and in most cases, in near real time. Event consumers listen to the events and process them when the events are received. 
  • Complex events: These are events where a series of events collectively results in an actionable event. They can be used to detect anomalies or identify trends. Complex events typically apply business rules that define either an opportunity or a threat. 
  • Streaming events: These are usually associated with the IoT where devices emit a constant stream of data. These streams are analyzed for patterns or are aggregated over a time window to make sense of events. 
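To make the complex-event idea concrete, here is a small sketch; the rule, threshold, window, and event fields are all hypothetical:

```python
# Sketch of complex event processing: a business rule fires only when a
# series of simple events, taken together, becomes actionable. This
# (hypothetical) rule flags three failed logins within a 60-second window.
from collections import deque

class FailedLoginRule:
    def __init__(self, threshold=3, window_seconds=60):
        self.threshold = threshold
        self.window = window_seconds
        self.events = deque()  # timestamps of recent failures

    def on_event(self, event):
        """Feed one simple event; return an alert when the rule matches."""
        if event["type"] != "login_failed":
            return None
        self.events.append(event["ts"])
        # Drop events that have aged out of the time window.
        while self.events and event["ts"] - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.threshold:
            return {"alert": "possible_brute_force", "count": len(self.events)}
        return None

rule = FailedLoginRule()
alerts = [a for ts in (0, 10, 20)
          if (a := rule.on_event({"type": "login_failed", "ts": ts}))]
```

Each individual failure is a simple event with no business meaning on its own; only the correlated series crosses the rule's threshold and produces an actionable event.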

How can EDA and API-led connectivity work together?   

EDA and API-led connectivity are two very distinct styles of architecture: 

  • EDA relies on asynchronous communication, whereas API-led is synchronous. 
  • EDA is loosely coupled, while API-led is tightly coupled. 
  • API-led is point-to-point integration, while EDA can be 1:many. 

They both have their strengths and weaknesses, and they complement one another; this makes EDA and API-led connectivity a perfect pair, providing a comprehensive solution for an enterprise organization. Here are a few examples to illustrate this point. 

1. Backend system updates 

API-led connectivity (ALC) works on a three-layered architecture with Experience, Process, and System APIs. At times, Process APIs may be overburdened by the need to call a large number of System APIs. If the requests are synchronous, this adds to the overall response time of the Experience APIs. Even with parallel processing, these System API calls still add to the overall processing demand of the system, and the response time depends heavily on the processing capabilities of the underlying systems. 

We can rationalize the number of System API calls made by Process APIs to gather relatively static data like customer information. The system that is the Single Source of Truth (SSoT) can asynchronously update all other systems with the latest data using EDA, which eliminates the need to call the CRM system for every Process API call that needs customer info. 

MuleSoft provides connectors to various systems that can help capture data changes and publish them as simple events to the event broker. MuleSoft applications can then act as event consumers, subscribing to these events and updating the end-systems.   
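The pattern can be sketched as follows (this is not MuleSoft connector code; the `publish` function and the record shapes are illustrative stand-ins for a broker client and CRM data):

```python
# Hypothetical sketch: detect data changes in the SSoT and publish them
# as simple events, so downstream systems stay current without every
# Process API calling the CRM for customer data.

def diff_records(before, after):
    """Yield change events for records that were added or modified."""
    for key, record in after.items():
        if before.get(key) != record:
            yield {"type": "customer.changed", "id": key, "data": record}

published = []
def publish(event):           # stand-in for an event-broker client
    published.append(event)

snapshot = {1: {"name": "Ada"}, 2: {"name": "Grace"}}
current  = {1: {"name": "Ada"}, 2: {"name": "Grace Hopper"}, 3: {"name": "Alan"}}

for event in diff_records(snapshot, current):
    publish(event)
```

Only the modified record (id 2) and the new record (id 3) become events; unchanged data produces no traffic, which is exactly the saving over repeated synchronous lookups.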

2. System overload and delayed processing 

Every system has a finite capacity for processing requests. If there's a sudden burst of requests, the system starts failing once it reaches peak capacity, which can lead to multiple retries that overload it further. Another scenario is an end-system that isn't available 24/7, where you'd need to retry until the system becomes available. 

MuleSoft applications can generate simple events for every request at the Experience or Process layers, which can be stored in an event broker until the end-system is ready to process them. That means you can reuse the Process APIs or System APIs built with API-led connectivity, delaying processing until end-system capacity is available. The requesting system receives an acknowledgement along with a message that the request will be processed in due time. This reduces unnecessary retries and helps decrease the load on the system.
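A minimal sketch of this store-and-forward pattern, with illustrative names:

```python
# Sketch of store-and-forward: every request is acknowledged immediately
# and held in a queue until the end-system is available, instead of
# failing and retrying against an overloaded or offline backend.
from collections import deque

class DeferredProcessor:
    def __init__(self):
        self.queue = deque()       # stand-in for the event broker
        self.processed = []

    def accept(self, request):
        """Called at the Experience/Process layer; returns at once."""
        self.queue.append(request)
        return {"status": "accepted", "message": "will be processed in time"}

    def drain(self, end_system_available):
        """Run when the end-system has capacity (or comes back online)."""
        while end_system_available and self.queue:
            self.processed.append(self.queue.popleft())

proc = DeferredProcessor()
acks = [proc.accept({"order": n}) for n in range(3)]   # burst of requests
proc.drain(end_system_available=True)                  # later, at capacity
```

The caller gets its acknowledgement immediately on `accept`; the actual work happens in `drain`, decoupled from the burst that produced it.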

3. Streaming data

We are surrounded by IoT devices which capture useful information and produce data streams. Streaming data can also come from other sources. API-led connectivity cannot effectively manage this constant stream of data. 

A better approach is to utilize an event-driven architecture. A distributed streaming platform can be used to build real-time streaming data pipelines. The captured data can then be processed using MuleSoft streaming capabilities. MuleSoft provides various ways you can connect to and work with your preferred distributed streaming platform. 
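The windowing idea can be sketched as a tumbling-window average over a stream of timestamped readings; the window size and field names are illustrative:

```python
# Sketch of windowed stream processing: readings from IoT devices are
# aggregated over fixed (tumbling) time windows to make sense of the
# raw stream instead of handling each reading individually.
from collections import defaultdict

def tumbling_average(readings, window_seconds=10):
    """Group timestamped readings into windows and average each one."""
    buckets = defaultdict(list)
    for r in readings:
        buckets[r["ts"] // window_seconds].append(r["value"])
    return {w: sum(v) / len(v) for w, v in sorted(buckets.items())}

stream = [{"ts": 1, "value": 20.0}, {"ts": 4, "value": 22.0},
          {"ts": 12, "value": 30.0}, {"ts": 18, "value": 34.0}]
averages = tumbling_average(stream)
```

In a real pipeline the same aggregation would run continuously on a distributed streaming platform rather than over an in-memory list, but the per-window reduction is the same.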

There are various other scenarios where EDA can support API-led connectivity. Carefully evaluate the pros and cons of both architectures for your particular use cases and see which makes more sense. But before you choose, let's examine the common pitfalls that can derail your journey with EDA. 


What are the downsides of event-driven architecture? 

EDA has been around for a while, but its adoption in enterprise organizations falls short of its potential. The reason is a set of challenges that restrict how EDA is utilized. 

Standardization and cataloging 

Event-driven architecture long existed without clear standards for the messaging infrastructure. This limited the use of EDA to individual groups within the enterprise, as there was no common language between teams for how events were defined. 

Even within a single team, supporting EDA was challenging. AsyncAPI has bridged the gap: it is a standard way to define asynchronous APIs, similar to what OpenAPI or RAML provides for REST APIs. It defines a standard contract that tells users how to interact with event APIs, how to connect, which channel to subscribe to, and what the message format is. 
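For illustration, a minimal AsyncAPI contract might look like the following; the channel and field names are hypothetical:

```yaml
asyncapi: 2.6.0
info:
  title: Customer Events API
  version: 1.0.0
channels:
  customer/updated:
    subscribe:
      message:
        payload:
          type: object
          properties:
            id:
              type: string
            email:
              type: string
```

From a contract like this, a consumer team can see which channel to subscribe to and the shape of the message payload without reading the producer's code.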

Another issue with event-driven APIs is discovery. Traditional API management does not support event APIs due to the lack of standardization, which is why cataloging has always remained a challenge. With the advent of asynchronous APIs, this has changed. You can publish your AsyncAPI definitions to Anypoint Exchange, which can now act as an SSoT for your synchronous (REST) and asynchronous (event-driven) APIs.   


Complexity 

The decoupling of event producers from event consumers means you can update multiple consumers at the same time. It also means you must track that every system processing the update has actually processed it. You also need to watch for event loopbacks, i.e. event producers being updated by their own events through event chaining. Monitoring and observability become a challenge. Architects need to factor this in while designing and should avoid overusing EDA.


Governance 

Governance has been an Achilles' heel for both API-led connectivity and EDA. In many enterprises it is not implemented at all; where it is, it's often manual, which makes it error-prone. Anypoint API Governance overcomes this challenge by not only allowing developers to conform to governance rules at design time, but also automatically checking for compliance and notifying the relevant stakeholders. You can also incorporate governance checks into your CI/CD pipeline.


Eventual consistency 

With multiple event consumers processing a given event at varying speeds, the overall system might not always be in sync. Once all systems have processed the event, however, the system reaches consistency: it is eventually consistent. 


While API-led connectivity facilitates customer interactions in B2B and B2C integration models, EDA often serves application-to-application (A2A) integrations better. It can reduce the hops required to gather information from various applications by asynchronous means, updating application data in near real time to avoid stale data. 

EDA can be a great addition to an enterprise architecture on its own or in tandem with API-led connectivity, especially if you can manage the complexity that comes with using EDA and clearly define when to use which or both together.