How to design message-driven and event-driven APIs

Asynchronous messaging is critical to creating a truly scalable system, where various services can communicate with each other easily, can scale up and down independently, and where one service failing won’t cause all the other services to fail. With the trend of microservices in full swing, this has become even more important. As Tim Bray from Amazon stated: “The proportion of services I work on where queues are absolutely necessary rounds to 100%.”

Messages vs events

Before we go any further, let’s cover the difference between messages and events. While closely related and often using the same architecture, there are some core differences between the two. 

If the producer must confirm that the information or command is delivered, knows who the intended recipient is, and likely wants some kind of response or action to occur, then it’s messaging. 

An event is something that happens and the service where it happens publishes it to an event stream, regardless of what actions occur after that (if any). Other services that are interested in that type of event can subscribe to receive them. There can be any number of subscribers that will receive each event, including zero.

In other words, in a message-driven system the publisher knows the intended recipients, whereas in an event-driven system the recipient decides what event sources it wants to subscribe to. The Reactive Manifesto has a good description of this.

The typical design process (aka: no process)

When you add messaging to your application, you typically piece it together as you go. First, you realize you need it because the coupling between your service and another is causing problems; maybe it’s too slow or too brittle. So you set up a message broker, like Kafka or RabbitMQ, and start sending messages to it. If it’s your online store, you send your “order” messages to an “orders” topic. Then you get your other services to consume those messages. The message formats are designed on the fly by the producer, and the consumers (aka subscribers) take what they get and deal with it.

If you want to make a change to the message format, you inform the other services about the change so they can update their code to handle it (probably through an email or a Slack message). Once the dependent services update their code, you can start publishing your new message format. Everything works, and everyone is happy… for a while.

Then a year later, another developer joins your team and is tasked with adding a new feature to one of these systems, but they have no idea how these services are communicating with each other, as there’s nothing more than the code and the working system to look at. So they have to dig through the code and the logs to figure out what the messages look like, where they are going, and where they are coming from.

Or maybe after running fine for the past year, something goes wrong in one of your services and it stops receiving messages. Where do you even start to figure out what went wrong? Did someone start sending invalid messages? Where are they coming from? Where are they going? What format should they be in?

Is there a better way?

You bet there is! A new project called AsyncAPI has sprung up to fill the need for a way to design your messaging and event APIs properly. 


AsyncAPI is a standard way to define asynchronous APIs, much like you can do for REST APIs using OpenAPI or RAML. Defining your APIs using these standards provides many benefits, starting with a well-documented contract explaining how to interact with your service. Another service can then take your AsyncAPI spec and find all the information it needs to connect, publish, and subscribe to your service.

In addition, you or the users of your service can generate really nice documentation and code in various languages. 

Now, if you want to change how your service works, you’ll update your AsyncAPI specification and share that with other services which can then generate new code to match. 

Order processing example

Let’s see a simple example in action. The following is an AsyncAPI specification with two channels you might see in an online store:
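Since the original spec didn’t survive here, this is a minimal reconstruction in AsyncAPI 2.0 syntax. The “orders” payload fields (“id”, “customer_id”, “amount”) come from the article itself; the service title and the “orders_paid” payload fields are illustrative assumptions.

```yaml
# Illustrative AsyncAPI document (a sketch, not the article's original spec).
asyncapi: '2.0.0'
info:
  title: Order Service   # assumed name
  version: '1.0.0'
channels:
  orders:
    publish:             # others publish order messages to this service
      message:
        payload:
          type: object
          properties:
            id:
              type: string
            customer_id:
              type: string
            amount:
              type: number
  orders_paid:
    subscribe:           # others subscribe to payment events from this service
      message:
        payload:
          type: object
          properties:
            id:          # assumed field: the id of the order that was paid
              type: string
            paid_at:     # assumed field: when the payment was processed
              type: string
              format: date-time
```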

This defines two channels, one called “orders” and another called “orders_paid.”

Notice the publish and subscribe lines are very similar to REST verbs like GET and POST, but for asynchronous services. Publish means that you can send messages to the channel to interact with this service and subscribe means you can receive messages from the service when a particular event happens. 

The “orders” channel is defined as a channel you can “publish” messages to, so the application will accept messages sent to that channel. The format of the messages it will accept is also defined; in this case, an object containing details about the order (the “id”, “customer_id” and “amount”) and the data types of each of those fields. If you try to send something that doesn’t match the defined payload, the message will be rejected.
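To make that rejection behavior concrete, here is a hypothetical Python sketch (not part of any AsyncAPI tooling) of the kind of check a service might perform against the “orders” payload contract:

```python
# Hypothetical payload check against the "orders" contract (illustrative only).
EXPECTED_TYPES = {"id": str, "customer_id": str, "amount": (int, float)}

def is_valid_order(payload: dict) -> bool:
    """Accept only messages whose fields and types match the 'orders' channel contract."""
    if set(payload) != set(EXPECTED_TYPES):
        return False  # missing or unexpected fields
    return all(isinstance(payload[field], t) for field, t in EXPECTED_TYPES.items())

print(is_valid_order({"id": "o-1", "customer_id": "c-9", "amount": 19.99}))  # True
print(is_valid_order({"id": "o-1", "amount": "19.99"}))  # False: missing field, wrong type
```

In practice this kind of validation is generated from the spec or handled by tooling rather than hand-written, which is exactly the point of having the contract in one place.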

The “orders_paid” channel is defined as a channel you can subscribe to, which means you can receive events on it. In this case, when a payment is processed successfully, you’d receive an event that the order was paid for. Also, every event you receive will be in the payload format defined by this contract.

Publishing AsyncAPI documents, like the ones above, creates a contract between your service and the developers using your service. They will know exactly which channels exist, what they are used for, the format of the messages, and where to publish or subscribe to them. 

What’s Next?

After you’ve defined an AsyncAPI spec for your service, you can use all the tools built for AsyncAPI. The current tools allow you to generate documentation and client libraries in various languages so others can integrate quickly and easily.

For example, if you save the AsyncAPI spec from above as asyncapi.yaml and run the following command:
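The original command didn’t survive here; assuming the AsyncAPI generator CLI (`ag`) is installed (e.g. via `npm install -g @asyncapi/generator`), the invocation would look roughly like this — the template name and flags may differ across versions, so check the generator’s docs:

```shell
# Generate Markdown documentation from the spec into ./docs (template name assumed)
ag asyncapi.yaml @asyncapi/markdown-template -o ./docs
```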

That will generate a markdown file which documents your specification.

Further, you’ll soon be able to manage the full lifecycle using AsyncAPI in platforms such as MuleSoft’s Anypoint Platform. This will let you design, discover, share, monitor and manage your events and their associated APIs with your collaborators. Also, messaging infrastructure companies such as Solace will start automatically configuring their message brokers to match your API.  


As event-driven systems become more common, having a standard way to design and define how services work and how services can interact with each other is becoming increasingly important. AsyncAPI is a big step in the right direction by helping to define the contracts between services communicating via asynchronous messaging.

Learn more about event-driven messages use cases by reading this blog about APIs and airports.  


One Response to “How to design message-driven and event-driven APIs”

  1. Nice article! Great to see more being done daily with AsyncAPI. V2.0 of the spec has definitely increased activity. Industry adoption is key to driving use, and open standards mean no vendor lock-in, which drives competition.

    I like your comparison of event brokers like Solace to queue-based messaging systems like IBM MQ, ActiveMQ, and RabbitMQ, and projects like Kafka, where you pre-define the ‘bucket’ that each event goes to – the model is closer to a queue than pub/sub.

    Beyond the hype and marketing there is a lot to understand, and I’d like to take your comparison further.

    A key difference to highlight is that traditional messaging systems and Kafka send messages to a pre-defined ‘bucket’ which is a simple string. Just like your “orders” and “orders_paid”. Consumers of the messages are pre-known or pre-defined. This is tightly coupled and does not give you flexibility.

    With fan out (multiple uses of the same event) messages may need to be copied to multiple queues (slow). To process a subset of messages in a bucket, a client has to discard the ones it does not want (expensive, slow).

    With event driven systems, events are self-describing using a hierarchical topic taxonomy. Think of an event’s topic like “orders/new/channel/customer_id/payment_id/curr/amount” or “orders/paid/channel/customer_id/payment_id/curr/amount”.

    Subscriptions are not fixed, and consumers can subscribe at design OR runtime to multiple event streams using wildcards giving true decoupling of publisher and subscriber. A few examples are to subscribe to topic “orders/new/>” for all new orders, and subscribe to “orders/*/*/customer_id/>” for all orders by a specific customer. The broker does the filtering, and the filters can be applied at runtime. I can even have “orders/>” for audit/compliance/retention.
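    As a rough, simplified sketch of the wildcard matching described above (the helper is hypothetical; real brokers’ wildcard semantics vary, so treat this as an illustration of the idea, not Solace’s actual rules):

```python
def topic_matches(subscription: str, topic: str) -> bool:
    """Simplified hierarchical-topic matching: '*' matches exactly one level,
    a trailing '>' matches one or more remaining levels."""
    sub_parts = subscription.split("/")
    top_parts = topic.split("/")
    for i, part in enumerate(sub_parts):
        if part == ">":
            return len(top_parts) > i  # at least one level remains to match
        if i >= len(top_parts):
            return False  # topic ran out of levels
        if part != "*" and part != top_parts[i]:
            return False  # literal level mismatch
    return len(sub_parts) == len(top_parts)

print(topic_matches("orders/new/>", "orders/new/web/cust42/pay7/usd/19.99"))  # True
print(topic_matches("orders/*/*/cust42/>", "orders/paid/web/cust42/pay7/usd/19.99"))  # True
print(topic_matches("orders/new/>", "orders/paid/web/cust42"))  # False
```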

    Events are stored once and available to multiple consumers (fast). Message order is maintained across the entire taxonomy (powerful, flexible).

    For a more detailed explainer on the differences see here: