
This is a guest blog from a member of the MuleSoft community. Brad Cooper is a senior developer with more than 20 years of professional experience in a variety of tools and languages, including Anypoint Platform. He holds numerous MuleSoft certifications, including MuleSoft Certified Developer: Integration Professional, and hopes to add MuleSoft Certified Architect: Solution Design Specialist in the near future.

Over the past few years working with Anypoint Platform, most of my time has been spent with the same organisation. During that time, the APIs we developed evolved through a variety of architectural styles, ranging from a heavy, monolithic “all-in-one” application deployed onto a single runtime to lighter, fine-grained microservices deployed into containerised runtimes.

In this series of blogs, I’m going to look back at each stage of this evolution, the reasoning behind it, and the pros and cons of each approach. Most importantly, I will describe how Anypoint Platform gave me the flexibility to start with a simple solution that met my early needs, and to improve rapidly as resources permitted.

Evolution 1: The monolith

Sometimes what we do is born out of necessity: developing the perfect system, while desirable, is not always the highest priority.

In my case, I was sent to work with a customer who had subscribed to Anypoint Platform for one year: a trial period during which they would confirm that the platform was the best fit for their needs.

This was the brief: create APIs to integrate software such as CRM, finance, and document management systems, as well as proprietary legacy systems, and to design and implement APIs to serve a new self-service portal for customers.

Only if all of that could be developed, tested, and put into production before the initial trial expired would the customer commit to the platform. Oh, and here’s the kicker: for the foreseeable future, I was to be the only MuleSoft resource on the project.

As is often the case, step one in the development was prototyping. First, an Experience API was designed in Anypoint Platform’s API Manager (nowadays this is greatly simplified using Anypoint Design Center). Using this API definition and the mocking service provided by the platform, the portal developers could begin their own development even though the APIs were yet to be implemented.

Next, a single Mule project was created using Anypoint Studio and, inside that, basic System APIs were developed using out-of-the-box connectors to prove the platform’s capability to integrate with each target system.
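
As a rough illustration, one of those proof-of-concept System API flows looked something like the sketch below. The connector configuration, flow name, and query are hypothetical stand-ins rather than the customer’s actual systems (Mule 3.x syntax, schema locations omitted for brevity):

```xml
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:db="http://www.mulesoft.org/schema/mule/db">
    <!-- xsi:schemaLocation attributes omitted for brevity -->

    <!-- Hypothetical connection details for a backend database -->
    <db:generic-config name="Finance_Database"
                       url="jdbc:mysql://finance-db:3306/finance?user=mule&amp;password=secret"
                       driverClassName="com.mysql.jdbc.Driver"/>

    <!-- Basic System API flow proving connectivity with an out-of-the-box connector -->
    <flow name="finance-system-api-get-invoices">
        <db:select config-ref="Finance_Database">
            <db:parameterized-query><![CDATA[
                SELECT * FROM invoices WHERE customer_id = #[flowVars.customerId]
            ]]></db:parameterized-query>
        </db:select>
    </flow>
</mule>
```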

Once the customer had gained confidence in Anypoint Platform’s ability to deliver, it was time to move into a full development phase.

The need to deliver in a short timeframe trumped the desire to provide a perfect design. As a result, what was initially intended only as a prototype became the first part of the final system – and the monolith was born.

The implementation followed MuleSoft’s standard API-led principles, with System APIs performing low-level interactions with backend systems, an Experience API servicing the portal, and Process APIs providing orchestration. These APIs were, however, all implemented inside the same project, with each API in its own Mule configuration (XML) file. Additionally, the Experience API exposed externally accessible endpoints, while the Process and System APIs were invoked internally via flow-ref message processors.
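
In Mule 3.x configuration terms, the layering looked roughly like the sketch below. The flow names, path, and port are illustrative only, and the real System API flows contained actual connector operations:

```xml
<mule xmlns="http://www.mulesoft.org/schema/mule/core"
      xmlns:http="http://www.mulesoft.org/schema/mule/http">
    <!-- xsi:schemaLocation attributes omitted for brevity -->

    <http:listener-config name="portal-httpListenerConfig" host="0.0.0.0" port="8081"/>

    <!-- Experience API: the only externally accessible endpoint -->
    <flow name="portal-experience-api-get-customer">
        <http:listener config-ref="portal-httpListenerConfig" path="/portal/customers/{customerId}"/>
        <flow-ref name="process-api-assemble-customer"/>
    </flow>

    <!-- Process API: orchestration, invoked internally via flow-ref -->
    <flow name="process-api-assemble-customer">
        <flow-ref name="crm-system-api-get-customer"/>
        <flow-ref name="finance-system-api-get-invoices"/>
    </flow>

    <!-- System API: low-level interaction with a single backend system -->
    <flow name="crm-system-api-get-customer">
        <!-- CRM connector operation goes here -->
    </flow>
</mule>
```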

Pros and cons of the approach

Now that we have a full application, here are some of the pros and cons of this approach.

Pros:

  • Enabled rapid development
  • Invoked separate APIs simply by adding a flow-ref component
  • Kept external contracts to a minimum: only a single API had to be designed and approved with its consumers up-front, and the internal API layers could be added and modified on the fly
  • Produced a single deployable artifact, which made for simple, fast deployments
  • Delivered the project on time, and as a result the customer became more committed to Anypoint Platform

Cons:

  • Created a single point of failure for the entire application
  • Left the Process and System APIs unavailable for consumption outside this application
  • Made it impossible to deploy fixes and enhancements to individual APIs, or to apply policies, SLAs, or monitoring to them separately
  • Exposed the team to potential code conflicts, especially when working with larger teams

Conclusion

Would I recommend choosing this architecture for your applications? Only if the situation necessitated it.

Evolution 2: The multi-module monolith

With production running well, a happy customer, and more MuleSoft resources on the team, it was time to look at the next phase. From a business perspective, the priorities were additional APIs and extended capabilities, but from a technical perspective, a key goal was to address the accumulated technical debt and start on a path to a more ideal architecture: microservices.

With new team members focused on designing new APIs, I was able to focus on the first step in the architectural evolution: breaking the project up into smaller modules.

The intended approach was to start small and follow the path of least resistance. This evolution would involve identifying the purpose of each Mule configuration file, grouping the files together by functional area (e.g. financial System API files, CRM System API files), and creating a new application/Maven project for each. The goal was to have separate API projects, leaving the Experience and Process APIs specific to the portal in their own projects.

Once individual APIs were broken out, we would still need a way to consume them from the main application. It was not our goal to wrap each API with its own RAML definition at this time, so instead, we made use of Maven and Spring.

First, the new API was added as a dependency in the main project’s Maven POM so that its contents became consumable.
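
In the main project’s pom.xml, that meant a dependency entry along these lines (the coordinates here are hypothetical):

```xml
<!-- Hypothetical coordinates for an extracted System API module -->
<dependency>
    <groupId>com.example.integration</groupId>
    <artifactId>crm-system-api</artifactId>
    <version>1.0.0</version>
</dependency>
```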

Next, we made use of Mule 3.x’s underlying Spring implementation. By using spring:import elements in our configuration files, we were able to make the Mule files in the separated project appear as though they were still in the same project. This meant that the existing flow-ref processors still resolved the target flows (and hence, no code changes were required).
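
In one of the main application’s Mule configuration files, the corresponding import looked roughly like this (the resource name is illustrative, and spring refers to the standard http://www.springframework.org/schema/beans namespace):

```xml
<spring:beans>
    <!-- Makes the extracted module's flows resolvable as though they were defined locally -->
    <spring:import resource="classpath:crm-system-api.xml"/>
</spring:beans>
```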

Pros and cons of the approach

Given the above, what are the pros and cons of this approach? On the pros side, we retained the benefits of the original monolith, along with some new ones.

Pros:

  • Allowed developers to work on separate APIs with minimal code conflicts
  • Made future improvements easier

Cons:

  • Shared most of the same drawbacks as the original monolith, since everything was still deployed as a single application

Conclusion

While this is a step in the right direction given where we started, there’s still some way to go before we reach the ideal end-state.

Next time

In the next post, I’ll describe how we achieved a microservice architecture, using Anypoint Exchange to make the services available for collaboration. Then, I’ll close out this series by looking at how containerisation allows these microservices to be deployed in a way that enhances portability, scalability, and availability, as well as discussing the role that the upcoming Anypoint Runtime Fabric will play in this space.