This is a guest blog from a member of the MuleSoft community. Brad Cooper is a senior developer with more than 20 years of professional experience in a variety of tools and languages, including Anypoint Platform. He holds numerous MuleSoft certifications – including MuleSoft Certified Developer: Integration Professional – and hopes to add MuleSoft Certified Architect: Solution Design Specialist in the near future.
In my last post I described how and why, on a project I previously worked on, we started our journey with MuleSoft by building a monolithic application and then began down the path towards a microservices-oriented architecture. I concluded that post by discussing how we broke the project's source into smaller modules; however, building the application still produced a single, large monolith. In this post, I'll pick up the story and discuss the pros and cons of our approach, along with what we learned along the way.
Evolution 3: Microservices 1 – turning modules into standalone APIs
The next logical step in our journey was to turn our individual projects/Maven modules into standalone APIs and consume them in a more dynamic way – that is, without statically including them in our main application.
To start this process, we began creating RAML definitions for each API. In our case, we already knew the set of flows we wanted to expose as API methods, and we already knew the data that each would need to receive and return. As a result, creating the RAML was straightforward.
Avoiding duplication with RAML fragments
What we found, however, was that we ended up with many identical RAML sections added to each API, including common elements like error structures and security schemes. At the time this was unavoidable, but we did our best to manage the issue by defining these elements in separate RAML files and copying those files into each API.
Today, we have RAML fragments, which can be created in Anypoint Design Center and shared between API definitions via Anypoint Exchange. Sharing fragments in this way allows developers to reuse patterns and definitions already developed within the organization, leading to greater consistency between APIs, less duplication, and better maintainability.
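As a sketch of the pattern, a shared fragment might be a RAML 1.0 library holding a common error type (the names here are illustrative, not from our actual project):

```raml
#%RAML 1.0 Library
# common.raml - a shared fragment published to Anypoint Exchange
types:
  ErrorResponse:
    description: Standard error structure returned by all of our APIs
    properties:
      code: integer
      message: string
```

Each API definition can then pull the fragment in with a `uses:` declaration pointing at the Exchange dependency, and reference `common.ErrorResponse` instead of redeclaring it.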
Breaking apart monolithic Mule applications
Now that our APIs had RAML descriptors, all that was left to do was to consume them from the main application. In order to achieve this, they were first shared to Anypoint Exchange, after which developers simply needed to replace the old flow-ref processors in their code with HTTP connectors, identify the API to be invoked using the connector’s Search Exchange functionality, and select the appropriate HTTP method to use. By basing our RAML data types on existing objects – which were already passed between the flows – we also removed the need to add any new data transformations.
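In Mule 4 terms, the change looks roughly like the sketch below; the flow, config, and path names are hypothetical:

```xml
<!-- Before: invoking the module's flow statically, inside the monolith -->
<flow-ref name="get-order-flow" doc:name="Get order"/>

<!-- After: calling the standalone Orders API over HTTP.
     The request configuration would typically be generated when the API
     is selected via the connector's Search Exchange option. -->
<http:request method="GET" config-ref="Orders_API_config" path="/orders/{orderId}">
  <http:uri-params><![CDATA[#[{ orderId: vars.orderId }]]]></http:uri-params>
</http:request>
```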
Pros and cons of the approach
Pros:
- Smaller, more focused APIs
- APIs can be easily discovered and consumed by a wider audience, allowing for easier development of an API network
- Policies (security, SLAs, logging, etc.) can be applied to individual APIs as required
- Changes and fixes can be made to each API without rebuilding and redeploying unaffected modules
- APIs can be easily tested in isolation from each other
Cons:
- More deployable modules to maintain
- Deploying on the same Mule Runtime (i.e. on-premises) requires the creation of an additional Mule Domain to share resources such as HTTP ports
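To illustrate the on-premises point: a Mule domain project holds the shared resources in a configuration file along the lines of this sketch, assuming Mule 4 and a single shared listener port:

```xml
<!-- mule-domain-config.xml: shared resources for all apps on the same runtime -->
<domain:mule-domain
    xmlns:domain="http://www.mulesoft.org/schema/mule/ee/domain"
    xmlns:http="http://www.mulesoft.org/schema/mule/http">

  <!-- One HTTP listener on port 8081, reused by each API via config-ref -->
  <http:listener-config name="sharedHttpListener">
    <http:listener-connection host="0.0.0.0" port="8081"/>
  </http:listener-config>
</domain:mule-domain>
```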
At this stage in the project, we'd finally reached the architecture we had envisioned from the start. The finer-grained controls we could apply to our APIs and the added simplicity of managing them through their lifecycles, coupled with their greater visibility and accessibility, far outweighed any downsides of the approach.
Evolution 4: Microservices 2 – containerization
Containerization of applications has become a major trend in the tech world over the past few years, and for many reasons – although those reasons are largely outside the scope of this blog.
In our case, using a combination of a containerization platform (Docker) and a container management platform (Kubernetes) was the next logical step in our architectural journey (albeit more of a deployment architecture than an application architecture) and it has allowed us to build an API landscape that:
- Automatically scales each API both up and down as load demands
- Automatically registers new API containers with Anypoint Platform using API auto-discovery and the Anypoint Runtime Manager REST API
- Allows for zero-downtime deployments of new API versions
- Allows us to have “nodes” on different clouds (e.g. some on-premises, some in AWS)
- Integrates fully into our CI/CD pipeline – allowing full build, test, deployment, and registration of APIs after each code commit
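As one concrete example of the scaling point above, a Kubernetes HorizontalPodAutoscaler can grow and shrink an API's replica count with CPU load. This is a sketch using the current autoscaling/v2 API; the API and Deployment names are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-api   # the Deployment running the Mule app container
  minReplicas: 2       # keep at least two pods for availability
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```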
Although we were forced to compromise on our initial architecture, prioritizing speed of delivery over idealistic design, we found that the unified solution provided by Anypoint Platform – combined with a clear picture of what we wanted to achieve – gave us all the tools we needed to get started and then to continue evolving while implementing new features.
Although we’re happy with the solution as it now exists, there are, of course, always new changes that can bring further improvement and simplicity. In our case, management of our containers has been done “in-house” and has required specialist knowledge to implement. With Anypoint Runtime Fabric, MuleSoft adds this capability to its PaaS offering – allowing you to containerize and manage your applications directly from Anypoint Platform, regardless of whether your deployment target is on-premises, AWS, Azure, or a combination of these. We could use this capability to further simplify our landscape, but that’s a task for another day.