
Featured guest post on MuleSoft Blogs from Abraham Santiago, Principal Integration Consultant at WhiteSky Labs.

The API middleware integration industry is definitely not short of architectural principles, from top-down strategic initiatives like digital transformation and legacy modernization to developer-centric ones like event-driven architecture, microservices, and DevOps. But beyond those, in the fun and games of actually delivering services to enterprise customers, there are less glamorous intricacies for architects and project teams to solve. These challenges aren't customer-specific; they are everyday nuances in the middleware integration space.


What's interesting in the middleware integration professional services space is that when faced with a design challenge, we also have to consider pragmatic factors like the customer's organizational capability and processes: how mature they are and which groups are involved, not only within the software development lifecycle (SDLC) but also during ideation and maintenance. To build a future-proof implementation, we also have to observe and consider the historical evolution of the customer's business products. These are design cases where best practice can be challenged, and where practicality and even experience-based intuition play a role in decision making.

Case 1: How micro can a microservice be?

For one of our customers, we built separate ESB applications for (a) frontend/producer interfaces, (b) consumer interfaces, and (c) processes as service orchestrators, to maximize reuse and minimize deployment impact for new business products and even product changes. Focusing only on the process layer, we had a design decision to make. Option 1 is a modularly built single application per business product, with the product variances inside the application as functions. In Option 2, we break the functions/variances inside that product down into independent applications, more granular and smaller than Option 1.

Option 1: Single mother process per service
Option 2: More granular services

Purely from an architectural 'best practice' standpoint, the likely choice is Option 2, to maximize reuse. What we learned in our microservices journey is that it doesn't only benefit fast-paced organizations; it also benefits slower ones, because deploying an independent application is less risky and requires fewer transitions between 'Dev' and 'Ops.' But here's the thing: what about the practical points of development maintainability and cost savings? More applications consume more server load, which doesn't help the customer's license cost if the total application footprint takes up more server cores.

More application runtimes mean more overhead, such as loading the same libraries/dependencies repeatedly and more serialization/marshaling. We're faced with the debate of reusability vs. performance. Further, for the customer's maintainability, segregating to this level of granularity could be too much for them to swallow. Isn't a modularly built application enough? Is breaking down further into more runtime apps needed? Bear in mind that there are many other services in the actual project for them to understand and maintain: service frameworks around security, logging, notification, metadata audit, etc. Yes, Option 2 would 'eventually' be easier for the customer's development team to maintain in the long run once they get the hang of it, but that's not the case in the early transition stages. The customer's learning curve and operational readiness are crucial factors to consider. Clearly, the decision here is a closer call than initially thought.
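To make the trade-off concrete, here is a minimal, hypothetical Java sketch of Option 1's shape (the names are illustrative, not the customer's actual services): the product variances live inside one deployable as swappable functions, whereas Option 2 would promote each variance into its own independently deployed application.

```java
import java.util.Map;

// A minimal sketch of Option 1, with hypothetical names: one "mother process"
// application per business product, where each product variance is a module
// selected at runtime rather than a separately deployed service.
public class OrderProcess {

    // Each variance implements the same orchestration contract.
    interface Variance {
        String orchestrate(String payload);
    }

    // Variances live inside the same deployable as plain modules.
    private final Map<String, Variance> variances = Map.of(
        "standard", payload -> "standard-flow:" + payload,
        "express",  payload -> "express-flow:" + payload
    );

    public String handle(String varianceKey, String payload) {
        Variance v = variances.get(varianceKey);
        if (v == null) {
            throw new IllegalArgumentException("Unknown variance: " + varianceKey);
        }
        return v.orchestrate(payload);
    }

    public static void main(String[] args) {
        OrderProcess process = new OrderProcess();
        System.out.println(process.handle("express", "order-123"));
        // Under Option 2, "standard" and "express" would instead be two
        // independently deployed applications behind their own endpoints.
    }
}
```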

Case 2: Designing the Middleware's agnostic schema

A middleware schema is an internal schema. The nearest explanation may be found in SOA's 'canonical data model/schema.' This is not an endpoint payload's schema; it is the inter-API communication schema within the ESB/API middleware. We were keen to have this in the overall architecture of our implementation rather than going the pass-through route, because part of the customer's requirement is a future-proof integration solution, and the use case is many-to-many.

The objective matters to them not only for adding inbound partners, but also for freely adding or replacing backend systems in the future without throwing away what's built in the integration platform. What this means is that there is more than one inbound format and more than one outbound format per business transaction message. The challenges are: (1) we now need to design this schema to be as agnostic as possible from a product perspective, not a system perspective; (2) at that stage of the project, there was only one inbound partner and only one backend system, e.g. an ERP. That doesn't give us a variety of samples to play with and compare when composing one.

Our sensible options were:

(1) Adopt a standard design pattern like SOA's canonical modeling

(2) Look into the top 2-5 systems in this industry and study their data models

We realized very early that either option would easily trap us in a black hole of working hours. We're not fans of SOA's heavy upfront schema, which would entail a standardization exercise, and we didn't want the orchestration and parameter lookup design to be based on that schema as its contract.

We didn't choose these so-called sensible options and instead went for the easy but safe route of 'failing fast' by doing what's below, with the thinking that we would then organically grow and change the schema as the need arises. Historically, product specification changes aren't drastic in this organization. We approached the problem by not overthinking it and used the minimum-viable-product way:

  • Immersed ourselves as much as we could in the customer's business process, focusing on the data to consider, so we could convert them to fields
  • Made each business transaction message an independent schema, contrasting with SOA's canonical principle
  • Combined the 'relevant' inbound payload fields from the partner with the outbound backend system's fields
  • Added two dynamic fields as identifiers
This is the schema we came up with.
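The schema itself was customer-specific, but as a rough illustration of the shape described above, a minimal Java sketch (all field names hypothetical) might look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A hedged sketch of what one per-transaction internal schema might look like.
// Field names here are hypothetical. The idea: merge the relevant inbound
// (partner) and outbound (backend) fields, add two identifier fields, and
// keep optional data dynamic so the schema can grow organically.
public class OrderMessage {

    // Dynamic identifier fields, e.g. which partner sent the message and
    // which backend it targets, so routing isn't hard-coded per system.
    public String sourcePartnerId;
    public String targetSystemId;

    // Fields merged from the inbound partner payload and the backend's
    // expected payload for this business transaction.
    public String orderNumber;
    public String productCode;
    public int quantity;

    // JSON-friendly key/value bag for optional fields, so new partners or
    // backends can add data without breaking the schema.
    public Map<String, Object> extensions = new LinkedHashMap<>();

    public static void main(String[] args) {
        OrderMessage msg = new OrderMessage();
        msg.sourcePartnerId = "PARTNER-A";
        msg.targetSystemId = "ERP-1";
        msg.orderNumber = "SO-1001";
        msg.productCode = "SKU-42";
        msg.quantity = 3;
        msg.extensions.put("deliveryWindow", "AM"); // optional, partner-specific
        System.out.println(msg.orderNumber + " -> " + msg.targetSystemId);
    }
}
```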

We tested this design by playing with possible formats from an inbound partner using SoapUI and RAML, and regression-tested them with JMeter. We tested parallel calls, sequence numbers in different orders, nested data not in its natural location, etc., to make sure what we designed was agnostic enough. In that exercise, we made minor adjustments. And as soon as the real second inbound partner integrated, we found another set of little adjustments to our internal schema. Our plan to organically grow the schema is working, and the fact that we chose JSON helps: a dynamic key/value structure works best for optional fields. But the question remains: what if a new backend system arrives? The customer hasn't had that scenario to date.
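To give a flavor of that exercise, here is a rough, self-contained Java sketch of the parallel, out-of-order testing idea; the intake stub and all names are hypothetical stand-ins for the real API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Fire parallel submissions with shuffled sequence numbers and check that
// the (stubbed) intake still acknowledges every message. In the real test
// the stub would be an HTTP call to the middleware endpoint.
public class AgnosticSchemaStressSketch {

    // Stand-in for the middleware intake endpoint.
    static String submit(int sequenceNumber, String payload) {
        return "ack:" + sequenceNumber;
    }

    public static void main(String[] args) throws Exception {
        List<Integer> sequence = new ArrayList<>();
        for (int i = 1; i <= 20; i++) sequence.add(i);
        Collections.shuffle(sequence); // deliberately out-of-order arrival

        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<String>> acks = new ArrayList<>();
        for (int seq : sequence) {
            acks.add(pool.submit(
                () -> submit(seq, "{\"orderNumber\":\"SO-" + seq + "\"}")));
        }
        for (Future<String> ack : acks) {
            System.out.println(ack.get()); // all 20 should be acknowledged
        }
        pool.shutdown();
    }
}
```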

Case 3: Infrastructure setup – to expose or to semi-expose?

It is true that an API/ESB middleware platform is just a regular application on a Unix or Windows server, so standard enterprise best practices in firewalls, server hardening, and corporate internet policies apply. It's easy to say that network design is supposedly an infrastructure-related call, but not in middleware service consulting: we are expected to have the wisdom and breadth of experience to recommend the architecture to the customer even in this area. This case is about a hybrid implementation of API/ESB middleware. From a physical architecture standpoint, the customer has a more exposed middleware that functions as a gateway abstracting their data through APIs, and another, internal middleware that does the traditional ESB work, serving the more sensitive, mission-critical LAN systems. The dilemma lies in which network layer to deploy these in. There were two options.

Option 1: API Gateway in the cloud, On-Prem in DMZ
Option 2: API Gateway in DMZ, On-Prem in Private LAN segment

As part of the delivery team, at face value the upper hand belongs to Option 1: being nearer the cloud would give us better agility, because the API Gateway can reach any host/URL in the cloud without much security-process bureaucracy. A turnkey setup is also potentially an option if they use cloud-based IaaS like AWS EC2, or ESB products that have a PaaS option. On the contrary, having everything within the customer's network means better latency to LAN/private applications and more controlled security. Here's a breakdown of the high-level pros and cons.

Option 1 Pros: Potentially better latency to internet/cloud-based endpoints; less security bureaucracy; turnkey PaaS managed service

Option 1 Cons: Security control isn't in-house; longer latency to LAN endpoints

Option 2 Pros: Better latency to LAN endpoints; more controlled security

Option 2 Cons: Potentially longer latency to internet/cloud-based endpoints; longer transition because of security policy; both middlewares must be internally managed

Depending on where you're sitting, each item can be a good thing or not. The decision lies in weighing what is more important: speed or security? Another thing to consider is where the more critical endpoints are located; are they predominantly in the cloud, or the other way around? In this case, we did go for Option 1: the fact that the project's message broker and NoSQL database persistence endpoints are in the cloud outweighed the rest. However, we can only manage what we can measure at that point, and we bumped into a scenario where Option 1 isn't a good choice. We needed to preserve the inbound payload after synchronously processing a transaction, so we could FTP the original payload to a file server in the LAN, where the server has no public IP address. With Option 1's setup, where the inbound payload was taken by the API Gateway housed in the cloud, we were forced into an inflexible approach to the FTP requirement; it would have been cleaner had the API Gateway been in the DMZ. After encountering that one-off issue, we decided to revisit the setup before finalizing the decision.

The proof of the pudding is in the eating

Over the years, what we learned the hard way is that best practices are guides to help us design the implementation in the most logical and sensible way. However, in the integration space, there's no shortage of new scenarios and combinations thereof. You'd be surprised at how many unforeseen situations arise during delivery. At times, the practicality of the situation can sway the decision the other way, contradicting the initial impression. A customer's competency maturity, internal setup and processes, and historical behavior are some of the factors to consider before finalizing a holistic design and implementation decision. Unlike in product development, services consulting for enterprise customers can see personal experience win over statistical evidence.

What's consistent is that an agile approach is the more suitable delivery methodology. In middleware, more often than not, the specification will change as output arrives; customers tend to know what they want, or to elaborate on it, only upon seeing some output. Constant iteration and an automated regression test suite, such as JUnit, help spot design mistakes early on. The fail-fast principle lets it all come out in the wash, so long as the defect surfaces within the sprint's development stages.
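As a tiny illustration of that regression-suite idea, a JUnit 5 sketch like the one below (the mapper is a hypothetical stand-in for a real transformation step) pins down today's behavior so that a later schema or design change that breaks it fails fast, inside the sprint:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Minimal regression-suite sketch: lock in the current mapping behavior
// so an unintended change surfaces on the next test run.
class SchemaMappingRegressionTest {

    // Stand-in for a real transformation step in the middleware.
    static String mapPartnerField(String partnerValue) {
        return partnerValue.trim().toUpperCase();
    }

    @Test
    void partnerFieldIsNormalizedTheSameWayAsLastSprint() {
        assertEquals("SKU-42", mapPartnerField(" sku-42 "));
    }
}
```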

The Silver Lining

There are always two sides to the coin: for most of these 'close-call' design and architectural decision points, there's an upside to every downside. Especially in the API/ESB middleware integration space, there are many ways to deliver the integration goal the right way. Among the right ways, there's no true or false, only pros-and-cons trade-offs. Some would even bear fruit long after the implementation. What's key in these design challenges is to do adequate due diligence, like immersion or discovery exercises, to have ample data to arrive at an informed decision. A good way to avoid reinventing the wheel, in case the same or a similar challenge arises again, is to keep some form of registry/repository of our implementations, including architectures and these situational customer profiles, to serve as a knowledge center for the team.


WhiteSky Labs is a Premier Partner focused on offering 100% MuleSoft® solutions to connect APIs, SaaS, and SOA. Anypoint Platform™ is the #1 integration and API management platform, used by 35% of the Fortune 500.

Abraham has a decade of experience in middleware software and solution architecture. He has led large-scale solution deployments in the telecommunications, banking, and logistics industries, and has designed APIs and complex integration scenarios for projects, proofs of concept, and demos. He has championed integration-specific agile methodology and DevOps principles in both internal and onshore/offshore customer engagements. He currently manages 20+ brilliant WhiteSky Labs integration consultants, governing multiple concurrent projects in the Manila office.