Many customers I meet are either evaluating or beginning their implementation of microservice architectures. Some of these customers are coming off big-bang projects that have failed to replace large legacy assets.
For these folks, breaking up the monolith in one fell swoop is too hard; breaking it up piece by piece is the way to go. Microservice architectures have emerged as a viable pattern for breaking up the monolith, and as microservices have evolved, new technologies have emerged to solve the problems of deploying microservices at scale. One such technology is the service-mesh. In this post, let’s explore the relationship of the service-mesh to existing integration technologies.
As we begin to scale up a microservice architecture, problems of scale emerge. Application developers increasingly need to integrate microservices with one another, and the complexity of the interdependencies between these services means that application development frameworks, and their resulting runtimes, must manage and monitor large numbers of services. This is challenging.
To address these challenges, we have seen a revolution in layer 7 technologies that support functions such as load-balancing, service discovery, and SSL termination. From our old friend, the hardware load-balancer, we have moved to software load-balancers, then to in-process load-balancers, and now to side-car load-balancers.
Traditional hardware and software load-balancers represented a single point of failure. In-process load-balancers solved this problem, since each application has access to its own load-balancer within the application framework. The drawback of this approach is that in-process load-balancers require a separate implementation for each language, which limits their ability to support heterogeneous application architectures. Side-car load-balancers have emerged to address this issue.
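To make the trade-off concrete, here is a minimal sketch (in Python, purely illustrative; the class name and endpoints are hypothetical) of a client-side, round-robin in-process load-balancer. Because this logic lives inside the application process, every language in a heterogeneous estate needs its own port of it.

```python
import itertools

class InProcessLoadBalancer:
    """Client-side round-robin load-balancer embedded in the application.

    Every application instance carries its own balancer, so there is no
    single point of failure -- but each language needs its own
    implementation of this logic.
    """

    def __init__(self, endpoints):
        self._endpoints = list(endpoints)
        self._cycle = itertools.cycle(self._endpoints)

    def next_endpoint(self):
        # Rotate through the known endpoints; real implementations also
        # track health checks, weights, and connection state.
        return next(self._cycle)

# Hypothetical service instances, as if discovered from a registry.
lb = InProcessLoadBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
targets = [lb.next_endpoint() for _ in range(4)]
```

The round-robin wraps around after the third call, which is the essence of the pattern; everything else a production balancer does (health checking, retries) is omitted here.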
A side-car load-balancer is a dedicated local proxy that an application can call, via standard network APIs, for layer 7 services. While side-cars are not as performant as in-process load-balancers, they afford a level of abstraction that allows for language independence. In this model, each application instance has its own load-balancer, and these individual side-cars are managed by a centralized control plane. The resulting runtime and control plane are collectively known as a service-mesh.
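The side-car idea can be sketched as follows: the application sends every request to a local proxy, and the proxy does discovery and load-balancing on its behalf, so the application's language no longer matters. This is a simplified illustration, not a real mesh implementation; the registry format and transport callable are assumptions, and real side-cars (Envoy, for example) also handle TLS, retries, and telemetry.

```python
import itertools

class Sidecar:
    """Illustrative sketch of a side-car proxy's routing core.

    The application talks only to this local process; the side-car
    resolves the target service and balances across its endpoints.
    """

    def __init__(self, registry, transport):
        # registry: service name -> list of endpoints (hypothetical format,
        # as if pushed down by the mesh's control plane).
        self._cycles = {name: itertools.cycle(eps) for name, eps in registry.items()}
        self._transport = transport  # callable(endpoint, request) -> response

    def handle(self, service, request):
        endpoint = next(self._cycles[service])  # pick the next upstream
        return self._transport(endpoint, request)

# Hard-coded routing table and a stub transport for illustration.
registry = {"orders": ["10.0.1.1:9000", "10.0.1.2:9000"]}
fake_transport = lambda ep, req: f"{ep} handled {req}"
proxy = Sidecar(registry, fake_transport)
reply = proxy.handle("orders", "GET /orders/42")
```

The key design point is that the routing table is injected from outside: in a real mesh the centralized control plane distributes it to every side-car, which is what makes fleet-wide policy changes possible without touching application code.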
Layer 7 load-balancers have always occupied an uncomfortable space between integration and application, neither one nor the other, and have typically been the domain of operations staff. As application development is revolutionized by microservices and DevOps, the traditional domain of the layer 7 load-balancer has expanded into the service-mesh, and responsibility for this functionality is migrating from operations staff to developers.
Similarly, from the integration side of the house, we have seen an evolution. Managing services at scale has been a challenge in large integration deployments for some time. While some application developers may imagine integration developers still slogging it out on their centralized ESBs, the reality is that modern integration architectures have evolved into federated platforms for exposing and consuming business services.
The concept of an application network is a prevalent pattern for developing and managing services at scale. An application network is a way to connect applications, data, and devices through APIs that expose some or all of their assets and data on the network. That network allows consumers in other parts of the business to discover and use those assets.
Once these assets are deployed, they need to be monitored and managed; an application network includes services for monitoring the security and availability of the network. An application network will likely employ layer 7 services in its runtime fabric to help support application delivery. These services may be provided through traditional load-balancers or more modern service-mesh frameworks, or a combination of both.
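The publish-and-discover model described above can be sketched in a few lines. This is a toy catalog, not any vendor's API: the class, method names, and URLs are all hypothetical, and real application network platforms layer policies, security, and analytics on top of this basic idea.

```python
class ApiCatalog:
    """Illustrative sketch of an application network's discovery layer.

    Teams publish assets (APIs) with metadata; consumers elsewhere in
    the business search the catalog and reuse what they find.
    """

    def __init__(self):
        self._assets = {}

    def publish(self, name, base_url, tags):
        # Advertise an asset on the network with searchable metadata.
        self._assets[name] = {"url": base_url, "tags": set(tags)}

    def discover(self, tag):
        # Return the names of assets advertised under a given tag.
        return sorted(n for n, a in self._assets.items() if tag in a["tags"])

catalog = ApiCatalog()
catalog.publish("customer-api", "https://api.example.com/customers", ["crm", "customers"])
catalog.publish("orders-api", "https://api.example.com/orders", ["orders", "customers"])
found = catalog.discover("customers")
```

The value of the pattern is in the reuse: a team building a new experience queries the catalog first, rather than building yet another point-to-point integration.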
As we can see, a service-mesh and an application network are similar in that both are concerned with managing heterogeneous services at scale, so it is worth considering how they differ. One way to think about it is that the service-mesh is the internal workings of a specific application or application stack: it operates at the application level.
In contrast, an application network operates at the enterprise, or even inter-enterprise, level. In a large enterprise or extended supply chain, many applications need to interoperate to create business processes and user experiences, and some of these applications may be deployed on the public cloud and operated by third parties. An application network provides visibility at the enterprise level, rather than at the more granular application level.
The concerns and focus areas of a service-mesh and an application network also differ. On the service-mesh side, application developers are concerned with quickly deploying component software into a complex environment, and with doing so in a way that provides high availability and performant operation of the application. Its concern is operational, and its focus is on network functions. An application network is more analogous to microservices in that it involves the ability to develop applications rapidly, and to provide organizational agility, through the rapid reuse of services. Its concern is business-level, and its focus is on application logic and choreography.
This contrast highlights the complementary nature of service-meshes and application networks. While some overlap exists, to break up their monoliths enterprises will need both frameworks for reuse and mechanisms for managing applications at scale. This is a rapidly evolving area of enterprise architecture, and moving forward, enterprise deployments will require a combination of application network and service-mesh functions to deliver transformational change to their organizations.