What we learned from failed SOA implementations

February 17 2016


It’s been almost 30 years since the concepts behind SOA, specifically the notion of decomposing monolithic applications into discrete functions, were first introduced. Many organizations embarked on the journey towards SOA, but results have been mixed. Though SOA has several benefits and can be a powerful architectural paradigm, many implementations have fallen short. In this blog post, we explore the primary reasons SOA, as an approach, failed to deliver on expectations.

Reason 1: Limited end user engagement

Imagine a band playing a concert without caring about the audience. Many SOA implementations fit this description. Too often, SOA projects exposed the backend details of core systems of record through data models that only domain experts could understand, making the services difficult to consume without deep knowledge of the underlying applications. When the number of services is small, consumers can simply ask the responsible colleague for help, but that approach quickly becomes unscalable. The expectation that organizations would build a portfolio of services the business could then self-serve on was, for most SOA implementations, never realized at scale.

Reason 2: Heavy-handed design-time governance

Traditional design-time governance has both favorable and unfavorable consequences. Building services in ways that keep enterprise data secure is critical, but as SOA implementations grew more complex, this governance model created heavyweight workflows that slowed delivery to a crawl, or even blocked change entirely pending a full impact analysis, which ultimately dissuaded many users from participating in the process at all. SOA implementations that succeeded invested more energy in making sure APIs and services were well designed (and potentially built for a specific purpose), thereby requiring less energy to manage and implement change once deployed in production.

Reason 3: Heavyweight deployment and operations  

Traditional SOA stacks were heavyweight and complex, assembled from many separate pieces of software. Adding to this complexity, the stack demanded extra components, such as databases and application servers, just to operate. Standards like WS-* tried (and failed) to reduce this complexity by solving for every conceivable requirement. Ultimately, this made SOA artifacts hard to operate, develop, change, and migrate: changing one artifact created multiple, often unknown, downstream impacts. Those who succeeded had hardcore integration use cases among core systems of record that belonged solely to central IT teams with slow release cycles. As integration use cases moved outside of central IT with the advent of SaaS, it became clear that these traditional SOA stacks couldn’t handle the speed of connectivity required.

Reason 4: Bottlenecks on enablement

Designing against proper architectural guidelines, and having people and processes in place to approve projects, is logical. To achieve this, many SOA programs established an ICC (Integration Competency Center) or CoE (Center of Excellence). These centers quickly became bottlenecks at scale: the centralized bodies couldn’t move quickly enough, resulting in approval processes that were lengthy and expensive. In organizations where the center was not politically strong, lines of business simply bypassed it to get things done in an uncontrolled way (e.g. by purchasing SaaS). In organizations where the center was politically strong, those projects were simply blocked or forced through the same slow waterfall process required of traditional use cases.

Reason 5: Complex standards

SOA was never intended to mean only SOAP/XML web services, but somehow that became the de facto standard. Despite the many benefits of SOAP, using it came with a high price: SOAP web services require significant investment to build and maintain, and often deliver far more capability than the service objective actually requires. REST, on the other hand, has gained popularity due to its simplicity and ease of use; in fact, most APIs created today embrace REST design principles. Unlike SOAP, REST lets you provide your audience with a stable, explorable web of resources that is ready for web scale. Similarly, XML is a verbose format not really designed for high volumes of traffic within and between data centers, which makes the simplicity and freshness of JSON more appealing.
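To make the verbosity gap concrete, here is a small sketch of the same hypothetical "get customer by id" call expressed as a SOAP request and as a REST request with a JSON response. The service, namespace, and field names are illustrative, not taken from any real API.

```python
import json

# SOAP: the intent is wrapped in an envelope, namespaces, and a body
# (illustrative request against a hypothetical customer service).
soap_request = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:cus="http://example.com/customers">
  <soapenv:Header/>
  <soapenv:Body>
    <cus:GetCustomerRequest>
      <cus:CustomerId>42</cus:CustomerId>
    </cus:GetCustomerRequest>
  </soapenv:Body>
</soapenv:Envelope>"""

# REST: the intent is carried by the URL and the HTTP verb,
# and the response is plain JSON.
rest_request = "GET /customers/42 HTTP/1.1"
rest_response = json.dumps({"id": 42, "name": "Acme Corp", "status": "active"})

# The SOAP envelope is several times larger before any payload is added.
print(len(soap_request), len(rest_request))
```

The point is not byte counts alone: the REST form is also self-describing enough to be explored with a browser or curl, whereas the SOAP form generally requires a WSDL and generated client code.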

Reason 6: One size doesn’t fit all data models

Canonical data models were meant to provide standards and keep data in sync, yet siloed systems persist. Why? Different parts of the business have different definitions of what a customer, a product, an invoice, or an order actually looks like. Successful SOA implementations are more pragmatic: they recognize that domain models are usually more applicable, letting go of the need for a single, centralized canonical data definition.
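The domain-model idea can be sketched as follows. Rather than one canonical Customer that must satisfy sales, billing, and everyone else, each domain keeps the model it actually needs, with a thin translation at the service boundary. All class and field names here are hypothetical, chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class SalesCustomer:
    # Sales cares about pipeline and account ownership.
    customer_id: str
    account_owner: str
    open_opportunities: int

@dataclass
class BillingCustomer:
    # Billing cares about invoicing details, not opportunities.
    customer_id: str
    billing_address: str
    payment_terms_days: int

def to_billing(sales: SalesCustomer, address: str, terms_days: int) -> BillingCustomer:
    # Translate at the boundary: only the shared identifier crosses over,
    # so neither domain is forced to carry the other's fields.
    return BillingCustomer(sales.customer_id, address, terms_days)
```

Each service publishes and consumes its own domain model, and the mapping functions, not a central canonical schema, define how domains relate.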

Conclusions

SOA brings several benefits, but for the reasons outlined above these benefits are often never realized. The good news for those thinking about SOA today: these failings have more to do with approaches, practices, and tools than with the core principles of SOA.

At MuleSoft, we recommend a modern approach to delivering the core principles of SOA with purpose-built design, a federated ownership model, regular audience engagement, easy-to-use tooling, a center for enablement, and domain data models. We call this API-led connectivity. Download the whitepaper to learn more.
