
Emerging tech is messy and AI, in particular, is perhaps the messiest. In 2018, Gartner projected that by 2022, 85% of AI projects would deliver erroneous outcomes. The following year, Forbes suggested that 87% of AI implementations across all industries would fail to reach production, staying forever locked away in the figurative basement of the business. 

These projections alone are fairly concerning, but they become even more painful when set next to the broader rate of IT project failure: PMI research reports that 14% of IT projects fail. Why is the failure rate of AI projects so high, and why is it so drastically different from the rest of the IT landscape? What causes failure at such catastrophic levels?


In my previous post, I talked about the business philosophy that successful companies employ to see value from AI implementations. While that philosophy is certainly valuable, a mission statement for the smart enterprise is challenging to apply directly to an organization. In this post, I'll address the technical side of AI implementation: how API-led connectivity applies to AI, and how it can lower the high failure rate of these projects.

Why do you need an integration strategy for AI?

Before we dig too far into AI and API-led, if you're not already familiar with the API-led approach to integration, I recommend learning more about what API-led connectivity is first.

Next, I want to talk about the elephant in the room. What do integration and AI have in common? Fundamentally, any AI implementation is an integration problem and the failure rate is largely due to a failure of the integrations surrounding the AI itself. A solid integration architecture is a must for AI to be successful.

The way I think about AI and integration is like plumbing. Integration primarily concerns itself with data flow, while AI relies on that data to provide insights to the organization. AI augments the data, but without a good method of transferring data to and from the AI system, any AI implementation will fall flat. More than the model itself, the integration architecture around it predicts the success or failure of an AI project.
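To make the plumbing analogy concrete, here is a minimal Python sketch. Every name in it is hypothetical and every function is stubbed; the point is only the shape of the flow, where the integration layer moves data to the AI system and carries the insight back out, while the model itself does nothing but score.

```python
# A minimal sketch of the "plumbing" idea: the integration layer is
# responsible for moving data to and from the AI system; the model
# only consumes prepared data and produces an insight.
# All names are hypothetical and the functions are stubbed for illustration.

def fetch_order_history(customer_id: str) -> list[dict]:
    """Data in: pull raw data from a source system (stubbed here)."""
    return [{"item": "burger", "time_of_day": "lunch"}]

def score_recommendations(features: list[dict]) -> list[str]:
    """The AI system: turn prepared data into an insight (stubbed model call)."""
    return ["fries", "soft drink"]

def publish_recommendations(customer_id: str, items: list[str]) -> None:
    """Data out: deliver the insight to the consuming application."""
    print(f"Recommendations for {customer_id}: {items}")

def recommend(customer_id: str) -> None:
    # The integration flow: data in, model scoring, data out.
    history = fetch_order_history(customer_id)
    items = score_recommendations(history)
    publish_recommendations(customer_id, items)

if __name__ == "__main__":
    recommend("customer-42")
```

If the two outer steps are missing or brittle, the quality of the model in the middle barely matters, which is exactly the failure mode behind the statistics above.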

AI also struggles with problems that have already been dealt with in the integration space. Chief among these are large, monolithic model architectures. Oftentimes, an AI system is designed to solve a single problem, and as a result it is highly interconnected and serves only a single use case.

In the integration space, we’ve seen this create problems for scalability and development time. The same is true for AI systems. Monolithic models take a long time to develop, can’t be reused in upcoming projects to increase efficiency, and don’t scale effectively. 

What is API-led AI?

If you're already familiar with API-led, these ideas won't surprise you. API-led AI is a simplified, standardized way of viewing AI. The goal is to create small, reusable building blocks that you can deploy throughout your organization. For the more AI-knowledgeable reader, this might seem impossible, or at least ill-advised. How can you break an AI model up into smaller pieces? While there are certainly models that can't be broken down, many systems can.

Let's use a recommender system for a fast food restaurant as an example. Suppose I want to recommend menu items to my customers as they order, and I'd like to deploy this system for drive-thru, in-store, and mobile ordering. The traditional way to solve this is to build a separate system for each use case. The team builds a model for drive-thru, tests it, and deploys it, then starts all over again for the next system. While some of the data going into these systems might be the same, there is relevant data for drive-thru that we don't have for in-store or mobile.

What does the API-led approach look like? First, we identify the data features that are relevant to all three systems, such as current offers, time of day, and restaurant location. With these, we build a single AI system that serves all three problems. It won't do the whole job by itself, so we augment its results for each use case with smaller add-on models. The common model can then be reused for the other two use cases without modification, dramatically reducing the time to market for subsequent, related systems.
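Here is a minimal Python sketch of that composition. The feature names, scores, and adjustment rules are invented purely for illustration; what matters is the structure: one shared model built on the common features, with small channel-specific add-on models layered on top.

```python
# A hypothetical sketch of the API-led recommender described above:
# one shared base model on features common to every channel, composed
# with small channel-specific add-on models. Scores and rules are invented.

def base_recommendations(offers, time_of_day, location):
    """Shared model: scores menu items using features every channel has."""
    scores = {"fries": 0.6, "soft drink": 0.5, "salad": 0.3}
    if time_of_day == "morning":
        scores["coffee"] = 0.8  # stand-in for a real trained model's output
    return scores

def drive_thru_adjustment(scores, queue_length):
    """Add-on model: uses drive-thru-only data, such as the length of the line."""
    if queue_length > 5:
        scores["soft drink"] += 0.2  # favor quick items when the line is long
    return scores

def mobile_adjustment(scores, past_mobile_orders):
    """Add-on model: reorder bias based on the customer's mobile history."""
    for item in past_mobile_orders:
        scores[item] = scores.get(item, 0) + 0.1
    return scores

def recommend(channel, **context):
    # Every channel starts from the same shared model...
    scores = base_recommendations(
        context["offers"], context["time_of_day"], context["location"]
    )
    # ...and only a small add-on differs per channel.
    if channel == "drive-thru":
        scores = drive_thru_adjustment(scores, context["queue_length"])
    elif channel == "mobile":
        scores = mobile_adjustment(scores, context["past_mobile_orders"])
    # In-store uses the shared model as-is.
    return sorted(scores, key=scores.get, reverse=True)[:3]

print(recommend("drive-thru", offers=[], time_of_day="lunch",
                location="store-12", queue_length=7))
```

Adding a fourth channel, say kiosk ordering, would only mean writing another small add-on; the shared model is reused untouched, which is where the time-to-market savings come from.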

The better way to create a “smart enterprise”

API-led AI lets us bring the benefits of API-led connectivity to AI implementation: a future-proof foundation that accelerates development through reuse and abstraction. API-led has already proven to be an effective solution to exactly the problems AI projects need to address.

Learn more about the benefits of API-led by looking at these integration case studies.

Series Navigation: << What is a “smart enterprise”? | The 4 D’s of AI development >>