
AI has emerged as a productivity imperative for enterprises. To reason effectively, make good decisions, and execute the right actions, AI must be supplied with high-quality data. This data includes the transactional and reference data of our business objects (customers, products, and so on), as well as the metadata describing how our organizations work with those objects.

As our AI architectures start to take shape, and we move into the world of autonomous AI agents, it’s important that we consider the role that integration plays in serving up this high quality data to increase the accuracy and productivity of AI.   

Ingest

Traditional data quality technologies like Master Data Management (MDM) and Customer Data Platforms (CDP) target business processes where clean data is required: think part catalogs and customer segmentation. These technologies are concerned with the semantics of a business object and with the capabilities to harmonize and refine a data set into those semantics. To fetch and process raw datasets, these products often come bundled with integration capabilities.
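To make the harmonization idea concrete, here is a minimal sketch of merging duplicate customer records from different systems into one canonical record. The matching rule (normalized email) and field names are illustrative assumptions, not how any particular MDM or CDP product works.

```python
# Minimal MDM-style harmonization sketch: records that share a normalized
# email are merged into one canonical record, preferring the most recently
# updated value for each field. All names and rules here are illustrative.

def canonical_key(record):
    """Match records by a normalized email address (an assumed rule)."""
    return record["email"].strip().lower()

def harmonize(records):
    """Merge duplicates into canonical records, newest values winning."""
    canonical = {}
    for rec in sorted(records, key=lambda r: r["updated"]):
        key = canonical_key(rec)
        merged = canonical.setdefault(key, {"email": key})
        merged.update({k: v for k, v in rec.items() if k != "email"})
    return list(canonical.values())

raw = [
    {"email": "Ada@Example.com ", "name": "A. Lovelace", "updated": 1},
    {"email": "ada@example.com", "name": "Ada Lovelace", "updated": 2},
    {"email": "grace@example.com", "name": "Grace Hopper", "updated": 1},
]
merged = harmonize(raw)
print(len(merged))  # → 2 canonical customers
```

A real product would apply far richer matching (fuzzy names, survivorship rules, stewardship workflows); the point is only that raw records are refined into a single agreed semantic shape.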

AI has forced a generational change in the capability of data quality solutions. Timeliness, accuracy, compliance, and provenance are all attributes of data quality that must be considered in the data sets we serve to AI reasoning engines.

As a consequence, features such as event generation, support for unstructured data, and support for large and dynamic datasets are now required. A new breed of data quality solutions has emerged, generically called data clouds. Conceptually, you can think of a data cloud as the working or scratchpad memory for AI.

Data clouds, like MDM/CDP, rely heavily on integration and incorporate capabilities often associated with integration platforms such as connectors to common enterprise systems. This overlap in integration capability forces one to consider whether this capability should come from the data cloud or dedicated middleware.

Data clouds come with prepackaged use-case support, and the integrations for those use cases are included in the box. To implement these use cases, it can be expedient to rely on the integration capability of the data cloud. For customers who want to embrace AI more holistically across their enterprises, it makes sense to implement middleware with an associated integration strategy.

Intelligent actions

Data clouds are able to initiate events based on conditions that exist within their data model. An example of this is offering real-time alerts to qualifying customers as they browse a commerce website. Similarly, we are developing AI agents that can generate events based on their reasoning. An event logically sets off an action. It is then the role of integration software to encapsulate this action.

We sometimes use the analogy that AI is the brain and integration is the nervous system. An action, initiated from an AI agent or a data cloud event, could be sending a simple email or kicking off a complicated shipping and logistics process. These integration actions can encapsulate a combination of batch integration, asynchronous events, and the execution of highly performant RESTful APIs.
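The event-to-action pattern above can be sketched as a small dispatcher: events emitted by a data cloud or an AI agent are routed to registered integration actions. The event names and handlers are illustrative assumptions, not a real product API.

```python
# Minimal sketch of integration encapsulating actions: events are routed
# to registered handlers. Event names and handler bodies are illustrative.

handlers = {}

def on_event(event_type):
    """Register a handler (an integration action) for an event type."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on_event("customer.qualified")
def send_offer(payload):
    # In practice this might call an email API; here we just describe it.
    return f"email offer sent to {payload['email']}"

@on_event("order.placed")
def start_shipping(payload):
    # Could kick off a multi-step logistics process via APIs and queues.
    return f"shipping process started for order {payload['order_id']}"

def dispatch(event_type, payload):
    """Route an event to its action; unknown events are surfaced, not dropped."""
    if event_type not in handlers:
        raise KeyError(f"no action registered for {event_type}")
    return handlers[event_type](payload)

print(dispatch("customer.qualified", {"email": "ada@example.com"}))
```

In a real deployment the handlers would be deployed integrations (batch jobs, async consumers, API calls) rather than in-process functions, but the brain/nervous-system split is the same: the event says what happened, the integration layer owns what to do about it.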

We are working to increase the level of automation in our organizations using AI agents. A key consideration is how we enable AI to know the appropriate actions to execute for any given event. An integration engine stores a catalog of all the APIs in an enterprise, together with the relationships between them. This is called an application network graph.

An AI can be trained on the application network graph so that AI agents know which actions need to be executed for a particular event. Such intelligence can also be applied within actions, removing the human-in-the-loop for actions that have traditionally required human input via a workflow engine. In this way, integration helps to increase the accuracy and speed of AI actions.

Secure, governed, and audit-capable

With all of the above in mind, we must also consider the security, governance, and auditability of the data we feed to our AI. This data should be governed to prevent leakage and incorrect usage of private and proprietary information. Additionally, users should be able to audit the data sources feeding into AI tools at any time to explain the decision-making processes of those tools. Incorrect or toxic responses should be examined based on the data in the context of the decision. Integration, via API management, provides the capabilities to secure and govern the data coming into and out of our AI frameworks for increased accuracy and explainability.
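A minimal sketch of this idea: a governance policy in front of the AI that redacts private fields and records an audit entry, so a later reviewer can see which sources and fields informed a decision. The field names and policy are illustrative assumptions, not a real API management configuration.

```python
# Sketch of API-management-style governance in front of an AI: a policy
# redacts private fields and logs an audit entry for later explainability.
# PRIVATE_FIELDS and the payload shape are illustrative assumptions.

audit_log = []

PRIVATE_FIELDS = {"ssn", "credit_card"}

def govern(payload, source):
    """Redact private fields and record what was sent and from where."""
    redacted = {k: ("<redacted>" if k in PRIVATE_FIELDS else v)
                for k, v in payload.items()}
    audit_log.append({"source": source, "fields": sorted(payload)})
    return redacted

safe = govern({"name": "Ada", "ssn": "123-45-6789"}, source="crm")
print(safe)  # → {'name': 'Ada', 'ssn': '<redacted>'}
```

In practice this policy would live in an API gateway rather than application code, so that every path into and out of the AI framework is governed consistently.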

AI evolution calls for enhanced support

As AI matures, integration and data strategies will need to evolve to support the needs of this new capability. Integration plays a key role in ingesting and serving data to AI so it can make informed decisions. Integration houses the actions that we take from these decisions. It provides metadata to increase the intelligence of AI systems, and most importantly, it helps accomplish this in a way that preserves the security and governance of our data.

While it may be expedient to think of data and integration as one, such thinking risks implementing solutions that will be unable to adapt as AI evolves over time. In the same way we rarely access database tables directly, data will require refinement before AI engines consume it. Integration is the bridge between AI and your data.