People often quote the idea that insanity is doing the same thing repeatedly and expecting a different result. Yet if a person puts the same prompt into the same LLM 10 times, they may well get 10 different answers. Does that mean AI is introducing us to a new era of insanity?
Most enterprises have now crossed the first threshold of generative AI adoption; they can create content, code, and insights on demand. The next frontier is confirmation. In a world where agents make decisions and invoke systems, the question isn’t “what can the model do?” but “how do we know it’s right?”
Building on the ideas from Weaving Trust into the Agentic Enterprise, we’ll explore the next step in the evolution of agentic systems: verification. In earlier discussions, we looked at how agents perceive and act within their environments. Now the focus shifts to how they verify their work, and to the mechanisms that turn trust into measurable, explainable, and auditable truth across an interconnected network of agents.
That shift marks the beginning of the trust-to-truth era of agentic architecture.
From prompt engineering to trust engineering
Early adoption centered on tuning prompts and guardrails. Today, the focus is on designing feedback systems that can measure and prove correctness before an action is taken. These feedback systems are known as validator loops: lightweight orchestration patterns that check an agent’s outputs against enterprise truth, policy, and confidence thresholds before anything leaves the platform.
At a conceptual level, the process is simple (see the sketch after this list):
- Plan and act: The agent or LLM produces a result.
- Validate: Specialized tools check that result against APIs, data, or rules.
- Score and decide: The orchestrator aggregates those checks and applies a threshold (e.g. proceed only if confidence ≥ 0.8).
- Iterate or execute: If confidence is low, the agent regenerates or requests clarification; if high, the action proceeds safely.
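To make the control flow concrete, here is a minimal Python sketch of the loop. Everything in it is illustrative: the function names, the retry budget, and the min-based aggregation are assumptions; only the 0.8 threshold comes from the steps above.

```python
# Minimal sketch of a validator loop. All names are hypothetical: a real
# deployment would invoke validators exposed as MCP tools and read its
# threshold from orchestration config rather than hard-coding it.
from typing import Callable

CONFIDENCE_THRESHOLD = 0.8  # proceed only if confidence >= 0.8
MAX_ATTEMPTS = 3            # assumed retry budget before escalating

def validator_loop(
    generate: Callable[[str], str],            # the agent/LLM step
    validators: list[Callable[[str], float]],  # each returns a 0..1 score
    prompt: str,
) -> str:
    feedback = prompt
    for _ in range(MAX_ATTEMPTS):
        # 1. Plan and act: the agent produces a candidate result.
        result = generate(feedback)
        # 2. Validate: check the result against APIs, data, or rules.
        scores = [check(result) for check in validators]
        # 3. Score and decide: aggregate conservatively (weakest link).
        confidence = min(scores)
        # 4. Iterate or execute.
        if confidence >= CONFIDENCE_THRESHOLD:
            return result  # high confidence: the action proceeds
        # Low confidence: regenerate with the failed result as context.
        feedback = f"{prompt}\n\nPrevious attempt failed validation:\n{result}"
    raise RuntimeError("No validated result; escalate for clarification.")
```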
What’s most interesting to me about this pattern is that it doesn’t require retraining large models. It wraps them in governed feedback logic that continuously earns trust.
How MuleSoft Agent Fabric enables the pattern
MuleSoft Agent Fabric is built to support this model through composable layers:
- Agent orchestration: An Agent Broker orchestrates the plan-act-validate cycle, coordinating agent-to-agent (A2A) and agent-to-tool (MCP) calls. YAML-based configuration defines which validators run and in what order.
- MCP tools: Expose validation logic as callable microservices (sketched after this list). Some are deterministic (schema, policy, entitlement checks); others can use smaller verifier models for semantic or factual scoring.
- Governance: Enforce policies at the edge with a gateway, applying schema compliance, data masking, and minimum-confidence headers before responses leave the network.
- Systems of record: ERP, CRM, or PLM systems that act as the sources of truth for validators to query.
- Visualization and monitoring: Provide traceability for what was validated, with which data, at what confidence.
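As one concrete way to expose a deterministic check, the sketch below publishes a data-masking validator as a tool using the FastMCP helper from the open-source MCP Python SDK. The server name, tool, and scoring scheme are hypothetical, not part of Agent Fabric’s actual configuration surface.

```python
# Sketch of a deterministic policy validator exposed as an MCP tool.
# The server name, tool, and scores are illustrative assumptions.
import re

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("policy-validators")

@mcp.tool()
def check_data_masking(text: str) -> dict:
    """Deterministic check: fail any output containing unmasked emails."""
    leaked = re.findall(r"[\w.+-]+@[\w-]+\.\w+", text)
    if leaked:
        return {"verdict": "fail", "confidence": 0.05, "evidence": leaked}
    return {"verdict": "supported", "confidence": 0.99}

if __name__ == "__main__":
    mcp.run()  # serve the tool so an orchestrator can call it
```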
Together, these layers form a trust fabric: an ecosystem where agents can verify their own work through modular, governable validation steps.
An example in action
Imagine a service agent that replies: “Your order for part #WIDG-X was billed at $1.00.” Before that message is sent, the Agent Broker automatically invokes a validator tool that queries the ERP system.
- If the ERP shows $1.05 and the claimed $1.00 falls within a ±10% tolerance, the validator returns supported (0.92).
- If it finds no matching record, it returns not enough info (0.40).
- If it finds a contradiction, it returns fail (0.05).
The Agent Broker aggregates those results, compares them to enterprise thresholds, and only releases the response once it’s validated. The same pattern works for price checks, entitlements, compliance statements, or any fact that must be true before an agent acts.
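The verdict logic in this example is simple enough to show directly. The sketch below reuses the labels, scores, and ±10% tolerance from the scenario; the function name and hard-coded inputs are illustrative.

```python
# Sketch of the price-check verdicts above; names are illustrative.
def validate_billed_price(claimed: float, erp_price: float | None,
                          tolerance: float = 0.10) -> tuple[str, float]:
    if erp_price is None:
        return ("not enough info", 0.40)  # no matching record in the ERP
    if abs(claimed - erp_price) <= tolerance * erp_price:
        return ("supported", 0.92)        # e.g. $1.00 vs. $1.05 within ±10%
    return ("fail", 0.05)                 # contradiction with the ERP

# Broker-side decision: compare the score to the enterprise threshold.
verdict, confidence = validate_billed_price(claimed=1.00, erp_price=1.05)
print(verdict, confidence, confidence >= 0.8)  # supported 0.92 True
```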
MuleSoft Agent Fabric already provides the core building blocks for validator loops:
- Orchestration via the Agent Broker
- Externalized validation through MCP servers
- Policy enforcement through Flex Gateway
Enterprises are now beginning to layer their own domain-specific validators: for example, verifying claims against ERP, confirming entitlements from Salesforce, or enforcing industry compliance rules.
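One way to picture that layering is as a registry of domain validators the broker consults before releasing an output. The sketch below is generic: the registry shape, validator names, and constant scores are assumptions, not Agent Fabric configuration.

```python
# Generic sketch of layering domain-specific validators; all names,
# the registry shape, and the constant scores are assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Validator:
    name: str
    check: Callable[[str], float]  # returns a 0..1 confidence score
    required: bool = True          # required checks gate the release

def release_allowed(registry: list[Validator], output: str,
                    threshold: float = 0.8) -> bool:
    """Run every validator; each required check must clear the threshold."""
    return all(v.check(output) >= threshold for v in registry if v.required)

# Hypothetical domain validators an enterprise might layer on.
registry = [
    Validator("erp-claim-check",      check=lambda out: 0.92),
    Validator("sf-entitlement-check", check=lambda out: 0.88),
    Validator("industry-compliance",  check=lambda out: 0.95),
]

print(release_allowed(registry, "Order #WIDG-X was billed at $1.00."))  # True
```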
Why this matters for CIOs
Validator loops solve the biggest barrier to scaling AI safely: governance you can measure.
- Accuracy: Each output links to its evidence.
- Compliance: Every exchange respects policy and data boundaries.
- Auditability: Every step is observable and explainable.
- Flexibility: Business rules evolve through validators, not model retraining.
Enterprises move from trusting models to verifying outcomes, the architectural equivalent of continuous testing for AI.
From composability to verifiability
In the composable era, APIs connected systems. In the agentic era, validators connect truth. MuleSoft Agent Fabric makes it possible to turn each API and policy into a living validator that agents consult before acting, shifting integration from data movement to truth orchestration.
LLMs changed everything, but the fastest enterprise innovation is unlikely to come from the next generation of models. It will come from building the most trustworthy agentic architecture.
Validator loops are the backbone of that trust: the mechanism by which agents not only reason but also prove they’re right. They mark the moment when generative AI stops just talking and starts being accountable.