
SailPoint’s AI agent research found that 80% of organizations report their AI agents took unintended actions. Only 44% have formal governance policies in place. That 36-point gap is not a bug. It is the default operating state of most AI deployments right now. And the fix most teams reach for first makes it worse.
Why do AI agents contradict each other?
Three AI agents. One customer. One week.
Your marketing agent sends a premium positioning email Monday. Your sales agent follows with a discount offer Wednesday. Your support agent fires a win-back sequence Friday because the account went quiet.
All three had identical customer data. All three were optimizing for their own objectives. The customer, a $200,000 renewal, forwarded all three emails to your VP of sales: “Can someone tell me what’s actually going on over there?”
The data was perfect. The authority was missing.
So what does every team do next? Reinstall human review. Put a person in the loop between every AI action and every customer touchpoint. It feels responsible.
It is the most expensive non-solution available.
If a human approves every AI output before it ships, you have not automated the decision. You have automated the draft and kept the bottleneck. Within two quarters, your AI is creating more correction work than it eliminates. Your CFO is funding a babysitting layer instead of a leverage layer.

When demand exceeds capacity, “review everything” quietly devolves into “review nothing.”
Why can’t shared data solve the authority problem?
The problem is not the agents. The problem is that nobody told the agents what they own.
A CDP can tell every agent who the customer is. It cannot tell any agent what it is authorized to commit to on that customer’s behalf. You can have pristine, unified data and still get conflicting promises. The stack needs a decision layer that governs what an agent is allowed to do with the data it sees.
The composable canvas framework correctly identifies “control data” as a core layer of the modern martech stack: policies, permissions, guardrails. The architecture is right. What it does not answer is who holds the authority to act within those guardrails and under exactly what conditions.
Until those decision rights are explicit and machine-readable, control data is context, not authority.

Federal guidance on trustworthy AI is landing in the same place. Guardrails must be tested, rationales must be traceable, and human oversight must be reserved for boundary conditions rather than every output. Shared data can inform an agent. It cannot authorize one.
What does delegated authority actually require?
Delegated authority means using the POP Framework to encode three rule categories for every decision an agent might make:
- Permissions define what the agent can do autonomously and under what conditions.
- Obligations define what it must always do whenever certain triggers occur.
- Prohibitions define what it must never do, regardless of optimization pressure.
These rules cannot live in a policy document. An agent does not read your compliance handbook. They need to live in an enforcement layer that runs before any action reaches a customer. The agent queries the layer. The layer returns a pass, a flag, or a hard stop. Every decision generates a record automatically.
Think of it as the API for your company’s rulebook.
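As a rough sketch, the POP rules and the pass/flag/stop contract described above might look like the following. Everything here is a hypothetical illustration, not a real product API: the `AuthorityLayer` class, the `check` method, and the specific rules are all invented for this example.

```python
from dataclasses import dataclass, field

PASS, FLAG, STOP = "pass", "flag", "stop"

@dataclass
class AuthorityLayer:
    permissions: dict   # agent name -> set of actions it may take autonomously
    prohibitions: set   # actions no agent may ever take, regardless of pressure
    obligations: list   # trigger predicates that force escalation to a human
    audit_log: list = field(default_factory=list)

    def check(self, agent: str, action: str, context: dict) -> str:
        """Return a verdict before the action reaches a customer."""
        if action in self.prohibitions:
            verdict = STOP                      # hard stop, no override
        elif any(trigger(context) for trigger in self.obligations):
            verdict = FLAG                      # obligation fired: route to a human
        elif action in self.permissions.get(agent, set()):
            verdict = PASS                      # within delegated authority
        else:
            verdict = FLAG                      # unknown action: default to review
        self.audit_log.append((agent, action, verdict))  # every decision leaves a record
        return verdict

# Illustrative rules for the three-agent scenario:
layer = AuthorityLayer(
    permissions={"sales_agent": {"send_discount"}},
    prohibitions={"delete_account"},
    obligations=[lambda ctx: ctx.get("renewal_active", False)],
)

verdict = layer.check("sales_agent", "send_discount", {"renewal_active": True})
# An active renewal trips an obligation, so the discount routes to a human ("flag")
```

Note the ordering: prohibitions are checked first, then obligations, then permissions. That ordering is itself a design choice of the enforcement layer, and the audit log falls out for free because every query passes through one chokepoint.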
When that layer exists, the three-agent scenario plays out differently. Marketing sends the positioning email. Sales queries the authority layer before the discount, finds a flag that the account is in active renewal, and routes to a human instead of firing. Support sees the escalation flag and holds the win-back sequence.
One customer interaction. Coordinated. Coherent.
There is a subtlety most governance conversations miss. Even when authority is defined, if different agents interpret the same term differently, you still get inconsistent outputs. If marketing reads “high-value customer” as $100K lifetime spend, and support reads it as $50K active contract, authority drifts across contexts. Consistency of interpretation is a structural requirement of the enforcement layer itself.
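One way to make interpretation structural rather than per-agent is to resolve shared terms through a single registry that every agent queries. A minimal sketch, with invented names and thresholds:

```python
# Hypothetical shared-vocabulary registry: one canonical definition per term,
# versioned alongside the enforcement layer itself. The $100K threshold is
# invented for illustration.
DEFINITIONS = {
    "high_value_customer": lambda c: c.get("lifetime_spend", 0) >= 100_000,
}

def interpret(term: str, customer: dict) -> bool:
    """Every agent calls this instead of hard-coding its own threshold."""
    return DEFINITIONS[term](customer)

# Marketing and support now agree by construction:
customer = {"lifetime_spend": 120_000}
marketing_view = interpret("high_value_customer", customer)
support_view = interpret("high_value_customer", customer)
```

The point is not the specific threshold but where it lives: in one place that both agents are obligated to query, so "high-value customer" cannot drift between contexts.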
What happens when the authority layer does not exist?
If your AI agents are optimized but uncoordinated, the problem is not the data layer. It is the authority layer.
Define what each agent owns, what requires escalation, and what requires a hard stop. Encode it. Enforce it. Until that layer exists, you are not running a governed AI stack. You are running a very fast improvisation engine with premium branding.
The enforcement layer that drives this change is Decision Architecture. But a gate without underlying structure is just a wall. Delegated authority acts as the wireframe, giving technical and business leaders a shared language for defining AI requirements without getting into the weeds. It makes builders accountable and turns AI from a black box into a glass box.
That invisible cost has a name. But first, there is a data problem hiding underneath it.
The post Delegated authority is the missing layer in the AI martech stack appeared first on MarTech.