
Early in my career, the technology contracts I signed were fairly straightforward. I knew what I was paying for, how many seats I had and what was included. Even as data volumes grew and systems became more complex, costs were still mostly understandable. You could estimate what it took to store, move and process data. It wasn’t always obvious, but it was knowable.
That sense of clarity is fading. Watching two new data centers rise just outside my neighborhood is a physical reminder of how quickly we’re scaling compute for AI — and of the entirely different cost structure it introduces, one that’s harder to see, harder to predict and harder to control.
Everyone is focused on the upside: productivity, creativity, velocity and new capabilities. But the financial architecture beneath those gains is still immature. Most organizations don’t know the actual cost of a single AI interaction, what drives usage spikes or whether model consumption is aligned to actual value. AI doesn’t behave like the infrastructure we spent decades learning to manage. It behaves like a series of invisible inference events happening everywhere at once, triggered by anyone.
As AI shifts from experiment to core capability inside marketing systems — powering content, personalization, segmentation, decisioning and orchestration — the reckoning becomes inevitable. If marketing and operations leaders don’t build real cost literacy and visibility now, AI will become the fastest-growing, least predictable line item in the martech budget.
Why this is different — and why the data supports it
AI doesn’t just add a new line item to technology budgets — it changes how cost behaves. Recent research shows that AI costs scale faster, less linearly and with far less visibility than previous generations of technology. The 2025 State of AI Cost Management Report found that 84% of companies are already experiencing measurable gross-margin erosion from AI infrastructure, with 26% reporting a margin impact of 16% or higher. More concerning, 80% of enterprises miss their AI infrastructure forecasts by more than 25%, signaling this is not a planning failure but a structural one.
At the infrastructure level, the cost curve itself is steepening. The cost to train the most compute-intensive models has been growing at roughly 2.4x per year, driven by accelerator hardware, specialized staff, interconnects and energy demands. While most companies aren’t training frontier models directly, these economics cascade downstream through API pricing, hosted platforms and cloud infrastructure.
For most organizations, however, the real exposure comes from inference — the cost of using AI at scale. As systems become more agentic and dynamic, a single request increasingly fans out into multiple model calls, retrieval steps, tool invocations and safety checks. Research on dynamic reasoning systems shows that while these architectures improve flexibility and performance, they also introduce significant overhead in tokens, latency, energy and infrastructure, with diminishing returns as complexity increases.
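To make that fan-out concrete, here is a back-of-the-envelope sketch of how one user request can multiply into several billable model calls. Every number here — the per-call token counts and the blended price per 1,000 tokens — is an illustrative assumption, not a real vendor rate:

```python
# Illustrative only: token counts and the per-1K-token price are
# assumptions for the sketch, not real vendor rates.
PRICE_PER_1K_TOKENS = 0.002  # hypothetical blended input/output rate, USD

# One "simple" request in an agentic system fans out into several calls.
calls = {
    "planner":      1_200,   # tokens: decide which steps to run
    "retrieval":      800,   # rewrite the query, summarize retrieved docs
    "tool_call":    1_500,   # format arguments, interpret tool output
    "safety_check":   400,   # moderate the final draft
    "final_answer": 2_000,   # compose the user-facing response
}

total_tokens = sum(calls.values())
cost_per_request = total_tokens / 1000 * PRICE_PER_1K_TOKENS

print(f"tokens per request: {total_tokens}")           # 5900
print(f"cost per request:   ${cost_per_request:.4f}")  # $0.0118
print(f"cost at 1M requests/month: ${cost_per_request * 1_000_000:,.0f}")
```

Even at these modest made-up rates, a request that "feels" like one call is billed as five, and the difference only shows up when multiplied by monthly volume.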
Importantly, this cost escalation is not inevitable. Empirical studies show that better agent design and orchestration can materially reduce spend without sacrificing performance. One recent paper demonstrated a 28.4% reduction in operational cost while retaining more than 96% of benchmark performance, underscoring that architecture — not just model choice — is a primary cost driver.
What makes AI especially difficult to manage is that many of its highest costs aren’t where organizations expect them. Beyond model usage, companies routinely underestimate expenses tied to networking, data movement, storage, redundancy, energy, cooling and operational overhead.
The result is a cost structure that is consumption-based, distributed and opaque by default. AI spend does not arrive neatly packaged as a license fee. It accumulates through thousands — or millions — of invisible interactions, triggered by people, workflows and increasingly by other machines.
Dig deeper: AI productivity gains, like vendors’ AI surcharges, are hard to find
Why this becomes a problem at scale
At an individual level, AI is already delivering value. Multiple studies show meaningful productivity gains for knowledge workers — faster drafting, quicker analysis and less time spent on repetitive tasks. That impact is real, visible and easy to feel.
What’s far less common is seeing those gains translate cleanly at the organizational level. Recent research highlights a widening gap between personal productivity benefits and enterprise-wide return. McKinsey’s The State of AI in 2025 reports that while AI adoption is widespread, only a small percentage of companies have successfully scaled AI into production in ways that deliver material financial impact. Many remain stuck in pilots, fragmented deployments or narrowly scoped use cases that don’t compound into durable advantage.
At the same time, spending is accelerating. Most organizations are investing aggressively in AI infrastructure while missing cost forecasts and experiencing margin erosion. This creates a dangerous dynamic: companies feel pressure to keep up in the AI race, even when the path to value isn’t clear.
This is how cost problems emerge quietly. Teams experiment in parallel. Tools proliferate. Usage grows faster than governance. Infrastructure scales before outcomes are well understood. The result isn’t reckless behavior — it’s misalignment. Investment decisions are being made faster than organizations can gain clarity on which AI use cases deserve scale, which should remain constrained and which should be shut down entirely. The risk isn’t that AI fails to deliver value. It’s that value emerges unevenly while cost accumulates everywhere.
This is where marketing organizations sit squarely in the blast radius — and also where they have the most leverage. Marketing teams are often early adopters, high-volume users and constant experimenters, embedding AI into content, personalization, decisioning and testing long before enterprise guardrails are fully formed. Without a transparent cost structure and ownership model, what begins as local efficiency can quickly become a systemic margin issue.
There’s a familiar lesson here. Just as strong brand foundations amplify performance marketing — rather than replace it — AI infrastructure must come before AI scale. Organizations that invest in the underlying structure, governance and operating model will get more value from the tools they adopt. Those that don’t risk spending their way into AI without ever building the foundation it needs to work.
Dig deeper: Scaling AI starts with people, not technology
How marketing organizations should think about AI cost structure
In many conversations, AI is treated as the outcome. In some products, that may be true. But at the enterprise level — especially in marketing — AI is a means to an outcome, not the outcome itself. Like Excel, dashboards or experimentation frameworks, AI is a tool. And like all tools, it is neutral by nature.
What makes AI different is not intent but variability. This tool operates across many models, workflows, agents and infrastructure layers, each with distinct and compounding costs.
Managing that variability requires structure. Below are the core steps marketing, operations and technology leaders should take to build cost awareness and control before scale makes those decisions harder.
1. Map AI workflows and match tasks to models
Action: Make AI usage explicit before you scale it.
- Inventory where AI is used across content, personalization, decisioning, forecasting, experimentation and agents.
- Break workflows into discrete tasks rather than treating AI as a single capability.
- Match each task to the minimum model capability required.
- Advanced: Start estimating cost drivers per task or action — tokens, context size, steps and agent involvement.
Start with repetitive, predictable work
AI cost is easiest to estimate when tasks follow a known path. Repetitive, manual workflows make it possible to test usage, observe token and model behavior and build reliable cost profiles before scaling more complex use cases.
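One way to start building those per-task cost profiles is a simple inventory keyed to discrete workflow steps, each matched to the cheapest adequate model tier. The task names, token counts and per-tier rates below are illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical per-1K-token rates for two model tiers; real pricing varies.
RATES = {"small": 0.0005, "large": 0.004}

@dataclass
class TaskProfile:
    name: str
    model_tier: str     # minimum model capability the task actually needs
    avg_tokens: int     # observed tokens per run (prompt + completion)
    runs_per_month: int

    def monthly_cost(self) -> float:
        return self.avg_tokens / 1000 * RATES[self.model_tier] * self.runs_per_month

# Break workflows into discrete tasks rather than one "AI" line item.
tasks = [
    TaskProfile("subject-line variants", "small", 600, 20_000),
    TaskProfile("segment descriptions",  "small", 1_200, 2_000),
    TaskProfile("campaign brief drafts", "large", 4_000, 500),
]

# Rank tasks by spend so the biggest drivers surface first.
for t in sorted(tasks, key=lambda t: t.monthly_cost(), reverse=True):
    print(f"{t.name:<24} ${t.monthly_cost():>8.2f}/mo")
```

Even a crude table like this makes the "minimum model capability" conversation concrete: a high-volume task on a small model can still out-spend a low-volume task on a large one.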
2. Define the organizational infrastructure to own AI systems
Action: Establish a clear operating model for how AI infrastructure is owned and how agents are deployed, evolved and governed — with marketing empowered to operate at scale.
Suggestion: Marketing organizations should push for a shared-platform, distributed-execution model. In this model, core AI infrastructure lives with centralized technology teams. At the same time, marketing has the autonomy to deploy, iterate and scale agents on top of that foundation without needing a full-stack engineering team for every change.
- Establish clear ownership for AI capabilities, not just individual agents.
- Separate roles where possible:
- Product owner to set direction and roadmap for a class of agents or workflows.
- Business or data analyst to review usage, performance and cost patterns.
- Operational owner to maintain agents, manage versions and retire redundancy.
Without structure, agents sprawl
You get duplicated agents solving the same problem, inconsistent quality, uneven efficiency and replicated cost. Over time, the organization pays more simply because no one is accountable for consolidation, evolution or retirement.
3. Design a strong orchestration and context layer
Action: Control how AI components interact before complexity multiplies.
- Define orchestration rules: which agents do what, when tools are called and how decisions escalate.
- Invest in shared context, memory and caching so agents don’t repeatedly fetch or re-describe the same information.
- Ensure agents reason before invoking tools, rather than calling tools by default.
- Advanced: Explicitly manage agent-to-agent and agent-to-tool interactions, which are where costs accelerate fastest.
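Two of these rules — reason before invoking tools, and route retrieval through a shared cache — can be sketched in a few lines. The agent logic and the retrieval function here are illustrative placeholders, not a real framework:

```python
import functools

# Shared context cache: agents reuse fetched data instead of re-retrieving it.
@functools.lru_cache(maxsize=256)
def fetch_customer_context(customer_id: str) -> str:
    # Placeholder for an expensive retrieval step (CRM lookup, vector search).
    return f"context-for-{customer_id}"

def answer(question: str, customer_id: str) -> str:
    # Rule 1: reason first -- only invoke a tool when the question needs it.
    needs_lookup = "customer" in question.lower()
    if not needs_lookup:
        return "answered from model knowledge alone"
    # Rule 2: go through the shared cache, never straight to the source.
    ctx = fetch_customer_context(customer_id)
    return f"answered using {ctx}"

print(answer("What is a good send time in general?", "c-42"))
print(answer("What does this customer usually click?", "c-42"))
print(answer("Summarize this customer's history.", "c-42"))
print(fetch_customer_context.cache_info().hits)  # third question hit the cache
```

The point is not the toy heuristic but the shape: the decision to call a tool is explicit and auditable, and repeated context fetches collapse into one.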
Dig deeper: Why your martech still feels like a cost center — and how AI changes that
4. Establish cost visibility and ongoing monitoring
Action: Make AI cost observable at the workflow level.
- Track cost by model usage, token consumption, context size and agent steps, including infrastructure and operational costs where applicable.
- Forecast for non-linear scaling — more agents, longer context and greater autonomy.
- Surface cost signals to teams building and operating AI, not just finance or leadership.
- Use thresholds, alerts and soft limits as signals, not punishments.
- Design visibility to help people understand why something is expensive and how to do it more efficiently.
- Start with organizational responsibility: systems should nudge better behavior before expecting individuals to self-regulate.
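As one possible shape for those soft limits, here is a tracker that surfaces a signal when a workflow crosses its threshold instead of blocking it. Workflow names, budgets and per-call costs are invented for illustration:

```python
from collections import defaultdict

# Soft monthly budgets per workflow, in USD -- illustrative numbers.
SOFT_LIMITS = {"personalization": 500.0, "content-drafting": 200.0}

spend = defaultdict(float)
alerts = []

def record_call(workflow: str, cost_usd: float) -> None:
    """Accumulate spend and emit a signal (not a block) past the soft limit."""
    spend[workflow] += cost_usd
    limit = SOFT_LIMITS.get(workflow)
    if limit and spend[workflow] > limit and workflow not in alerts:
        alerts.append(workflow)
        print(f"[cost signal] {workflow} passed ${limit:.0f} "
              f"(now ${spend[workflow]:.2f}) -- review prompts and model tier")

# Simulated usage: cost is recorded at the workflow level as calls happen.
for _ in range(1_000):
    record_call("personalization", 0.30)   # 1,000 calls x $0.30 = $300
for _ in range(500):
    record_call("content-drafting", 0.45)  # 500 calls x $0.45 = $225 > $200
```

Because the limit is a nudge rather than a hard stop, the team sees why the workflow is expensive while it is still running — which is the behavior the bullets above describe.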
What is levelized cost of AI (LCOAI)?
LCOAI is a way to calculate the true cost of AI by spreading all lifecycle costs — infrastructure, inference, orchestration and operations — across the useful output it produces. Instead of focusing on licenses or token prices, it answers the question: What does one AI-powered action actually cost? This framing helps organizations compare architectures, models and workflows based on value delivered, not just usage consumed.
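Expressed as arithmetic, LCOAI is simply total lifecycle cost divided by useful output over the same period. A sketch with invented figures:

```python
# LCOAI = total lifecycle cost / useful AI-powered actions produced.
# All figures below are invented for illustration.
monthly_costs = {
    "inference (API usage)":    12_000.0,
    "infrastructure & storage":  3_500.0,
    "orchestration & tooling":   1_500.0,
    "operations & monitoring":   3_000.0,
}

useful_actions = 400_000  # e.g., personalized messages actually delivered

lcoai = sum(monthly_costs.values()) / useful_actions
print(f"LCOAI: ${lcoai:.3f} per AI-powered action")
```

Here the answer is five cents per action — and because the numerator includes orchestration and operations, not just token prices, two architectures with identical model bills can have very different LCOAI.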
5. Standardize inputs and train for efficient use
Action: Reduce variability at the point of use.
- Train teams to write precise, minimal prompts.
- Standardize reusable prompt and context patterns.
- Cache and reuse outputs when appropriate.
- Automate context retrieval rather than relying on manual repetition.
Capability creep
Capability creep happens when an agent built for a specific task starts being used for adjacent or unrelated problems simply because it kind of works. Over time, people expand what they expect from the agent instead of designing a new one, increasing complexity, cost and failure risk. The result is higher spend, poorer performance and agents that quietly drift away from what they were designed to do.
When AI becomes infrastructure
Marketing leaders sit at the center of this moment. They are often the first to operationalize AI at scale — across content, personalization, experimentation and decisioning — long before enterprise guardrails are fully formed. That position carries both risk and leverage. Teams that treat AI as operational infrastructure, not just creative acceleration, will shape how value is realized across the organization.
This isn’t a call to slow down. It’s a call to mature. The next phase of AI adoption won’t be defined by who uses the most advanced models, but by who understands their economics well enough to scale.
The post Why AI is the most unpredictable cost in the martech stack appeared first on MarTech.