Advertising in AI is a trust experiment marketers can’t ignore

The most significant advertising moment of 2026 happened on the second Sunday in February. It wasn’t the most popular Super Bowl spot or the most cinematic brand anthem. It was a pointed message embedded in one of the game’s most self-aware ads.

In attempting to position itself against a rival, Claude revealed a line the industry may be approaching faster than it realizes: What happens when artificial intelligence platforms begin making money from advertisers?

Claude’s maker, Anthropic, built its spots around a simple promise: “Ads are coming to AI. But not to Claude.” The humor worked because it dramatized an uncomfortable future — a vulnerable question interrupted by a sneaker pitch, a relationship concern met with a dating app promotion, startup advice followed by a payday loan offer.

The campaign differentiated Claude from ChatGPT, at least for now. But the subtext was larger than competitive positioning. The human-AI relationship is evolving.

The ads referenced OpenAI’s decision to test advertising in ChatGPT. Not inserted into responses, as the satire suggested, but displayed below answers for users on free or entry-level plans. The placements are labeled “sponsored” and, according to OpenAI leadership, don’t influence outputs.

On the surface, that seems straightforward. But AI isn’t experienced like a billboard or a banner. It is experienced as a conversation. As assistance. Increasingly, as companionship. That context changes the stakes.

The Facebook echo

The tension was crystallized in a recent New York Times opinion piece by former OpenAI researcher Zoë Hitzig titled “OpenAI Is Making the Mistakes Facebook Made. I Quit.”

She acknowledges a simple economic truth: AI is expensive to run and advertising can be a critical revenue stream. But she warns of something more profound — the ethical tremors that occur when monetization models begin to ride on patterns of human thought.

We’ve seen this movie before. In its early years, Facebook promised users meaningful control over their data and even the ability to vote on policy changes. Those commitments faded as advertising revenue surged. Financial incentives reshaped the product. The product reshaped behavior. Trust dissolved slowly and perhaps imperceptibly. 

Which is why, even if OpenAI insists ads and answers won’t cross streams, the shift itself matters. It has opened the barn door and is leading the horse out by the reins. Once advertising gets its hooves in the dirt, it tends to find purchase. (No pun intended.)

Trust isn’t just about privacy

Why is this so critically important? Because trust isn’t merely a privacy policy. It’s an expectation — the emotional contract users believe they are entering when they type something personal into a machine.

In my book “Appreciated Branding,” I argue that brands earn trust when their intent is unmistakably aligned with human needs, not when they quietly repurpose those needs into leverage for commerce.

The moment a platform transforms empathy-seeking inputs into advertising adjacency, the emotional math changes. Advertising in AI exposes a monumental cultural fault line: Are AI tools environments for honest assistance or conduits for monetization?

In traditional ecosystems like search engines, social feeds or television, we have a contextual contract. Ads live on the perimeter. We expect and compartmentalize them.

But in AI chat, the perimeter dissolves. The interface is the conversation, like talking to a therapist with a side hustle selling comfort animals.

There’s no sidebar or separate ad distraction. The experience is immersive and relational. If users begin to feel that their intimate questions are underwriting someone else’s revenue, the safe space becomes contaminated — and contamination spreads faster than clarification.

From an appreciated brand perspective, this is a hard bell to unring. Trust, remember, isn’t owned. It’s reinforced repeatedly through signals of both alignment and restraint.

Brands that operate with empathetic transparency understand that short-term monetization gains can create long-term relational losses. Once users suspect ulterior motives, they pull back, not just behaviorally but emotionally.

Embedding ads within an interface where users share personal concerns risks shifting AI’s identity from trusted helper to commercial shill. Trust exits the chat, and something far more expensive than infrastructure breaks.

The business case for restraint

To be clear, the business pressures are real. AI infrastructure is enormously expensive. Free tiers need support. Investors expect returns. Advertising is a proven, scalable monetization engine.

But here’s the strategic question marketers should be asking: What if monetizing attention inside AI erodes the very trust that makes AI valuable?

If users begin to believe their personal inputs are indirectly fueling commerce, they will adapt, self-censor, withhold context and seek paid alternatives or new platforms promising neutrality.

In other words, the data well runs dry. Advertising inside AI, whether in the chat or around it, could create a subtle but devastating behavioral shift: less honesty, less vulnerability, less richness in interaction. Ironically, that reduces the very effectiveness advertisers hope to gain.

A different path for brands — and the counterargument

This is where marketers need to think differently. If AI platforms can remain environments where people feel understood without being sold to, brands have a significant opportunity to earn trust. Allow them to be found through AI visibility, not paid AI placement.

AI already rewards brand clarity, utility and problem-solving partnerships that preserve user agency. That’s the appreciated branding principle at scale: solve first, then sell as a byproduct of solving.

Platforms that maintain a visible firewall between assistance and monetization may discover something counterintuitive. Preserved trust increases lifetime value. Brands that respect the emotional gravity of AI interactions may earn deeper loyalty than those chasing opportunistic impressions. 

History complicates this narrative. I’m old enough to remember when consumers said they would rather walk outside naked to get their newspaper than put their credit card number into a website.

We adapt. Norms evolve. What feels invasive today can become ordinary tomorrow. It’s entirely possible that clearly labeled, well-regulated advertising below AI responses becomes culturally acceptable. That users draw their own boundaries and move on. That trust recalibrates rather than collapses.

But the difference here is intimacy. Credit card data was transactional. AI conversations are relational. Once trust is fractured, it doesn’t reassemble as easily as digital payment habits.

The real experiment

Advertising in AI isn’t inherently immoral. It may even be economically necessary. But it’s a trust experiment, and trust experiments don’t offer unlimited retries.

If AI platforms miscalculate and users feel their vulnerability is being quietly monetized, the damage will extend beyond one company’s quarterly earnings. It will reshape expectations of human-technology interaction and shift the cultural agreement from “this tool is here to help me” to “this tool is here to extract value from me.”

Once that agreement shifts, rebuilding it will be far more expensive than any data center ever built. For marketers watching this unfold, the lesson is bigger than AI. Trust isn't a feature. It's infrastructure. Selling the ground beneath it is a one-way transaction.

The post Advertising in AI is a trust experiment marketers can’t ignore appeared first on MarTech.
