
I went to a developer conference and, by accident, learned something profound about human nature. It started innocently enough — the “All Things AI Conference” in Durham, NC, had a title too good to pass up.
What I didn’t expect was to be the only marketer among 2,500 developers, nodding along as whurley (yes, that’s his real name), CEO of quantum computing company Strangeworks, dove deep into quantum computing and AI. I was in over my head. But sometimes that’s where the best insights hide.
It wasn’t until Luis Lastras, director of language and multimodal technology at IBM, began talking about “small models” that something clicked. He said something that stopped me short: “Hallucinations are intentional.”
Say what?
The answer is…
According to Luis, hallucinations are a way for developers to learn how models work. Because the models operate autonomously, they don’t filter what they output, at least not yet. Think of it as letting your grandfather, the one who has lost his filter, loose at a dinner party.
It’s one of the things IBM learned working with small models, which validate their outputs at certain points during generation to reduce hallucinations.
Anyone who’s worked with AI has experienced hallucinations, from made-up sources to statistics that are just plain wrong. Lastras described them as little extra pieces of information the AI thinks are helpful but that weren’t asked for in the prompt.
He showed a demo of a prompt asking how many moons Mars has. The response came back with the answer, two, and their names, plus an added extra that was not requested: the distance from Earth. That distance may have been right, but validating it would have required another step, so it may not have been.
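To make that pattern concrete, here is a minimal Python sketch of the “validate as you generate” idea described above. The functions generate_step and verify_step are hypothetical stand-ins, not IBM’s API or any real model call; the sketch simply drops a draft chunk when the check finds an unrequested extra, like a distance figure nobody asked for.

```python
# A minimal sketch of "validate as you generate," using hypothetical
# stand-in functions instead of a real model API. Not IBM's implementation.

def generate_step(prompt: str, draft: str) -> str:
    """Stand-in for a model call that proposes the next chunk of an answer."""
    # A real system would call a language model here; these canned replies
    # mimic the Mars demo, including an unrequested extra.
    if not draft:
        return "Mars has two moons, Phobos and Deimos. "
    return "Mars is about 225 million km from Earth."  # extra nobody asked for

def verify_step(prompt: str, chunk: str) -> bool:
    """Stand-in check: does the chunk only answer what the prompt asked?"""
    # A real system might ask a second (small) model whether every claim in
    # the chunk was requested and is supported. Here we just flag the
    # telltale unrequested detail from the demo.
    return "km from earth" not in chunk.lower()

def answer(prompt: str, max_steps: int = 3) -> str:
    draft = ""
    for _ in range(max_steps):
        chunk = generate_step(prompt, draft)
        if verify_step(prompt, chunk):  # keep only chunks that pass the check
            draft += chunk
        # otherwise drop the chunk instead of letting the "little extra" through
    return draft.strip()

print(answer("How many moons does Mars have?"))
# -> "Mars has two moons, Phobos and Deimos."
```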
How this evolved
However, humans are inclined to think the AI is always right.
In a study Elon University conducted last year with 500 US adults who use AI, almost 70% believed that AI models are at least as smart as they are, and 26% believed they are “a lot smarter.”
What is more concerning is that we believe AI is thinking like a human. A Wall Street Journal article, “Even Smart People Believe AI is Really Thinking,” said, “Our cognitive biases developed to help us survive in complex social environments… [We have] evolved to view linguistic fluency as a proxy for intelligence, engagement, and helpfulness as indicators of trustworthiness.”
The same tendency that led us to trust our linguistically adept fellow humans for survival now leads us to trust systems that appear to listen, understand, and want to help us.
So, the more AI tools and bots act like humans, the more likely we are to trust them. Which brings us back to the hallucination. The more AI tools act like they’re being helpful, the more likely we are to miss that “little extra” piece of information that wasn’t requested.
Bottom line
The convergence of intentional hallucinations and our deeply wired human instinct to trust fluent, helpful communicators creates a perfect storm of misplaced confidence.
As AI tools grow more sophisticated and human-like, our evolutionary instincts will only make it harder to maintain the critical distance needed to catch the errors, embellishments, and unrequested additions that slip through.
The good news is that awareness is the first step. Whether it’s IBM’s small models validating outputs in real time or simply slowing down to verify what AI hands us, the antidote to a cognitive bias millions of years in the making is something refreshingly simple — a healthy dose of human skepticism.