
Marketing measurement has changed a lot over the last few years. Attribution alone isn’t enough, and most brands know it. Incrementality testing and media mix modeling (MMM) are no longer optional.
Yet many teams are still stuck. Not because they don’t understand measurement, but because they don’t know how to act when the data isn’t perfect.
Measurement should create action, not delay it
Measurement exists to inform decisions, not to absolve teams of responsibility for making them. That sounds obvious, but it’s not how many organizations behave in practice. When attribution says one thing, an incrementality test says another and a model points slightly elsewhere, the instinct is to pause, ask for more analysis or wait for cleaner data.
Disagreement between measurement approaches is typical. Treating it as a reason to do nothing is the mistake. At some point, teams still have to decide what bet they’re willing to make with imperfect information. Pretending that any measurement playbook will remove uncertainty entirely is a fallacy.
Incrementality tests are the most powerful tool in a marketer’s toolkit, but they aren’t without challenges. Too often, those challenges prevent teams from getting started. And even when tests are run, the same concerns can keep teams from acting on the results. Common objections include “opportunity cost,” “the confidence intervals are too wide” and “the results just represent a moment in time.”
All of those are reasonable concerns. But the most significant risk isn’t that the tests aren’t perfect. It’s that nothing changes in the marketing program as a result, even if the right next action is simply rerunning the test in a way that’s more likely to produce a cleaner read.
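To make the confidence-interval objection concrete, here is a minimal sketch with hypothetical numbers, using standard two-proportion math for a holdout test. The point isn’t the exact figures; it’s that even an imperfect interval tells you whether to act or to redesign the test:

```python
import math

# Hypothetical holdout test results (illustrative numbers only)
test_users, test_conv = 200_000, 4_400   # exposed group
ctrl_users, ctrl_conv = 200_000, 4_000   # holdout group

p_t = test_conv / test_users             # 2.2% conversion
p_c = ctrl_conv / ctrl_users             # 2.0% conversion

# Absolute lift and a standard 95% interval for a difference in proportions
lift = p_t - p_c
se = math.sqrt(p_t * (1 - p_t) / test_users + p_c * (1 - p_c) / ctrl_users)
low, high = lift - 1.96 * se, lift + 1.96 * se

print(f"Relative lift: {lift / p_c:.1%}")                   # ~10%
print(f"95% CI for absolute lift: [{low:.4%}, {high:.4%}]")
# A lower bound above zero supports acting now; a bound straddling zero
# suggests the next action is a redesigned test, not another meeting.
```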
Why MMM feels harder to trust than attribution
Media mix modeling creates a different kind of discomfort, especially for teams coming from attribution-heavy environments. Attribution feels precise. MMM openly admits it is an exercise in correlation.
When a model suggests shifting spend and forecasts a positive revenue impact, the immediate reaction is to ask how that exact number will be validated after the fact. The reality is that it won’t be, and that’s OK.
Too many things change at the same time to expect clean validation. Pricing, promotions, product mix, seasonality and broader business decisions all move alongside marketing. Expecting a perfect before-and-after comparison misses the point.
This is where incrementality testing plays a critical role. It provides intra-time-period validation, helping account for confounding factors and complementing MMM. Together, they are far more helpful than either approach on its own.
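One simple way to picture that complementarity (a sketch with hypothetical figures, not a prescribed workflow) is to reconcile what the MMM attributes to a channel over a test window against what the lift test measured in the same window:

```python
# Reconciliation sketch: compare the revenue the MMM attributes to a channel
# over a test window with what the incrementality test measured in the same
# window. All figures are hypothetical.

mmm_incremental_revenue = 1_200_000    # model-attributed revenue, test window
test_incremental_revenue = 900_000     # lift-test estimate, same window

calibration = test_incremental_revenue / mmm_incremental_revenue
print(f"Calibration factor: {calibration:.2f}")   # 0.75

# A factor well below 1 suggests scaling that channel's MMM response curve
# down (or tightening its prior, in a Bayesian MMM) before reallocating spend.
```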
Stop chasing big wins. The real validation comes from the P&L.
Marketing teams often optimize for big, obvious wins because they’re easy to point to. They make for great slides and compelling case studies, and they create the comforting sense that progress is being made. We’ve all seen them: “XXX% lift in ROAS” or “Revenue up YYY% after one simple change.”
The details are always conveniently thin. The timeframe is suspiciously short. Somehow, none of these breakthroughs ever seem to translate into broader business results. That tendency is understandable. Marketing teams are often defensive, and in organizations that still view marketing as a cost center, there’s real pressure to present work as confidently and cleanly as possible.
But the goal isn’t to have a perfect record of winning tests. It’s not about producing perfect forecasts or generating impressive case studies. In reality, durable growth rarely comes from one big breakthrough. It comes from stacking minor improvements over time, such as slightly better allocation decisions, a more balanced channel mix and a sharper understanding of diminishing returns.
None of those are headline-worthy on their own. But they compound quietly, and validation happens cumulatively. That’s what actually shows up in year-over-year growth and healthier blended metrics.
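As a back-of-the-envelope illustration with hypothetical percentages, a handful of small, unglamorous gains compound into exactly the kind of year-over-year movement described above:

```python
# Hypothetical: six small, independent efficiency gains over a year
gains = [0.03, 0.02, 0.04, 0.01, 0.03, 0.02]

compounded = 1.0
for g in gains:
    compounded *= 1 + g

print(f"Cumulative improvement: {compounded - 1:.1%}")   # ~15.9%, no big win
```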
A measurement system is doing its job if teams are confidently deploying capital and making optimizations that lead to sustained business growth over time.
Measurement should create confidence
I’ll never forget working with a CFO who helped me reframe the balance between rigor and urgency. After listening to me explain why I wanted to structure an experiment a certain way to isolate the impact of a significant change in how campaigns were run, he said something like this:
“I don’t care about isolating the exact impact to the exact changes made right now. I need to grow the business. And right now, when we scale spend, we don’t scale new customers, so something has to change.
“I’ll be able to assess whether this is working by looking at the business data, not your marketing results. I don’t need to perfectly isolate whether the impact came from one specific marketing change or from several things happening at once.
“At the end of the day, the business is a system. If we’re confident that multiple changes to the system are going to lead to profitable growth, we put our best foot forward and assess.”
That conversation reshaped how I think about the balance between being informed and being precise when placing bets. If your measurement approach makes teams afraid to act, it’s failing. The goal isn’t certainty. The goal is confidence to make better bets more often in pursuit of profitable growth.