As AI does more of the work, are we building the right leaders?

AI is accelerating how marketing teams analyze performance, but it’s also changing how future leaders are developed. As more of the work moves upstream into automated systems, fewer analysts are exposed to the messy, foundational problems that build judgment. That tradeoff is easy to miss until it shows up in real decisions. This is one of those moments.

It’s April 2026, and Q1 results are in. You are in a conference room with your team of senior leads and analysts who will soon run meetings like this themselves. The year-over-year numbers are on the screen, and the team is ready to present its findings and recommendations. 

On the surface, it looks solid. But you hesitate, knowing you are still dealing with many of the same measurement realities you have wrestled with for years. The data is fragmented, taxonomies are inconsistent, metrics don’t always align across platforms and definitions vary. Some reporting is modeled, some estimated, some simply not comparable. AI didn’t fix that and, if anything, may have obscured those issues or amplified the bias already embedded in the data.

You remember how last year’s numbers came together. You increased investment in podcasting, commerce media and the creator economy, even though none fit neatly within your framework. You added attention metrics while standards were still evolving, new streaming partners launched midyear and tracking errors surfaced late. Some campaigns were mislabeled and later corrected, and an identity issue in Q3 carried into year-end reporting. Throughout the year, your team revisited naming conventions and classifications to clean up inconsistencies across systems and built your 2026 plans on that work.

Before the meeting shifts to insights and next steps, you pause and ask, “Before we get too comfortable comparing Q1 to 2025, did any of the same issues show up again?”

The senior leads glance at each other, knowing exactly what you mean. They walk through where estimates were used, where gaps existed and what assumptions were made. Across the table, junior analysts lean in and listen. They are smart and fluent in the tools, but this conversation is different. It’s not about what the system surfaced. It’s about what the system missed, and that makes it a leadership moment, where experience, judgment and context matter more than what appears on the screen.

AI can’t replace the experience that builds judgment

The Q1 results came together faster than they did a year ago because AI handled most of the modeling and surfaced suggested actions, allowing the team to move straight into analysis. That efficiency is real, and AI is being embedded across planning, forecasting, anomaly detection and reporting, with most organizations still calibrating where automation genuinely adds value.

The issue isn’t AI doing more of the work. It’s what happens to those who never learned to do it without AI. That Q1 discussion required senior leaders to remember what broke in 2025, understand the ripple effects of identity disruptions and recognize that a number can be technically correct and still incomplete. 

That knowledge didn’t come from reviewing a dashboard. It came from stitching datasets together, fixing mislabels, restructuring taxonomies and rebuilding assumptions when frameworks didn’t hold. Junior analysts are increasingly learning in environments where much of that rebuilding happens upstream, leaving them with less hands-on problem-solving and, in some cases, less awareness that underlying measurement issues even exist.

If an analyst is trained primarily to review outputs, they may become excellent at reading what’s in front of them without ever understanding how the report was built, where the assumptions sit, how fragile those assumptions can be, or how data gaps should be addressed. 

Senior leaders question last year’s numbers because they have lived through tracking failures, identity disruptions and structural reclassifications. They have defended investments that were directionally right but difficult to prove and adjusted when external shifts wiped out benchmarks. They learned that clean reporting doesn’t always mean accurate reporting. And if AI reduces the need for emerging practitioners to do the same hard work, we have to ask honestly what experiences will shape their judgment as they advance.

Developing leaders on purpose

Are we exposing developing analysts to what sits beneath the dashboard, giving them the context to spot anomalies, identify embedded bias and recognize when something has been mislabeled or tracked incorrectly? 

Can they connect the dots across systems and understand how those issues shape the bigger picture? 

Or are we allowing the efficiency gains of AI to quietly narrow the experiences that build real leadership capability?

These aren’t rhetorical questions. They are decisions that team leads, hiring managers and organizational designers need to make deliberately because the default path, absent intention, produces analysts who are underprepared when something breaks.

AI will continue to handle more of the operational workload, and there’s no going back. The real question is whether the next generation understands what sits beneath the output, knows when results need to be re-examined and can recognize when something feels off rather than assuming the system is right.

If we’re deliberate, AI can elevate the industry by freeing leaders to focus on strategy and growth while sharpening their ability to diagnose and solve complex problems. If we’re not, we risk developing a generation of leaders who are fluent in systems but underprepared when measurement breaks, classifications drift, or the data simply doesn’t make sense.

What this looks like in practice

Making that choice deliberately starts with a few concrete actions:

  • Assign junior analysts to data remediation work, not just reporting. When tracking breaks or classifications need to be rebuilt, that’s a development opportunity, not just a cleanup task. The analysts who do that work will carry the experience forward in ways that dashboard reviewers simply won’t.
  • Don’t just present conclusions in reviews; narrate how you got there. In front of your team, walk through what felt off, what you dug into and what assumptions you decided to challenge. That running commentary is exactly what developing practitioners need to hear.
  • Establish “beneath the dashboard” checkpoints in your workflow. Before results are finalized, require a structured review of where estimates were used, where gaps exist and what assumptions were made. This keeps critical thinking in the process rather than assuming AI handled it upstream.
  • Rethink how you assess developing talent. If your performance frameworks only measure how well someone works within the system, you will never know whether they can recognize when the system is wrong, and you won’t build that capability in them either.

Remember, there will always be new tools and capabilities, but the leaders who thrive will not just know how to use AI. They will know how to question it. That capability is developed over time, through experience, not handed down through a dashboard. Invest in the next generation the same way you invested in yourself, by giving them the experiences that actually build judgment.

The post As AI does more of the work, are we building the right leaders? appeared first on MarTech.
