When AI Progress Stalls, It’s Rarely About the Model


Something interesting happens when organizations begin working seriously with AI tools. Early on, there’s momentum. Things move faster than expected. People are energized. Then, sometimes gradually and sometimes all at once, progress becomes uneven. Decisions get harder. Teams that were excited grow cautious or frustrated.

The instinct is to look at the tool. Maybe a different model. Maybe a better prompt. Maybe more training on the technology itself.

That’s almost never where the answer is.

The pattern underneath the stall

What I observe instead looks familiar. Teams discover they don’t have shared agreement on who decides what — which outputs get verified, who owns the architectural choices, where human judgment is non-negotiable. Roles that seemed obvious turn out not to be. The work accelerates fast enough to outrun the clarity that was already thin before AI entered the picture.

A parallel dynamic takes shape at the leadership level. Executives see performance metrics and assume the transition is working. What they’re measuring is output. What they’re not measuring is whether the right things are being built, whether the team understands what they’re shipping, whether the speed is creating value or just creating more to manage.

Security deserves the same attention. The vulnerabilities showing up in AI-assisted work aren’t appearing because the tools are uniquely dangerous. They’re appearing because speed creates pressure to skip the review steps that were already inconvenient before AI. The gap isn’t technical. It’s the same human tendency to prioritize momentum over diligence that organizations have always had to manage consciously.

Progress doesn’t stall because the tools are inadequate. It stalls because the human systems around the tools haven’t kept up.

This isn’t a technology problem. It’s the same problem that slows teams down in every other context: unclear agreements, roles that don’t reflect how the work actually flows, and organizations moving too fast to notice what’s eroding underneath.

The question worth pausing for

The teams I see navigate this well aren’t necessarily the ones with the best technical setup. They’re the ones who paused early enough to ask: what do we actually need from each other to make this work? They treated AI adoption the way a good team treats any significant change — as an invitation to understand one another again before assuming the tools will sort it out.

The model is rarely the bottleneck. The conditions around it almost always are.

If this reframes how you’re seeing the challenges you’re facing, a conversation can help turn that perspective into clearer choices.

(No agenda required — it's just a conversation.)

About Jeff Hayes

Jeff Hayes works with senior leaders navigating complexity, pressure, and change. His work focuses on helping leaders slow down, see patterns more clearly, and make sound decisions in uncertain conditions.