McKinsey says 73% of AI initiatives fail to deliver expected value. In engineering teams, the number feels higher.
Your team bought Copilot. Maybe Cursor. Possibly both. Six months later, the developers who were already fast got slightly faster. The developers who struggled still struggle. The complex tickets that took a week still take a week. Sprint velocity barely moved.
What went wrong?
The Productivity Paradox
AI coding tools accelerate the thing that was already fast: writing code. But writing code was never the bottleneck.
Here's how a senior developer actually spends their time on a complex ticket: most of it goes to acquiring context, tracing dependencies, waiting on review, and verifying nothing broke. Writing code is roughly 20-25% of the total.
Copilot and Cursor optimize that 20-25% slice. Even if they make code writing 50% faster, the total ticket time drops by maybe 10-12%. That's real but underwhelming.
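The arithmetic behind that estimate is worth making explicit:

```python
# Quick check of the claim above. Assumes coding is 20-25% of ticket time
# and "50% faster" means the coding slice takes half as long.
for coding_share in (0.20, 0.25):
    total_saved = coding_share * 0.5
    print(f"coding = {coding_share:.0%} of ticket -> total drops {total_saved:.1%}")
# coding = 20% of ticket -> total drops 10.0%
# coding = 25% of ticket -> total drops 12.5%
```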
What Actually Moves the Needle
1. Reduce Context Acquisition Time
The biggest productivity lever isn't writing code faster — it's understanding the problem faster.
An engineer picking up an unfamiliar ticket spends 30-90 minutes before writing a single line: grepping the codebase, reading old PRs, Slacking teammates, tracing call paths.
What works: Tools that map tickets to code automatically. Paste a ticket, get the affected files, feature boundaries, and relevant history. This cuts 30-90 minutes to under five minutes.
What doesn't work: Asking ChatGPT to explain your codebase. It doesn't know your code. It will hallucinate plausible-sounding but wrong answers.
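The difference is grounding. A minimal sketch of ticket-to-code mapping that stays inside your repo, using commit-message keyword matches as a crude ranking signal (real tools layer embeddings, call graphs, and PR history on top):

```python
import subprocess
from collections import Counter

def files_for_ticket(keywords: list[str], repo: str = ".") -> list[tuple[str, int]]:
    """Rank files by how often ticket keywords appear in commits that touched them."""
    hits: Counter[str] = Counter()
    for word in keywords:
        # --grep filters by commit message; --name-only lists the files touched.
        log = subprocess.run(
            ["git", "-C", repo, "log", "-i", f"--grep={word}",
             "--name-only", "--pretty=format:"],
            capture_output=True, text=True, check=True,
        ).stdout
        hits.update(f for f in log.splitlines() if f)
    return hits.most_common(10)

# e.g. files_for_ticket(["billing", "proration"])  # keywords taken from the ticket
```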
2. Eliminate Tribal Knowledge Bottlenecks
Every team has 2-3 people who are the de facto knowledge base. "Ask Sarah about auth." "Check with Mike about billing."
These people are bottlenecks — not because they're bad engineers, but because they're the only ones with context that isn't written down.
What works: Automated knowledge extraction from git history. Who changed this code, when, why? What regressions happened here before?
What doesn't work: Documentation initiatives. Nobody maintains docs. The wiki is always 6 months stale. Extract knowledge from artifacts developers already create.
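That extraction can start as a thin wrapper over `git log`. A minimal sketch, with the file path illustrative:

```python
import subprocess

def file_history(path: str, repo: str = ".", limit: int = 20) -> str:
    """Answer "who changed this, when, and why" straight from git history."""
    return subprocess.run(
        ["git", "-C", repo, "log", f"-{limit}", "--follow",
         "--pretty=format:%h %ad %an: %s", "--date=short", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout

# e.g. print(file_history("src/auth/session.py"))  # path is illustrative
```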
3. Front-Load Dependency Awareness
The most expensive bugs: an engineer changes File A without knowing it affects Feature B through a chain of dependencies. Feature B breaks. Another engineer spends a day debugging.
What works: Dependency graphs and blast radius analysis before coding.
What doesn't work: Relying on code review to catch dependency issues. By the time the PR is open, the damage is done.
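A toy version of that analysis for a flat Python package, assuming static imports only (real tools resolve packages, dynamic imports, and cross-service edges):

```python
import ast
from collections import defaultdict, deque
from pathlib import Path

def reverse_import_graph(root: str) -> dict[str, set[str]]:
    """Map each module to the modules that import it (edges point at dependents)."""
    rdeps: dict[str, set[str]] = defaultdict(set)
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(errors="ignore"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    rdeps[alias.name.split(".")[0]].add(path.stem)
            elif isinstance(node, ast.ImportFrom) and node.module:
                rdeps[node.module.split(".")[0]].add(path.stem)
    return rdeps

def blast_radius(changed: str, rdeps: dict[str, set[str]]) -> set[str]:
    """Everything that transitively depends on the changed module."""
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        for dep in rdeps.get(queue.popleft(), ()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# e.g. blast_radius("file_a", reverse_import_graph("src"))
```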
4. Use AI for Reasoning, Not Just Generation
The most underutilized AI capability is reasoning about code — not writing it.
Use AI to:
Analyze the blast radius of proposed changes (sketched below)
Identify test cases based on affected code paths
Generate build plans mapping requirements to implementation steps
Review code for logical errors
Don't use AI to:
Write boilerplate you'll need to read and maintain
Generate entire features from vague prompts
Replace architectural thinking
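For the first of those, the pattern is to hand the model facts it cannot hallucinate (the diff, the computed dependents) and ask it to reason about risk. A minimal sketch of the prompt assembly; the wiring to any particular chat API is left out:

```python
def blast_radius_review_prompt(diff: str, dependents: set[str]) -> str:
    """Build a reasoning prompt: give the model facts, ask for risk analysis."""
    return (
        "You are reviewing a proposed change.\n\n"
        "Modules that transitively depend on the changed code:\n"
        f"{', '.join(sorted(dependents)) or 'none found'}\n\n"
        f"Diff:\n{diff}\n\n"
        "List the dependents most likely to break, the behavior at risk in "
        "each, and the test cases that would catch a regression."
    )

# Pairs naturally with blast_radius() from the sketch above.
```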
5. Measure the Right Things
Wrong metrics: Lines generated, Copilot acceptance rate, AI chat sessions.
Right metrics: Time from ticket to first commit, regression rate, cycle time for complex tickets, developer confidence score.
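The first of those falls out of data you already have: the assignment timestamp from your tracker and the first commit's timestamp from git (`git log --pretty=%aI`). A minimal sketch with illustrative timestamps:

```python
from datetime import datetime

def ticket_to_first_commit_hours(ticket_assigned: str, first_commit: str) -> float:
    """Hours from ticket assignment to first commit, given ISO 8601 timestamps."""
    t0 = datetime.fromisoformat(ticket_assigned)
    t1 = datetime.fromisoformat(first_commit)
    return (t1 - t0).total_seconds() / 3600

print(ticket_to_first_commit_hours("2024-05-01T09:00:00", "2024-05-02T14:30:00"))  # 29.5
```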
The Stack That Works
Teams getting real productivity gains use a layered approach:
Understanding layer — codebase context, tribal knowledge, blast radius (pre-code intelligence)
Generation layer — write the code (Copilot, Cursor)
Verification layer — run tests, check types (CI/CD)
Each layer feeds the next. Skip the understanding layer and you're generating code without context. That's how you get the 73% failure rate.
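As a skeleton, the flow looks like the sketch below; every function here is a hypothetical stub standing in for a real tool at that layer.

```python
def understand(ticket: str) -> dict:
    # Layer 1: pre-code intelligence -- affected files, owners, blast radius.
    return {"files": ["billing.py"], "blast_radius": ["invoices", "webhooks"]}

def generate(ticket: str, context: dict) -> str:
    # Layer 2: code generation (Copilot/Cursor), grounded in that context.
    return f"diff for {ticket}, scoped to {context['files']}"

def verify(change: str) -> bool:
    # Layer 3: tests, type checks, CI/CD gates before merge.
    return bool(change)

def handle_ticket(ticket: str) -> bool:
    context = understand(ticket)
    return verify(generate(ticket, context))

print(handle_ticket("PROJ-123"))  # ticket ID is illustrative
```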
The teams avoiding that failure rate aren't the ones with the most AI tools. They're the ones using AI to solve the bottleneck that actually matters: understanding.
The 73% failure rate traces back to one root cause: AI tools optimizing the wrong bottleneck. The real constraint is the Understanding Tax — the 20-35% of engineering time lost to context acquisition.
Glue is the pre-code intelligence platform that addresses the bottleneck AI coding tools miss. It gives developers the understanding layer — codebase context, tribal knowledge, blast radius analysis — that makes every downstream tool more effective.