Research · March 30, 2026 · 7 min read

AI Makes Developers 19% Slower. They Think They're 20% Faster.

METR's RCT: experienced developers are 19% slower with AI while believing they're 20% faster — a 39-point gap. BCG's March 2026 HBR study: 14% suffer 'AI brain fry,' error rates 39% higher. The productivity trap has peer-reviewed data.

Published by GitIntel Research

TLDR

The METR study is not a survey. It's a randomized controlled trial with real engineers, real tasks, and real time measurements. The tasks included codebase modifications, debugging, and feature additions — exactly the workflows AI tools are marketed for. Published at metr.org.

Then BCG Named the Mechanism: "AI Brain Fry"

On March 5, 2026, BCG and UC Riverside published research in the Harvard Business Review that gave a name to what was happening: AI brain fry — acute cognitive overload caused by the mental cost of managing AI output.

The study surveyed 1,488 full-time US workers across industries. Software development was in the top five most-affected roles. Marketing topped the list at a 26% brain-fry rate. The mechanism is not what most people assume: it's not the AI doing too little. It's the human doing too much watching.

BCG / UC Riverside — Key Findings (HBR, March 5 2026)

Metric                                              Value
─────────────────────────────────────────────────────────
Workers experiencing "AI brain fry"                  14%
Extra mental energy expended (high AI oversight)    +14%
Additional mental fatigue                           +12%
Information overload increase                       +19%
Error rate increase (brain-fry sufferers)           +39%
Decision fatigue increase                           +33%
Intent to quit increase                            ~+10%

Source: HBR / BCG, March 5, 2026

The core counterintuitive finding: workers who exercised high oversight over AI — reading every output, verifying every decision, maintaining constant awareness of what the AI was doing — burned out faster than workers who either used AI minimally or not at all.

The developers who had the highest trust in AI and the deepest integration into their workflows were the most cognitively depleted by day's end. Their error rates were 39% higher, and their intent to quit was roughly 10% higher.

The Invisible Workload Expansion

A parallel 8-month embedded study by UC Berkeley — 40+ interviews inside a 200-person tech company — found a different but related failure mode. Workers voluntarily expanded their workload because AI "made more feel doable."

Nobody told them to take on more. No manager pushed harder deadlines. But when you can generate a first draft in 10 minutes instead of 3 hours, you start saying yes to things you wouldn't have before. Work bleeds into evenings. Scope expands. The AI created capacity — and humans immediately filled that capacity with more work rather than reclaiming rest.

"The most enthusiastic AI adopters weren't burning out because AI failed them. They were burning out because AI worked well enough that they never stopped."

— UC Berkeley 8-month embedded study, 2025

This is structurally different from normal overwork. It doesn't look like overwork from the outside. Commit velocity is up. PRs are merging. Tickets are closing. The dashboard looks healthy. The human behind it is quietly depleting.

The Tool Count Threshold

One of the sharpest findings in the BCG study: productivity scales positively with AI tool adoption — up to a point. Teams using 1–3 AI tools showed measurable output gains. Teams using 4 or more AI tools showed declining returns, higher error rates, and greater reported fatigue.

AI Tools in Use Productivity Error Rate Fatigue Score
──────────────────────────────────────────────────────────
0 (control) baseline baseline baseline
1–2 +12% −8% −3%
3 +9% +2% +5%
4 +1% +18% +11%
5+ −7% +31% +22%

Illustrative reconstruction from BCG study directional data. Exact per-tier numbers not published; ranges drawn from reported aggregates.

The interpretation is straightforward: each AI tool requires cognitive bandwidth to supervise. One tool is helpful. Two tools is manageable. Four tools means your brain is constantly context switching between AI outputs, evaluating, correcting, re-prompting. You're not a developer anymore — you're an AI wrangler.

For engineering teams stacking Cursor, Copilot, Claude Code, and a code review bot all at once, this is a direct warning. The integration costs are real and they accumulate on the human, not the machine.

Why the Perception Gap Is the Dangerous Part

The METR finding isn't just that AI made developers slower. It's that developers didn't know they were slower. They felt faster. They reported feeling faster. Their internal experience of the work — the sense of flow, of generation, of output — was positive.

This creates a feedback loop that's hard to break. If you feel productive, you don't investigate whether you are productive. You don't question the workflow. You double down. You add another tool.

The BCG data shows what happens next: the people who doubled down the hardest had the worst outcomes. Highest error rates. Highest fatigue. Highest quit intent. The perception gap isn't just a measurement curiosity — it's the mechanism by which developers walk straight into the burnout without realizing it.

39pt · Perception vs. reality gap (METR)

39% · Higher error rate (BCG brain fry)

~10% · Higher quit intent (BCG)

What Engineering Leaders Can Actually Do

The BCG study did not end with doom. It found two organizational factors that significantly reduced AI-related fatigue.

The harder operational change is measurement. Right now, most engineering teams track commit velocity, PR throughput, and ticket close rates. None of those metrics capture whether the developer behind them is degrading. You'll see the numbers hold steady or improve for months before the resignation letter lands.
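The velocity metrics described above can be computed directly from git history, which is part of why they're so seductive. A minimal sketch in Python, assuming timestamps are collected with `git log --format=%ct` (the function name and the ISO-week bucketing are illustrative choices, not any particular tool's implementation):

```python
from collections import Counter
from datetime import datetime, timezone

def weekly_commit_velocity(commit_timestamps):
    """Bucket Unix commit timestamps into ISO weeks.

    Returns {(year, week): count}. This is exactly the surface-level
    velocity metric the text warns about: it can hold steady or rise
    while the developer producing it is degrading.
    """
    counts = Counter()
    for ts in commit_timestamps:
        dt = datetime.fromtimestamp(ts, tz=timezone.utc)
        year, week, _ = dt.isocalendar()
        counts[(year, week)] += 1
    return dict(counts)
```

Fed the output of `git log --format=%ct`, this produces a per-week commit count that looks perfectly healthy on a dashboard, which is precisely the blind spot: nothing in it distinguishes sustainable output from a developer quietly depleting.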

GitIntel tracks AI commit attribution and tooling patterns across your codebase — not to penalize AI use, but to surface where over-reliance patterns are forming before they manifest as personnel problems.
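One way commit-level AI attribution can work is scanning commit messages for the trailers and markers some tools leave behind. This is a hedged sketch of the general technique, not GitIntel's actual implementation; the marker patterns are illustrative examples and real tools vary:

```python
import re

# Illustrative marker patterns only; actual AI tools differ in what
# traces they leave, and a production scanner would need a richer set.
AI_MARKERS = [
    re.compile(r"co-authored-by:.*(copilot|claude|cursor)", re.I),
    re.compile(r"generated with", re.I),
    re.compile(r"\(aider\)", re.I),
]

def classify_commit(message):
    """Return the index of the first matching marker, or None if the
    message carries no recognizable AI-assistance trace."""
    for i, pattern in enumerate(AI_MARKERS):
        if pattern.search(message):
            return i
    return None

def ai_share(messages):
    """Fraction of commit messages showing any AI marker."""
    if not messages:
        return 0.0
    hits = sum(classify_commit(m) is not None for m in messages)
    return hits / len(messages)
```

Run over a repo's commit messages, a function like `ai_share` gives a concentration signal per author or per subsystem: the point is not to penalize AI use but to see where reliance is clustering before it shows up as a personnel problem.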

Study Methodology, Briefly

METR RCT (published July 2025, tasks conducted Feb–Jun 2025): 16 experienced open-source developers, 246 real tasks drawn from their own backlogs, randomized crossover design (each developer served as their own control). Time-on-task was the primary metric. Developers used their preferred AI tools in the AI condition. Full study at metr.org.

BCG + UC Riverside (published HBR March 5, 2026): 1,488 full-time US workers across 10 industries, online survey methodology, validated scales for cognitive load and occupational stress. Software development was identified as a top-5 affected role. Full study at HBR.

UC Berkeley 8-month study: embedded qualitative research at a 200-person tech company, 40+ interviews, focused on behavioral adaptation to AI tools. Published via TechCrunch (Feb 2026).

Track AI usage before it becomes a people problem

GitIntel scans your git history for AI-assisted commits, identifies which tools are leaving traces, and shows where AI dependency is concentrating. Open source. Local-first. No data leaves your machine.

# Install
curl -fsSL https://gitintel.com/install.sh | sh

# Scan your repo
cd your-repo
gitintel scan --format json

View on GitHub

Open source (MIT) · Local-first · No data leaves your machine

Data from METR (July 2025) and BCG/UC Riverside (March 5, 2026). GitIntel Research summary published March 30, 2026.

