AI Wrote Your Code. Now Your Churn Is Up 41%.
GitClear analyzed 153M lines of code and found AI tools have driven churn up 41%, duplication up 4x, and refactoring to a 10-year low. The quality crisis is measurable and growing.
Published by GitIntel Research
TLDR
- Code churn — lines added then quickly removed or rewritten — is up 41% since AI coding tools became mainstream
- Code duplication has increased 4×: AI tools copy-paste patterns instead of abstracting them
- Refactoring has collapsed from 25% of changed lines in 2021 to under 10% today — the work that reduces complexity is disappearing
- 63% of developers have spent more time debugging AI-generated code than they would have spent writing it from scratch
- GitIntel scans your git history to show exactly which AI-generated commits are contributing to your churn rate
The Duplication Multiplier
The 4× increase in code duplication is the most structurally damaging finding in the dataset. Duplication is the root of maintenance debt: every duplicated block is a future bug that gets fixed in one place and not the other. It is also a sign that AI tools are generating code without understanding the abstractions that already exist in the codebase.
When a developer writes a function from scratch, they have enough context to notice: "wait, we already do this in utils/format.ts." When an AI completes a block, it generates the most statistically likely pattern from its training data. It does not search your codebase for an existing implementation. The result is proliferating copies of the same logic, each slightly different, each a future maintenance burden.
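GitClear's clone detection is far more sophisticated, but the core idea of finding duplicated logic can be sketched in a few lines: normalize each file into stripped, non-empty lines, hash fixed-size windows of those lines, and report any pair of files that shares a window. Everything here is illustrative, not GitIntel's implementation; the `find_clones` name and the 4-line window size are assumptions for the sketch:

```python
from collections import defaultdict

def find_clones(files: dict[str, str], window: int = 4) -> set[tuple[str, str]]:
    """Return pairs of files that share a run of `window` identical,
    whitespace-normalized, non-empty lines (a crude exact-clone check).

    Illustrative sketch only; real clone detectors also catch renamed
    variables, reordered statements, and near-miss copies.
    """
    seen = defaultdict(set)  # window hash -> set of files containing it
    for name, text in files.items():
        lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
        for i in range(len(lines) - window + 1):
            seen[hash(tuple(lines[i:i + window]))].add(name)
    pairs = set()
    for names in seen.values():
        if len(names) > 1:
            ordered = sorted(names)
            pairs.update(
                (a, b) for a in ordered for b in ordered if a < b
            )
    return pairs
```

A detector this naive only catches verbatim copies, which is exactly why AI-introduced duplication is insidious: each copy is usually slightly reworded, so the duplication rate you can measure is a floor, not a ceiling.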
Before AI tools (team of 8, 2-year-old codebase)
```
$ gitintel scan --metric duplication --repo .

Duplication rate:    4.2%
Clones detected:     187
Affected file pairs: 94
```
After 18 months of AI-assisted development
```
$ gitintel scan --metric duplication --repo .

Duplication rate:    17.8%  ↑ 324%
Clones detected:     812
Affected file pairs: 406
AI-origin clones:    71% (576 of 812)
```
The GitIntel scan above is representative of what we see when analyzing repositories that crossed the "AI adoption threshold" in 2023–2024. Not all duplication is AI-introduced — but the AI-origin share consistently exceeds 65% in the repos we analyze.
The Refactoring Collapse
Refactoring is the immune system of a codebase. It is the work that reduces complexity, eliminates duplication, and keeps the architecture coherent as requirements change. In 2021, GitClear found that roughly 25 cents of every dollar of engineering effort went to refactoring existing code. By 2025, that figure had dropped below 10 cents.
The mechanism is straightforward. AI tools are optimized for generation. They are trained on new code being added, not on old code being simplified. When developers use AI completions for everything, the generate-and-accept loop crowds out the deliberate, human-led work of cleaning up what already exists.
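GitClear classifies changed lines far more finely than this (added, updated, deleted, moved, copy/pasted), but you can approximate the trend in your own history by flagging deletion-heavy commits from `git log --numstat` output. A minimal sketch; the `refactor_share` name and the 40% deletion threshold are assumptions for illustration, not GitClear's methodology:

```python
def refactor_share(commits: list[tuple[int, int]], threshold: float = 0.4) -> float:
    """Fraction of commits that look like refactoring: deletions make up
    at least `threshold` of the changed lines, i.e. the commit reworks
    existing code rather than only piling on new code.

    `commits` holds (lines_added, lines_deleted) per commit, the two
    numbers you can parse from each `git log --numstat` entry.
    """
    def is_refactor(added: int, deleted: int) -> bool:
        total = added + deleted
        return total > 0 and deleted / total >= threshold

    if not commits:
        return 0.0
    return sum(is_refactor(a, d) for a, d in commits) / len(commits)
```

Run over a few years of history, a metric like this makes the collapse visible as a declining curve: the share of commits that delete as much as they add shrinks as AI-assisted generation takes over.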
The Compounding Problem
High churn + high duplication + low refactoring creates a compounding trap. Code gets written (AI-assisted), gets churned (wrong context), gets duplicated (no time to abstract), and never gets cleaned up (AI tools don't refactor). Each cycle deepens the debt.
In a normally maintained codebase, refactoring breaks this cycle. Without it, codebases that grew up on AI tools will require increasingly expensive rewrites over time, not because the code doesn't work, but because it becomes too tangled to change safely.
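Churn itself is easy to define operationally: a line counts as churned if it is deleted or rewritten within some window of being added. A minimal sketch of the calculation; the `churn_rate` name, the per-line event format, and the 14-day window are assumptions for illustration (GitClear's exact window and line-tracking are more involved):

```python
from datetime import date, timedelta

def churn_rate(line_events: dict, window_days: int = 14) -> float:
    """Share of added lines that were deleted again within `window_days`.

    `line_events` maps a line id to (added_on, deleted_on_or_None), the
    kind of record you can reconstruct by diffing consecutive commits
    and tracking each line's lifetime.
    """
    window = timedelta(days=window_days)
    if not line_events:
        return 0.0
    churned = sum(
        1
        for added_on, deleted_on in line_events.values()
        if deleted_on is not None and deleted_on - added_on <= window
    )
    return churned / len(line_events)
```

The window matters: a line rewritten six months later is normal evolution, while a line rewritten six days later usually means the first version was wrong. The 41% figure is about the latter kind.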
What the Developer Experience Data Says
The code quality metrics from GitClear align with what developers report in surveys. A striking 63% of developers say they have spent more time debugging AI-generated code than they would have spent writing the original code themselves. That is the churn dynamic in human terms: AI produces code fast, but it produces code that requires rework.
| Developer Experience Finding | Rate |
|---|---|
| Spent more time debugging AI code than writing from scratch | 63% |
| Report productivity gains from AI tools overall | 74% |
| Junior devs (0–3 yrs) deploying code they don't fully understand | 40% |
| Trust in AI-generated code (down from 77% in 2024) | 60% |
| Senior devs (10+ yrs) reporting highest productivity gains | 81% |
Sources: Stack Overflow Developer Survey 2026, GitHub Octoverse, SlashData State of the Developer Nation.
The 40% of junior developers who admit to deploying AI-generated code they don't fully understand is particularly alarming. These developers are building their mental models of software architecture on a foundation they cannot fully read. When that code needs to be refactored — eventually it always does — neither the AI nor the developer who accepted it will know why it was structured the way it was.
Getting Control of Your AI Code Quality
The first step is visibility. Most teams do not know which commits in their history came from AI tools, and they certainly cannot answer questions like: "which modules have the highest concentration of AI-generated code that has been churned within 14 days?" That is exactly the signal you need to prioritize refactoring effort.
GitIntel adds this layer of attribution to your git history. Once you know which files are AI-heavy, you can cross-reference with standard quality metrics (cyclomatic complexity, test coverage, churn rate) to find the files that are both AI-generated and degrading fastest. Those are the high-priority targets for deliberate refactoring work.
Find AI-heavy files with high churn in the last 90 days
```
$ gitintel report \
    --filter ai-confidence:high \
    --sort churn-rate \
    --since 90d

src/api/handlers/order.ts       AI: 89%   Churn: 68%   ⚠ HIGH PRIORITY
src/utils/validators/email.ts   AI: 95%   Churn: 71%   ⚠ HIGH PRIORITY
src/components/Cart.tsx         AI: 77%   Churn: 44%   → WATCH
src/lib/auth/session.ts         AI: 12%   Churn:  8%   ✓ STABLE
```
The combination of high AI confidence and high churn is your leading indicator of future technical debt. It tells you: this code was AI-generated, it is being changed frequently, and it has probably not been refactored into the right abstraction yet.
The broader principle is to make AI code a first-class citizen of your quality monitoring. The same way you track test coverage and lint violations in CI, you should be tracking AI code concentration in your highest-churn files. The tools to do this exist. The teams that adopt them now will spend less time in future rewrites.
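Once you export per-file AI concentration and churn (from GitIntel or any attribution tooling), wiring them into a CI gate is a few lines. A sketch of the idea; the `quality_gate` name, the input format, and the 70%/50% thresholds are assumptions chosen for illustration, and you would tune them to your own baseline:

```python
def quality_gate(files, max_ai: float = 0.7, max_churn: float = 0.5) -> list:
    """Return files that exceed BOTH the AI-concentration threshold and
    the churn threshold: candidates to warn on (or block) in CI, the
    same way a coverage gate works.

    `files` is a list of (path, ai_share, churn_rate) tuples, e.g.
    exported from whatever AI-attribution tooling you run.
    """
    return [
        path
        for path, ai_share, churn in files
        if ai_share > max_ai and churn > max_churn
    ]
```

Requiring both signals is deliberate: AI-heavy but stable files are fine, and churny hand-written files are a different problem. It is the intersection that predicts the expensive rewrites.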
Measure your AI code quality today
GitIntel is a free, open-source Rust CLI that scans your entire git history and maps AI-generated code by file, author, date, and tool. Cross-reference it with your churn data and you have the quality signal that no SAST tool gives you.