Dario Amodei Said 90% of Code Would Be AI-Written. The Deadline Passed 6 Months Ago.
Amodei predicted 90% AI-written code by September 2025. GitHub reports 51% AI-assisted. GitIntel scans average 5.8%. The data behind the most debated prediction in software — and why the metric was unfalsifiable to begin with.
Published by GitIntel Research
TL;DR
- Amodei predicted "90% of code AI-written within six months" — March 10, 2025. The six-month window closed September 2025.
- GitHub reports 51% of commits in early 2026 are AI-assisted or AI-generated — not the same as AI-written.
- GitIntel scans of 13 major repos averaged 5.8% AI-attributed commits; highest was Deno at 41.2%.
- Only 30% of AI-generated code suggestions are accepted by developers (GitHub Copilot data) — and only 3% of developers "highly trust" AI output.
- The 90% prediction was technically unfalsifiable as stated — because "AI-written" was never defined.
The Prediction
On March 10, 2025, Dario Amodei told a conference audience that within the next six months, 90% of all code would be written by AI. The statement spread immediately — Hacker News, LinkedIn, tech Twitter. It was either the most confident forecast in modern software history or the most irresponsible one, depending on who you asked.
Six months from March 10, 2025 is September 10, 2025. That date is now six months in the past. In March 2026, Futurism ran the headline: "Exactly Six Months Ago, Amodei Said 90% of Code Would Be AI." Daring Fireball ran "Claim Chowder on Amodei's Prediction." IT Pro concluded the claim is "nowhere near reality."
So what does the actual data say?
What the Numbers Actually Show
| Source | Metric | Number |
|---|---|---|
| GitHub (Q1 2026) | AI-assisted or AI-generated commits | 51% |
| Stack Overflow 2025 | Developers using AI tools at least weekly | 65% |
| GitHub Copilot | AI suggestions accepted by developers | 30% |
| GitIntel scan (13 repos) | AI-attributed commits (Co-Authored-By) | 5.8% |
| GitIntel scan (Deno) | Highest AI commit % found in any repo | 41.2% |
| Stack Overflow 2025 | Developers who "highly trust" AI output | 3% |
| McKinsey (Feb 2026) | Reduction in routine coding time with AI | 46% |
The 51% GitHub figure is the one most likely to be cited as validation for Amodei's prediction. It's also the most misleading — and understanding why reveals the entire problem with the 90% claim.
"AI-Written" vs "AI-Assisted": A 39-Point Gap
The 51% GitHub number counts commits that are "AI-assisted or AI-generated." That definition includes:
- A developer who accepted two Copilot autocomplete suggestions while writing 400 lines by hand
- A commit where Claude Code wrote a test suite and the developer approved it unchanged
- A PR where 8 out of 300 changed lines came from an AI suggestion
All three count as "AI-assisted." None of them are what most people mean by "AI-written."
THE ACCEPTANCE RATE PROBLEM
GitHub Copilot users accept approximately 30% of AI suggestions. For 90% of code to be AI-written, developers would need to either accept nearly everything the AI generates or stop writing code entirely. Currently, 70% of AI suggestions are rejected or heavily modified.
The McKinsey study (4,500+ developers, 150 enterprises, February 2026) found that AI tools reduce routine coding time by 46%. That is genuinely significant. It is not the same as AI writing 90% of code — it means developers spend less time on the parts of coding that are most mechanical.
Architecture decisions, debugging, code review, security audits, dependency management, production incident response — AI tools assist with these but don't own them. The 46% efficiency gain applies to a specific slice of software work, not the whole stack.
What Our Scans Actually Find
We ran GitIntel across 13 major open source repositories — 6,500 commits total — looking for AI attribution markers in commit history. The results put a floor under the debate.
# Scan the most recent 500 commits
gitintel scan --format json --limit 500
# Sample output
{
"repo": "denoland/deno",
"commits_scanned": 500,
"ai_commits": 206,
"ai_percentage": 41.2,
"primary_agent": "claude-code"
}
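A rough floor of this kind can be approximated with plain git, no install required. The sketch below is an approximation, not GitIntel's actual parser: the agent-name list and the 500-commit window are assumptions, and it only catches commits whose messages carry an explicit Co-Authored-By trailer.

```shell
# Sketch: approximate the trailer-based floor with plain git.
# Counts commits (up to 500) whose message carries a Co-Authored-By
# trailer naming a common AI agent, then prints the percentage.
ai_floor() {
  total=$(git rev-list --count --max-count=500 HEAD 2>/dev/null || echo 0)
  ai=$(git log --max-count=500 -i -E \
        --grep='Co-Authored-By:.*(claude|copilot|cursor|aider)' \
        --format='%H' 2>/dev/null | wc -l)
  [ "$total" -gt 0 ] && awk -v a="$ai" -v t="$total" \
    'BEGIN { printf "%d/%d commits AI-attributed (%.1f%%)\n", a, t, 100 * a / t }'
}
# Usage: cd your-repo && ai_floor
```

Like GitIntel's own measurement, this is a floor: Cursor tab completions and pasted ChatGPT output leave no trailer and are invisible to it.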
Deno is the outlier at 41.2% — by far the highest we found. Across all 13 repos, the average was 5.8%. Even accounting for our known measurement gap (we only detect explicit Co-Authored-By trailers, not Cursor tab completions or ChatGPT copy-paste), the real number is unlikely to be 10x higher for most repos.
More striking: Tauri, a mature Rust project, had exactly 0 AI-attributed commits in 500 scanned. Next.js had 5.4%. Ollama — a product literally designed for running AI models — had 0.8%.
OBSERVATION
AI adoption follows the team, not the language or even the domain. Deno chose Claude Code as a core part of its development workflow. Most teams haven't made that choice at scale. The variance between 0% and 41% across repos of similar size and maturity suggests this is still a deliberate team decision, not an industry-wide baseline.
Why the 90% Number Was Always the Wrong Metric
Here's the part that the fact-checks tend to miss: even if AI did write 90% of code by lines, that would be the beginning of a different problem, not the end of the software quality challenge.
CodeRabbit's analysis of 470 production PRs found that AI-coauthored PRs generate 1.7x more issues than human-written PRs — 10.83 findings vs. 6.45 per PR. GitClear's analysis of 153 million lines found AI-driven code churn is up 41%, with duplication up 4x. Veracode's Spring 2026 report found syntax pass rates for AI code hit 95%, but security pass rates remain flat at 45–55% — unchanged since 2023.
The implication: if we scale AI code generation to 90% without solving the quality gap, we don't get 90% of the software benefits. We get 90% of the lines with the same 45–55% security pass rate — at 10x the volume.
THE TRUST CEILING
Only 3% of developers "highly trust" AI-generated code (Stack Overflow Developer Survey 2025). Only 33% trust it at all. For AI to write 90% of code, that trust number would need to either dramatically increase — or developers would need to stop checking. Neither has happened. The review step is not optional, and it costs time.
There is also a structural attribution problem. Most AI coding tools — Cursor, GitHub Copilot, ChatGPT — do not leave any trace in git history. Even at 51% AI-assisted, the majority of those contributions are invisible in the commit log. If 90% of code were truly AI-generated, most engineering teams would have no idea which 90% it was. Auditing, debugging, security review, and compliance reporting would all operate on a codebase with no provenance.
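One partial mitigation already exists: the Co-Authored-By trailer convention that Claude Code emits can be adopted by any team, for any tool, using git's built-in trailer support. A minimal sketch (the Claude identity string is the one Claude Code uses; identities for other tools would need to be agreed on):

```shell
# Sketch: make an AI contribution machine-readable by appending a
# Co-Authored-By trailer. git interpret-trailers places the trailer
# block in the standard position at the end of the message.
printf 'Add retry logic to the fetch wrapper\n' \
  | git interpret-trailers \
      --trailer 'Co-Authored-By: Claude <noreply@anthropic.com>'
```

Commits written this way become auditable after the fact: any trailer-aware scan (including `gitintel scan`) can count them from history alone.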
What We Can Say With Confidence
AI adoption is real and accelerating. 65% of developers use AI tools weekly. Copilot has 4.7 million subscribers. Cursor hit $2B ARR. These are not marginal tools.
"AI-assisted" is not "AI-written." GitHub's 51% number counts any commit where AI contributed at least something. The actual proportion of lines where AI made the dominant contribution is much lower.
Team adoption is highly variable. Deno is at 41%. Tauri is at 0%. The 90% claim implies a uniform industry transition that hasn't happened and may not be the right goal.
Quality constraints are real. For AI code percentage to scale toward 90% without a corresponding quality improvement, organizations would have to accept significantly higher defect rates. Most won't.
Attribution is the hidden problem. Before the industry can meaningfully answer "how much code is AI-generated," it needs tooling that can actually measure it. Most organizations can't.
The Unfalsifiability Problem
The Amodei prediction has a deeper methodological issue: it was never falsifiable as stated. There is no universally accepted definition of "AI-written code."
If a developer writes a prompt, Claude Code generates 200 lines, the developer edits 40% of them and writes 60 more lines — is that AI-written? Is it 55% AI-written? Is it AI-assisted?
GitIntel measures attribution by Co-Authored-By trailers — explicit, machine-readable signals left by tools like Claude Code. This is a conservative floor. The real percentage of AI contribution is almost certainly higher. But without standardized attribution, every claim about "what percentage is AI-generated" is measuring a different thing.
Futurism, Daring Fireball, and IT Pro all concluded the 90% claim hasn't materialized. They're right — but the more useful observation is that the industry still can't measure the metric precisely enough to know for certain. That's the problem worth solving.
What's your repo's AI percentage?
Before the industry can answer Amodei's question at scale, individual teams need to answer it for their own codebases. GitIntel gives you a floor measurement in under a minute.
# Install
curl -fsSL https://gitintel.com/install.sh | sh
# Scan your repo
cd your-repo && gitintel scan
Open source (MIT) · Local-first · No data leaves your machine
Sources: GitHub Q1 2026 data, Stack Overflow Developer Survey 2025, McKinsey Technology Report February 2026, GitIntel repo scan data (March 22, 2026), GitHub Copilot acceptance rate data. Amodei prediction confirmed by Futurism, Daring Fireball, IT Pro fact-checks (March 2026).