AI PRs Are Flooding Open Source. Maintainers Are Drowning.
AI-assisted PRs surface 1.7x as many review findings as human ones (10.83 vs. 6.45 per PR). PRs are up 20% YoY. cURL killed its bug bounty. LLVM added AI-disclosure rules. The review bottleneck is here.
Published by GitIntel Research
TLDR
- AI-assisted pull requests surface 1.7x as many issues as human PRs (10.83 findings vs. 6.45 per PR)
- PRs per author are up 20% YoY; incidents per PR are up 23.5% — review bandwidth hasn't grown
- cURL halted its bug bounty after AI-generated "reports" flooded the queue; LLVM added mandatory AI-disclosure rules
- 63% of developers have spent more time debugging AI-written code than writing it themselves would have taken
- Cursor's "Agent Trace" spec (Feb 2026) attempts attribution at file/line level — but adoption is voluntary, and tracking still falls to maintainers
- cURL — suspended bug bounty program in 2025 after AI-generated "security reports" with no reproducible steps flooded the queue; each still required human triage
- LLVM — added a mandatory policy requiring contributors to disclose AI involvement in patches, with human-in-the-loop sign-off before merge
- Linux kernel — Linus Torvalds publicly rejected AI-generated patches in 2024, calling them "complete and utter garbage" that wasted reviewer time
The Real Question: Velocity vs. Integrity
The cURL situation reframes the AI coding productivity debate. The question isn't whether AI makes individual developers faster (METR's controlled studies suggest it often doesn't, at least for experienced engineers). The question is whether AI increases total system throughput — including the review, QA, and incident-response load that code generation creates downstream.
Based on current data, the answer is: it depends entirely on whether your review infrastructure scaled with your generation velocity. Most teams' didn't.
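The compounding effect is easy to miss, so here is a back-of-envelope sketch using only the figures cited in this article (PRs per author +20% YoY, incidents per PR +23.5% YoY, and the 10.83 vs. 6.45 findings gap). It assumes, as the article does, that review bandwidth stayed flat:

```python
# Back-of-envelope review-load estimate from the article's own numbers.
# Assumption: review bandwidth is unchanged year over year.

prs_growth = 1.20        # PRs per author, +20% YoY
incidents_growth = 1.235 # incidents per PR, +23.5% YoY

# The two growth rates multiply: more PRs, each carrying more findings.
review_load = prs_growth * incidents_growth
print(f"Total review load multiplier: {review_load:.2f}x")  # ~1.48x

# Findings gap between AI-assisted and human PRs:
ai_findings, human_findings = 10.83, 6.45
print(f"AI vs. human findings per PR: {ai_findings / human_findings:.2f}x")  # ~1.68x
```

In other words, a team whose reviewer headcount is flat is absorbing roughly 48% more review work than a year ago, which is the bottleneck the cURL and LLVM policies are reacting to.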
The repos that will come out ahead are the ones that treat AI code generation and AI code review as a coupled system — not two separate decisions made by two separate teams six months apart. Visibility into your AI footprint is the precondition for that kind of coordination.
Know your repo's AI footprint
Run `gitintel scan` to see which files, authors, and time periods have the highest AI-commit density — before your next review cycle.
```sh
# Install
curl -fsSL https://gitintel.com/install.sh | sh

# Scan your repo
cd your-repo && gitintel scan

# Output by file
gitintel scan --format json
```
Open source (MIT) · Local-first · No data leaves your machine
Data from CodeRabbit State of Code Generation 2026, Veracode Spring 2026 GenAI Security Update, and SecurityBoulevard.com (March 2026).