Research · March 29, 2026 · 7 min read

AI PRs Are Flooding Open Source. Maintainers Are Drowning.

AI PRs generate 1.7x more issues than human ones (10.83 vs 6.45 findings). PRs up 20% YoY. cURL killed its bug bounty. LLVM added AI-disclosure rules. The review bottleneck is here.

Published by GitIntel Research


The Real Question: Velocity vs. Integrity

The cURL situation reframes the AI coding productivity debate. The question isn't whether AI makes individual developers faster (METR's controlled studies suggest it often doesn't, at least for experienced engineers). The question is whether AI increases total system throughput — including the review, QA, and incident-response load that code generation creates downstream.

Based on current data, the answer is: it depends entirely on whether your review infrastructure scaled with your generation velocity. For most teams, it didn't.

The repos that will come out ahead are the ones that treat AI code generation and AI code review as a coupled system — not two separate decisions made by two separate teams six months apart. Visibility into your AI footprint is the precondition for that kind of coordination.

Know your repo's AI footprint

Run gitintel scan to see which files, authors, and time periods have the highest AI-commit density — before your next review cycle.

# Install
curl -fsSL https://gitintel.com/install.sh | sh

# Scan your repo
cd your-repo && gitintel scan

# JSON output for scripting
gitintel scan --format json
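Once you have JSON output, you can triage which files need review attention first. A minimal sketch using `jq`, assuming a hypothetical output shape (a JSON array with `file` and `ai_commit_density` fields; the real schema from `gitintel scan --format json` may differ):

```shell
# Hypothetical sample of the JSON output (field names assumed,
# not taken from gitintel's documentation)
cat <<'EOF' > scan.json
[
  {"file": "src/parser.py", "ai_commit_density": 0.82},
  {"file": "src/cli.py",    "ai_commit_density": 0.31}
]
EOF

# Sort files by AI-commit density, highest first, to prioritize review
jq -r 'sort_by(-.ai_commit_density)[] | "\(.ai_commit_density)\t\(.file)"' scan.json
```

The same filter can be piped straight from the scan (`gitintel scan --format json | jq …`) and fed into a review checklist or CI gate.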

View on GitHub

Open source (MIT) · Local-first · No data leaves your machine

Data from CodeRabbit State of Code Generation 2026, Veracode Spring 2026 GenAI Security Update, and SecurityBoulevard.com (March 2026).


Related reading on GitIntel: