Your AI-Generated Code Can't Be Copyrighted. But It Can Infringe Someone Else's.
Supreme Court, March 2026: pure AI code has no copyright protection under US law. But it can still infringe yours. 51% of GitHub commits are AI-assisted, most with zero documentation of human authorship. EU AI Act enforcement hits August 2026.
Published by GitIntel Research
TLDR
- The US Supreme Court (March 2, 2026) left standing the rule that pure AI-generated code has no copyright protection under US law.
- The same code can still infringe existing copyrights if trained on protected material — "all the liability, none of the protection."
- 51% of GitHub commits in early 2026 are AI-assisted; most lack any documentation of human authorship.
- The EU AI Act enforcement deadline is August 2026: fines up to 3% of global revenue for non-compliant AI tool usage.
- The only defense: documented human creative control — logs, diffs, iterative edits — and tools that track AI attribution in git.
The Scale Problem: 51% of Commits, Zero Documentation
Here's what makes this a crisis rather than a theoretical edge case: the volume of AI-generated code has crossed a threshold where the legal uncertainty is no longer manageable by ignoring it.
| Metric | Figure |
| --- | --- |
| GitHub commits that are AI-assisted (early 2026) | 51% |
| AI-assisted commits with an explicit attribution trailer | ~5–8% |
That last row is the critical one. Our own GitIntel scans across major open-source repos show that only around 5–8% of AI-assisted commits include an explicit `Co-Authored-By` trailer. Tools like Copilot and Cursor don't add attribution markers by default. Tools like ChatGPT leave no trace at all.
Which means the vast majority of AI-generated code being merged into production codebases today is undocumented — no record of which tool generated it, which model version was used, what the human developer's creative contribution was, or what training data the AI was using when it produced that output.
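You can get a rough read on your own repo's attribution coverage with plain git, no extra tooling. A minimal sketch (commits carrying multiple `Co-Authored-By` trailers are counted once per trailer line, so treat the result as approximate):

```shell
# Rough estimate: what fraction of recent commits carry a
# Co-Authored-By trailer? Run inside any git repository.
attribution_rate() {
  # Total commits in the sample window (up to the last 1000).
  total=$(git rev-list --count --max-count=1000 HEAD)
  # Non-empty lines = Co-Authored-By trailer values found.
  attributed=$(git log -1000 --format='%(trailers:key=Co-Authored-By,valueonly)' | grep -c .)
  echo "$attributed of $total recent commits carry a Co-Authored-By trailer"
}
```

If that ratio looks anything like the 5–8% above, most of your AI-assisted history is already undocumented.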
The EU AI Act Enforcement Clock Is Running
While US courts have been slow to create new law, European regulators have set a hard deadline: August 2026. That's when full enforcement of the EU AI Act's requirements for general-purpose AI providers kicks in, including:
- Training data transparency: Providers must publish summaries of what data their models were trained on and must respect copyright reservations made under Article 4(3) of the DSM Directive.
- Downstream obligations: If a coding tool was trained on GPL- or LGPL-licensed code without appropriate compliance measures, the liability chain extends to the enterprise deploying that tool.
- Fines up to 3% of global revenue for non-compliance — not 3% of EU revenue, 3% of global revenue.
For a $10B revenue company, that's a $300M exposure. For context: the average enterprise software team now saves 3.6 hours per developer per week using AI tools (McKinsey, February 2026). At $150/hour fully loaded, that's roughly $28K per developer per year in productivity value. A 300-engineer team would need over 35 years of those gains to cover a single maximum EU fine.
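The back-of-envelope arithmetic can be checked directly. The hourly rate and team size are this article's illustrative assumptions, not market data:

```shell
# Productivity gains vs. a maximum EU AI Act fine, using the
# figures quoted above (assumptions, not measurements).
hours_per_week=3.6; rate=150; weeks=52; team=300; fine=300000000
awk -v h="$hours_per_week" -v r="$rate" -v w="$weeks" -v t="$team" -v f="$fine" 'BEGIN {
  per_dev  = h * r * w      # ~$28K per developer per year
  per_team = per_dev * t    # team-wide annual productivity value
  printf "per-dev/yr: $%.0f, team/yr: $%.0f, years to cover fine: %.1f\n",
         per_dev, per_team, f / per_team
}'
```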
What Actually Protects You
Legal counsel will tell you to stop using AI tools until this is resolved. That advice will be ignored — and rightly so, because the productivity value is real. The practical path forward is documentation, not abstinence.
The human authorship evidence chain
To defend copyright in AI-assisted code, you need evidence of substantial human creative control. That means:
What courts look for in AI-assisted code authorship claims:
- Prompt design → human creative choices in how to instruct the AI
- Iterative editing → showing the human modified, rejected, refined
- Architectural decisions → the human chose the structure, not just the syntax
- Review artifacts → code review comments, PR descriptions, test writing
- Attribution trail → git history showing who did what, and with which tool
The last point is where most teams fail. Without an attribution trail, you can't demonstrate which commits involved human creative control and which were verbatim AI output. A commit that says `fix: update handler` could be 100% human or 100% AI — and there's no way to tell after the fact.
Recommended attribution practices
For Claude Code users
Claude Code automatically adds `Co-Authored-By: Claude` trailers when making commits on your behalf. Keep this enabled. It creates a paper trail of which commits had AI involvement — your baseline for any future authorship analysis.
For Copilot / Cursor users
Neither tool adds git attribution by default. Consider a commit-msg hook that flags commits where significant AI assistance was used, or establish a team convention for PR descriptions (e.g., "AI-assisted: Copilot used for X"). It's not automatic, but it's better than nothing.
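A minimal sketch of such a hook, assuming a team convention of an `AI-assisted:` line in the commit message (the convention is ours, not a git standard). Install it as `.git/hooks/commit-msg`; git passes the message file path as the first argument:

```shell
#!/bin/sh
# Sketch of a commit-msg hook: reports whether a commit message
# carries an AI-attribution marker. A real hook might print a
# warning to stderr or exit non-zero to reject the commit.
check_msg() {
  # Accept either the team-convention "AI-assisted:" line or a
  # standard Co-Authored-By trailer at the start of a line.
  if grep -qiE '^(AI-assisted:|Co-Authored-By:)' "$1"; then
    echo "attribution-present"
  else
    echo "attribution-missing"
  fi
}
# Hook entry point: git invokes this with the message file as $1.
if [ -n "$1" ]; then
  check_msg "$1"
fi
```

The hook can't verify the claim, only prompt for it — but a prompted, human-written attribution line is exactly the kind of contemporaneous record the evidence chain above requires.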
For engineering leaders
Establish an AI code policy before August 2026. At minimum: (1) which tools are approved, (2) what attribution is required, (3) which code categories require human review. You'll need this documented for EU AI Act compliance even if your legal team doesn't require it yet.
What Comes Next
The Supreme Court's non-decision in March 2026 is not a permanent answer. Congress is being pressured from multiple directions: tech companies want copyright protection for AI-assisted work, content creators want stronger protection against AI training on their material, and international trade partners want US copyright law to align with emerging EU standards.
Meanwhile, litigation is accelerating. Copyright claims are shifting from training data arguments (did your AI train on my code?) to output arguments (does your AI's output reproduce my protected expression?). The Thomson Reuters ruling opened that door. More cases are coming.
The practical reality: the law will not catch up with AI adoption before your next production release. But the teams that are documenting their AI usage now — tracking attribution, maintaining human authorship evidence, auditing which tools are in use — will be in a dramatically better position when the regulatory environment crystallizes. The ones that aren't will be scrambling to reconstruct a paper trail that no longer exists.
The bottom line
At 51% AI-assisted commit rate and growing, the question is no longer whether AI-generated code is in your codebase. It's whether you can prove which commits had sufficient human creative control to be protectable — and which tools generated the rest. That answer lives in your git history. Or it doesn't.
Sources
- Morgan Lewis, "US Supreme Court Declines to Consider Whether AI Alone Can Create Copyrighted Works," March 2026
- Paddo.dev, "All the Liability, None of the Protection," 2026
- Bloomberg Law, "IP Issues With AI Code Generators"
- JDSupra / MoFo Tech, "AI Trends for 2026 — Copyright Litigation Shifts"
- McKinsey Developer Productivity Study, February 2026 (4,500+ developers, 150 enterprises)
- GitIntel scan data, 6,500 commits across 13 major open-source repos, March 2026
- Opsera, "AI Coding Impact 2026 Benchmark Report"
- EU AI Act enforcement timeline, European Commission, 2026
How much of your codebase is AI-generated?
GitIntel scans your git history and surfaces every AI-attributed commit — which tools were used, which developers, which files. It's the attribution trail you'll need.
```shell
# Install
curl -fsSL https://gitintel.com/install.sh | sh

# Scan your repo
cd your-repo
gitintel scan --format json
```
Open source (MIT) · Local-first · No data leaves your machine
Data collected March 2026. Legal information is educational only and does not constitute legal advice. Consult qualified counsel for specific compliance questions.