Security · March 29, 2026 · 7 min read

Vibe-Coded Apps Are Shipping With Critical Security Holes. Here's the Data.

10.3% of Lovable-generated apps had critical RLS flaws. 21% of YC W'25 companies are 91%+ AI-generated. 63% of vibe coders are non-developers. The security bill is coming due.

Published by GitIntel Research

TL;DR

  1. The security gap is about to get much wider.

The median vibe coder is a product manager, designer, or domain expert who has a problem to solve and discovered that Claude, Lovable, or Bolt.new can build them a web app before their coffee gets cold. They are not thinking about RLS. They are not thinking about SQL injection. They are thinking about whether the UI looks right.

This isn't a criticism — it's a systemic gap. The platforms enabling vibe coding are optimizing for shipping speed. Security is invisible until it isn't.

The Scale Problem

The 10.3% RLS flaw rate at Lovable isn't the whole story. It's just the most measurable slice of a much larger pattern.

**45% of all AI-generated code contains at least one OWASP Top-10 vulnerability.** That figure, from Veracode's 2026 State of Software Security report, has been essentially flat since 2023. Syntax pass rates have improved dramatically (95%+), but security pass rates remain stuck in the 45–55% range. AI code is getting cleaner. It's not getting safer.

What makes vibe coding different from ordinary AI-assisted development is the feedback loop. A professional developer using Claude Code gets a code review. Their PR goes through CI. A linter runs. A security scan might fire. They have colleagues who can push back.

A non-developer using Lovable to build a SaaS tool for their team has none of that. The ship button is the entire pipeline. When something is wrong, they find out from a user.

And the volume is growing fast. Gartner projects that **60% of all new code will be AI-generated by the end of 2026**. Right now, AI-authored code makes up an estimated 26.9% of all production code globally — up from 22% the previous quarter (DX Research, Q1 2026). The slope is steep.

What to Check Before You Ship a Vibe-Coded App

If you've built something with an AI app builder and it handles user data, here are the five checks that catch the most critical issues. These are not exhaustive — but skipping them is how you end up in Goldsmid's next report.

1. Verify RLS is enabled on every table

If you're using Supabase (the default backend for Lovable, Bolt, and many others), open the Supabase dashboard, go to Authentication → Policies, and confirm every table that stores user data has at least one RLS policy active. No policy = any authenticated user can read all rows.

-- Check which tables have RLS disabled
SELECT tablename, rowsecurity
FROM pg_tables
WHERE schemaname = 'public'
  AND rowsecurity = false;
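If that query turns up tables with `rowsecurity = false`, enabling RLS plus one owner-scoped policy is usually the fix. A minimal sketch, assuming a hypothetical `public.notes` table with a `user_id` column populated from Supabase auth:

```sql
-- Enable RLS, then restrict rows to their owner.
-- Assumes user_id is set to auth.uid() when rows are inserted.
ALTER TABLE public.notes ENABLE ROW LEVEL SECURITY;

CREATE POLICY "owners_only" ON public.notes
  FOR ALL
  USING (auth.uid() = user_id);
```

With no policy at all, enabling RLS blocks every query — which is the safe failure mode. Add policies table by table until each access path you actually want is explicitly allowed.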

2. Search your codebase for hardcoded secrets

34% of scanned apps exposed API keys in client code. If you can read your source and see a string that looks like a key, so can anyone who opens DevTools.

# With GitIntel installed, scan for exposed secrets
gitintel secrets --path ./src

# Or manually grep for common patterns (-E enables | alternation)
grep -rE "sk_live|api_key|SUPABASE_SERVICE" ./src
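If you want the same check inside application tooling, a minimal sketch of a pattern-based detector (the patterns and the `looksLikeSecret` helper are illustrative assumptions, not GitIntel's actual rules):

```typescript
// Illustrative secret patterns — not exhaustive, and not GitIntel's real rule set.
const SECRET_PATTERNS: RegExp[] = [
  /sk_live_[A-Za-z0-9]{10,}/, // Stripe live secret keys
  /AKIA[0-9A-Z]{16}/,         // AWS access key IDs
  /SUPABASE_SERVICE_ROLE/,    // service-role key referenced in client code
];

// Returns true if a line of source matches any known secret pattern.
function looksLikeSecret(line: string): boolean {
  return SECRET_PATTERNS.some((pattern) => pattern.test(line));
}
```

Run it over every line of every file under `./src` and report matches with file and line number; anything it flags in client-shipped code belongs in a server-side environment variable instead.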

3. Confirm admin routes are gated

Check every route that lets a user modify or delete data. Is it behind authentication middleware? Is it checking that the authenticated user owns the resource they're modifying? AI tools often generate CRUD operations without adding ownership checks.
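The ownership check itself is small. A framework-agnostic sketch (the `Resource` shape, `assertOwnership` name, and error convention are assumptions for illustration, not any specific framework's API):

```typescript
interface Resource {
  id: string;
  ownerId: string;
}

// Throws unless the request is authenticated AND the user owns the resource.
// Call this in every mutating route handler before touching the database.
function assertOwnership(resource: Resource, userId: string | null): void {
  if (!userId) {
    throw new Error("401: not authenticated");
  }
  if (resource.ownerId !== userId) {
    throw new Error("403: authenticated, but not the owner");
  }
}
```

The failure mode AI builders tend to produce is the first check without the second: the route requires a logged-in user but never compares that user to the row being modified.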

4. Use parameterized queries — never string interpolation

SQL injection via unescaped inputs affected 6.2% of scanned apps. If you see code that builds a query by concatenating user input, rewrite it.

// Bad — vulnerable to SQL injection
const query = `SELECT * FROM users WHERE email = '${email}'`;

// Good — parameterized
const { data } = await supabase
  .from('users')
  .select('*')
  .eq('email', email);
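To see why the interpolated version is dangerous, trace what a crafted input does to the query string (a self-contained demonstration; `unsafeQuery` is a hypothetical helper, not code any tool generates):

```typescript
// Builds the query by string interpolation — exactly the pattern to avoid.
function unsafeQuery(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// A crafted "email" closes the string literal and rewrites the WHERE clause.
const malicious = "' OR '1'='1";
const query = unsafeQuery(malicious);
// query: SELECT * FROM users WHERE email = '' OR '1'='1'
// The condition is now always true — every row comes back.
```

A parameterized query never splices user input into SQL text, so the same payload is just a literal value that matches no email.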

5. Audit what your AI tool did, not just what you asked it to do

This is where GitIntel fits in. Run `gitintel scan` to see exactly which commits were AI-generated, and which files they touched. Then review those files specifically. You approved the feature — not necessarily the implementation.

This Is Not an Argument Against Vibe Coding

The 21% of YC W'25 companies shipping 91%+ AI-generated codebases are not making a mistake. They are moving fast, validating ideas, and building things that would have taken 10× longer two years ago. That is genuinely valuable.

The argument is that the security surface of a vibe-coded app is different from the security surface of a hand-crafted one, and most of the tooling, workflows, and mental models we have for securing software were designed for the latter.

The asymmetry: A developer writing a Supabase query from scratch has to think about RLS because they are constructing the query. An AI builder generating a full-stack app produces the query as a side effect of fulfilling a feature request. The human's attention is on the feature, not the infrastructure it sits on.

The platforms are starting to respond. Lovable patched its RLS defaults. Bolt added a pre-deploy security checklist. Several new projects are building AI-native security scanning specifically for vibe-coded apps. But the apps already in production — the ones built before these patches — are still out there.

92% of US developers now use AI coding tools at least weekly (Stack Overflow Developer Survey 2026). The window between "vibe coding is a niche thing" and "vibe-coded apps are everywhere" has already closed. The security tooling for the new reality needs to catch up.

Audit your AI-generated commits

GitIntel shows you exactly which commits were AI-generated and which files they touched — so you know where to focus your security review.

# Install GitIntel
curl -fsSL https://gitintel.com/install.sh | sh

# See your AI-generated commits
cd your-app
gitintel scan --show-files

View on GitHub

Open source (MIT) · Local-first · No data leaves your machine

Data sources: Hashnode State of Vibe Coding 2026, Taskade State of Vibe Coding 2026, Veracode State of Software Security Spring 2026, DX Research Q1 2026, Gartner AI Code Forecast 2026, Stack Overflow Developer Survey 2026.

