Why AI Writes Insecure Code (And What To Do About It)
If you've used ChatGPT, Cursor, Copilot, or Claude to write code, you've probably noticed something: the code works. It compiles. It does what you asked. It often even looks clean and well-structured.
What you probably haven't noticed is that it also frequently contains security vulnerabilities. Not because the AI is malicious, but because of fundamental limitations in how LLMs generate code.
Understanding why this happens will make you a better vibe coder.
The optimization problem
AI coding tools are trained to produce code that satisfies the prompt. When you say "build me a login page," the AI optimizes for a login page that works — one that accepts an email and password, validates them, and lets the user in.
Security is a negative requirement. It's about things that shouldn't happen: unauthorized access, data leaks, injection attacks. The AI isn't trained to think about what shouldn't happen — it's trained to produce what you asked for. So it gives you a login page. It just doesn't give you a secure login page.
Specifically, AI-generated login pages regularly ship with:
- Hardcoded JWT secrets like const JWT_SECRET = "mysecret123"
- No rate limiting on the login endpoint — brute-force attacks welcome
- Sessions that never expire — a stolen token works forever
- No CSRF protection on form submissions
- Password validation on the client only — the server accepts anything
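The rate-limiting gap is the easiest one to close yourself. Here's a minimal sketch of a fixed-window in-memory limiter for a login endpoint — names and thresholds are illustrative, and a real deployment would back this with a shared store like Redis so it survives restarts and works across multiple servers:

```typescript
// Minimal sketch: fixed-window rate limiting per IP for a login route.
// WINDOW_MS and MAX_ATTEMPTS are illustrative values, not recommendations.
const WINDOW_MS = 60_000;  // 1-minute window
const MAX_ATTEMPTS = 5;    // allowed attempts per window

const attempts = new Map<string, { count: number; windowStart: number }>();

function allowLogin(ip: string, now: number = Date.now()): boolean {
  const entry = attempts.get(ip);
  // No record yet, or the window has elapsed: start a fresh window.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    attempts.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS; // block once the budget is spent
}
```

The point isn't this particular implementation — it's that the AI will happily generate the login handler without any limiter at all unless you ask.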
The training data problem
LLMs learn from public code. A massive amount of public code on GitHub is insecure. Tutorials, proof-of-concepts, hackathon projects, Stack Overflow snippets — this code was never meant for production. But it's in the training data, and it influences what the AI produces.
When thousands of GitHub repos have const API_KEY = "sk-..." in their source files, the AI learns that this is a common pattern. It doesn't learn that it's a bad pattern. It just learns that it's frequent.
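The fix for the hardcoded-secret pattern is mechanical: read secrets from the environment and fail loudly at startup if one is missing. A small sketch (the helper name is made up for illustration):

```typescript
// Sketch: require secrets from the environment instead of source code.
// Failing fast at startup beats a silent undefined later.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv("API_KEY");
```

Trained on thousands of repos that inline the key directly, the AI will default to the frequent pattern, not this one, unless prompted.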
This creates a specific set of issues:
- Outdated patterns. AI suggests deprecated APIs, old package versions, and coding styles that were standard in 2022 but have known issues in 2026.
- Tutorial-grade code. Code from tutorials omits security for brevity. The AI reproduces this brevity in production code.
- Copy-paste vulnerabilities. When Stack Overflow answers with security flaws get thousands of upvotes, the AI learns to replicate them.
The context window problem
Security is a cross-cutting concern. It's not about one file or one function — it's about how your entire system fits together. Does this API route check authentication? Does the middleware run before the handler? Are the database policies consistent with the application logic?
AI tools work one prompt at a time. When you ask for an API route, the AI doesn't know what middleware exists in your project, what auth provider you're using, or what database policies you've set up. It generates code in isolation, which means security gaps form at the boundaries between AI-generated pieces.
Common examples:
- AI creates an API route that assumes auth middleware exists — but it doesn't
- AI creates a database query that assumes RLS is enabled — but it isn't
- AI creates a frontend that hides admin buttons — but the API endpoint is still unprotected
- AI creates a payment webhook that assumes signature verification happens elsewhere — but it doesn't
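One defense against the first failure mode is to make the auth assumption explicit inside the handler instead of trusting that middleware runs first. A sketch, with deliberately simplified types (a real handler would check a session from your auth provider):

```typescript
// Sketch: deny-by-default auth check inside the handler itself,
// so the route stays safe even if no middleware is wired up.
// Session shape and route logic are illustrative.
type Session = { userId: string } | null;

function getReport(session: Session): { status: number; body: string } {
  if (!session) {
    return { status: 401, body: "Unauthorized" }; // explicit, local check
  }
  return { status: 200, body: `report for ${session.userId}` };
}
```

Duplicating the check is slightly redundant when middleware does exist, but it removes the cross-file assumption that AI-generated code can't verify.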
The speed problem
This is the most insidious one. AI makes you faster. That's the whole point. But faster code production without proportionally faster security review means more unreviewed code in production.
Before AI tools, a solo developer might write 200 lines of code a day. They'd review most of it mentally as they wrote it. With AI tools, the same developer can produce 2,000 lines a day. But they're not doing 10x more security review. They're often doing less, because the code looks right and the app works.
The result: entire applications ship to production with zero security review on any of the code.
The "it works" trap
Here's the fundamental problem: insecure code works. A SQL injection vulnerability doesn't throw an error. An exposed API key doesn't cause a compile failure. An unprotected admin route returns data just fine.
When you test your app locally, everything works. You can log in. You can create records. Payments go through. The vulnerability only becomes visible when someone exploits it. By then, your database is dumped, your API key has $15K in charges, or your users' data is on a paste site.
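SQL injection is the clearest example of "it works." Both versions below return the right row for a normal email; only one of them survives a hostile input. The parameter style follows node-postgres-style `$1` placeholders; table and column names are illustrative:

```typescript
// Injectable: user input is spliced directly into the SQL string.
// Works perfectly in local testing with normal emails.
function unsafeQuery(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// Parameterized: the input travels separately from the SQL text,
// so it can never change the query's structure.
function safeQuery(email: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE email = $1", values: [email] };
}
```

Feed the unsafe version the input `' OR '1'='1` and the resulting query matches every user in the table — no error, no crash, just a full data dump that functional tests will never surface.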
Functional testing doesn't catch security issues. You need a different kind of check.
What to do about it
None of this means you should stop using AI to write code. The productivity gain is real and the code quality is often genuinely good. But you need to add a security review step to your workflow. Here's what that looks like in practice:
1. Know the high-risk areas
Not all AI-generated code needs the same level of scrutiny. Focus your review on:
- Authentication and authorization — login flows, middleware, permission checks
- Database queries — anything that touches user input and a database
- API routes — every endpoint needs auth, validation, and rate limiting
- Payment handling — webhooks, checkout flows, subscription management
- Environment variables — make sure nothing sensitive is hardcoded
2. Use established libraries for security-critical features
Don't let AI build your auth from scratch. Use Supabase Auth, Clerk, or Auth0. Don't let AI write raw SQL — use an ORM or query builder. Don't let AI handle payment webhook verification — use the Stripe SDK's built-in methods.
The pattern: let AI build the UI and business logic, but use proven libraries for anything security-sensitive.
3. Prompt for security explicitly
AI won't add security unless you ask for it. Be specific:
- "Add authentication middleware to this route"
- "Use parameterized queries, not string concatenation"
- "Store the API key in an environment variable"
- "Add rate limiting to this endpoint"
- "Verify the webhook signature before processing the event"
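To make the last prompt concrete: in practice you'd use your payment provider's SDK (Stripe's SDK does this verification for you), but the underlying idea is an HMAC comparison. A sketch using Node's built-in crypto module, with illustrative names:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch: verify an HMAC-SHA256 webhook signature before trusting the payload.
// Real providers add nuances (timestamps, signature schemes); prefer their SDK.
function verifySignature(payload: string, signature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(payload).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so check lengths first;
  // the constant-time comparison prevents timing attacks on the signature.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Without this check, anyone who discovers your webhook URL can POST fake "payment succeeded" events.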
This helps, but it requires you to know what to ask for. Which brings us to the last point.
4. Automate the security review
You can't manually review every line of AI-generated code for security issues. There's too much of it, and the issues are too subtle. You need automated scanning.
That's what ShipSafe does. Connect your GitHub repo, and we'll scan every file against 50+ vulnerability patterns, cross-reference your dependencies against CVE databases, and give you an A-F safety score. Every finding includes what's wrong, why it matters, and a copy-paste fix.
Keep building fast with AI. Just scan before you ship.