Vibe Coder Security Checklist: Securing AI-Generated Code (2025)
Look, I love vibe coding. Prompting Claude or Cursor to bang out a feature in 20 minutes that would've taken me a day? Amazing. Ship fast, iterate fast, that's the whole point.
But here's what nobody talks about: AI assistants write insecure code constantly. I'm not exaggerating. Stanford researchers found that about 40% of code suggestions from Copilot had security issues. And GitGuardian tracked 12.8 million secrets exposed on GitHub in 2024 - a good chunk of that from people copy-pasting AI output without checking.
So I made myself a checklist. Takes 5 minutes before deploy. Has saved me from some genuinely embarrassing vulnerabilities.
The 5-Minute Check
Before you push to production, verify these five things:

1. No hardcoded secrets
2. No SQL injection
3. Input validation on everything
4. No XSS sinks
5. Authorization checks on every endpoint
That's it. Five things. If you do nothing else, do these.
Thing 1: Hardcoded Secrets
AI assistants love to show you working code. Working code often includes real-looking API keys as examples. And then you modify the example, forget to swap the key for an env var, and deploy your Stripe secret key to production.
I see this pattern all the time:
```javascript
// Claude gave you this
const stripe = require('stripe')('sk_live_abc123...');

// What you should have
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);
```
Before you deploy, search your codebase for these strings:
- `sk_live`, `sk_test` (Stripe)
- `AKIA` (AWS)
- `sk-` (OpenAI)
- `ghp_`, `gho_` (GitHub)
- `apiKey`, `api_key`, `secret`, `password`
If any of those return hits in your JavaScript files, you've got a problem.
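One quick way to run that search is a recursive grep over your source tree (the pattern list below mirrors the strings above; adjust the `--include` globs to match your project):

```shell
# Scan source files for strings that look like hardcoded secrets.
# -r: recurse, -n: show line numbers, -I: skip binary files, -E: extended regex
grep -rnIE "sk_live|sk_test|AKIA|ghp_|gho_|api_key|apiKey" \
  --include='*.js' --include='*.ts' . \
  && echo "Possible secrets found - review before deploying" \
  || echo "No obvious secrets found"
```

A dedicated scanner like gitleaks or trufflehog will catch more (entropy-based detection, git history), but a grep before every deploy costs you nothing.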
Thing 2: SQL Injection
AI generates this vulnerable pattern more than you'd think:
```javascript
// What the AI gives you
const query = `SELECT * FROM users WHERE email = '${email}'`;

// What you need
const query = 'SELECT * FROM users WHERE email = $1';
db.query(query, [email]);
```
The first one is catastrophically bad. Someone types `' OR 1=1--` as their email and they just got access to your entire user table.
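You can see exactly what that input does to the interpolated query just by building the string:

```javascript
// Demonstration: what the attacker's "email" does to an interpolated query.
const email = "' OR 1=1--";
const unsafe = `SELECT * FROM users WHERE email = '${email}'`;

// unsafe is now: SELECT * FROM users WHERE email = '' OR 1=1--'
// The WHERE clause matches every row, and the trailing quote is
// neutralized by the SQL comment marker (--).
console.log(unsafe);
```

With a parameterized query, that same string is treated as a literal value to compare against, not as SQL syntax.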
If you're using an ORM like Prisma or Drizzle, you're probably fine - they parameterize by default. But if you're writing raw SQL anywhere, double-check every single query.
Thing 3: Input Validation
AI assumes all inputs are friendly. They're not. Every single thing a user can input needs validation:
- Form fields (email format, length limits, required fields)
- URL parameters (is that ID actually a number?)
- API request bodies (does this match the expected schema?)
- File uploads (is that "image" actually an executable?)
The rule: never trust client-side data. Validate everything server-side, even if you already validated it on the frontend.
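As a sketch of what server-side checks look like (the field names here are hypothetical, and in a real app you might reach for a schema library like zod or joi instead of hand-rolled checks):

```javascript
// Minimal server-side validation sketch for a signup request body.
// Returns a list of error strings; empty list means the input passed.
function validateSignup(body) {
  const errors = [];

  // Email: must be a string that looks like an address
  if (typeof body.email !== 'string' ||
      !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(body.email)) {
    errors.push('invalid email');
  }

  // Name: required, with a length limit
  if (typeof body.name !== 'string' ||
      body.name.length < 1 || body.name.length > 100) {
    errors.push('name must be 1-100 characters');
  }

  // Optional numeric ID: if present, must actually be a positive integer
  if (body.referrerId !== undefined) {
    const id = Number(body.referrerId);
    if (!Number.isInteger(id) || id < 1) {
      errors.push('referrerId must be a positive integer');
    }
  }

  return errors;
}
```

The same idea applies to URL parameters and upload metadata: coerce, check the type, check the bounds, reject anything that doesn't fit.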
Thing 4: XSS (Cross-Site Scripting)
When AI helps you render dynamic content, it often reaches for innerHTML:
```javascript
// Vulnerable
element.innerHTML = userProvidedContent;

// Safe
element.textContent = userProvidedContent;
```
In React, watch out for dangerouslySetInnerHTML - the name's a hint that you probably shouldn't use it. If you absolutely must render HTML from user input, sanitize it with a library like DOMPurify.
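If you're building an HTML string server-side and can't use textContent, escaping the HTML-significant characters is the minimum bar (this is a hypothetical helper for plain text only; for rendering rich user HTML, use a real sanitizer like DOMPurify):

```javascript
// Escape the five HTML-significant characters so user input renders
// as text instead of markup. Order matters: & must be replaced first.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```

Most template engines (JSX, Handlebars, EJS's `<%= %>`) do this escaping for you by default; the danger zones are the escape hatches like innerHTML and dangerouslySetInnerHTML.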
Thing 5: Authorization Checks
This is the sneaky one. AI will write you a beautiful endpoint that does exactly what you asked. But it'll forget to check whether the user making the request is actually allowed to do that thing.
```javascript
// What AI writes
app.delete('/api/posts/:id', async (req, res) => {
  await db.deletePost(req.params.id);
  res.json({ success: true });
});

// What you need
app.delete('/api/posts/:id', async (req, res) => {
  const post = await db.getPost(req.params.id);
  if (!post || post.authorId !== req.user.id) {
    return res.status(403).json({ error: 'Not your post' });
  }
  await db.deletePost(req.params.id);
  res.json({ success: true });
});
```
That missing auth check means anyone can delete anyone's post. Go through every endpoint that modifies data and ask: "Is there a check that verifies the user is allowed to do this?"
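If you have many endpoints like this, one option is to factor the check into a reusable middleware. This is a sketch: `requirePostOwner` is a hypothetical name, and it assumes `req.user` has already been set by your auth layer and that `db.getPost` exists as in the example above.

```javascript
// Hypothetical reusable ownership check for Express-style routes.
// Runs before the real handler; short-circuits if the requester
// doesn't own the post.
function requirePostOwner(db) {
  return async (req, res, next) => {
    const post = await db.getPost(req.params.id);
    if (!post) return res.status(404).json({ error: 'Post not found' });
    if (post.authorId !== req.user.id) {
      return res.status(403).json({ error: 'Not your post' });
    }
    next(); // ownership confirmed - fall through to the real handler
  };
}

// Usage:
// app.delete('/api/posts/:id', requirePostOwner(db), async (req, res) => {
//   await db.deletePost(req.params.id);
//   res.json({ success: true });
// });
```

Centralizing the check means a new endpoint can't silently ship without it just because the AI forgot to include it in the handler.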
Prompting for Secure Code
You can actually get better code from AI if you ask for it explicitly.
Weak prompt: "Add user registration"

Better prompt: "Add user registration with: bcrypt password hashing, parameterized database queries, input validation for email format, rate limiting on the endpoint, and secrets stored in environment variables"

The AI knows how to write secure code. It just defaults to the simplest working example unless you tell it otherwise.
Before You Deploy
Run a quick security scan. It'll catch exposed API keys in your JavaScript bundle and flag other obvious issues. Takes 30 seconds. Much faster than explaining to your users why their data got leaked.