AI_FOR_CYNICAL_DEVS
Module 05 // 30 minutes // Copilot

Code Generation Without Shooting Yourself in the Foot

Copilot is already watching me type. They call it an 'AI Pair Programmer.' That implies a partnership. This is not a partnership. This is a hyperactive junior developer hovering over my shoulder, screaming code snippets into my ear before I can even finish typing a variable name. I type `const` and it suggests a 40-line Fibonacci function. I am just trying to define a constant, you absolute maniac!

— Letters from Hell

The Lie We All Tell

Everyone in your standup is lying about how they write code.

“Yeah, I implemented the authentication system yesterday” - they mean Copilot wrote it and they fixed three bugs.

“Just finished the payment integration” - they mean they asked ChatGPT how to use the Stripe API and copy-pasted with minor edits.

“Refactored the user service” - they mean the AI rewrote it and they’re hoping the tests still pass.

Most senior devs don’t admit this because there’s still this weird stigma. Like using AI is cheating. Like “real” developers write every line from scratch, consulting only their deep knowledge and maybe the official docs.

That developer doesn’t exist anymore. Or if they do, they’re shipping code half as fast as everyone else and wondering why they’re not getting promoted.

Here’s the truth: using AI to write code is now the default. Not using it is the exception. The question isn’t “should I use AI?” but “how do I use it without accidentally shipping a security vulnerability disguised as a helpful function?”

What’s Actually Happening When Code Appears

[Illustration: A developer typing at a keyboard while a ghostly AI entity hovers behind them, arms reaching around to type suggestions. The screen shows code appearing faster than the developer can read it. The AI has an eager, slightly manic expression.]

Let’s talk about what’s really going on when you hit Tab and 20 lines of code materialize.

The AI has been trained on billions of lines of code. GitHub repos, Stack Overflow answers, tutorial blogs, open source projects, that one intern’s commented-out debugging print statements from 2019. All of it. The good code, the bad code, the code that should never have been written.

When you type function calculateTax(, the AI is thinking: “I’ve seen this pattern 47,000 times. In 82% of cases, the next parameter is amount. In 71% of cases, there’s also a taxRate parameter. In 23% of cases, there’s horrific floating point math that loses pennies. Let me suggest the most common pattern.”
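
Here's that penny-losing pattern in miniature (an illustrative sketch, not literal Copilot output):

// What the AI has seen most often: float math on money
function calculateTax(amount, taxRate) {
    return amount * taxRate; // floats drift: 0.1 + 0.2 === 0.30000000000000004
}

// The habit it has seen less often: do money math in integer cents
function calculateTaxCents(amountCents, taxRate) {
    return Math.round(amountCents * taxRate);
}

console.log(calculateTax(19.99, 0.0825));     // fractional pennies
console.log(calculateTaxCents(1999, 0.0825)); // 165 cents, no drift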

It’s not understanding your business logic. It’s not considering your edge cases. It’s not thinking “what if the tax rate changes mid-calculation” or “what about international taxes” or “should this handle negative amounts.”

It’s autocomplete on steroids. Very good autocomplete. Disturbingly good autocomplete. But still autocomplete.

The dangerous part? The code looks right. It reads well. It has sensible variable names. It even has error handling sometimes. Your brain sees code that looks professional and thinks “this is fine.”

That’s when you ship the bug.

The Three Stages of AI-Assisted Coding

[Illustration: A triptych showing three panels: 1) A developer on a rocket ship labeled 'VELOCITY' looking thrilled, 2) The same developer crashed into the ground looking traumatized surrounded by bug reports, 3) The developer calmly working alongside a robot, both reviewing code carefully.]

Everyone goes through these stages. Knowing where you are helps.

Stage 1: The Honeymoon

You just started using AI code generation. It’s amazing! You type a comment and get a whole function! You’re shipping features at lightning speed! Your velocity is through the roof! You feel like a 10x developer!

This lasts about two weeks.

Then you hit production issues. The code that looked perfect had a subtle bug. The AI suggested a deprecated API. The error handling didn’t handle the actual errors your system produces. The security “best practice” it implemented was actually from a tutorial about what NOT to do.

You spend three hours debugging code you didn’t write and don’t fully understand.

Stage 2: The Hangover

Now you don’t trust the AI at all. You review every suggestion with paranoid scrutiny. You second-guess everything. Sometimes you delete the AI’s suggestion and write it yourself out of spite.

This is actually the most productive stage, but it feels terrible. You’re slower than Stage 1 but way safer. You’re learning what the AI is bad at.

You stay here for a few months, getting better at spotting AI mistakes.

Stage 3: The Partnership (where you want to be)

You’ve developed intuition. You know when to trust the AI (boilerplate, standard patterns) and when to scrutinize carefully (security, business logic, anything money-related).

You use AI for the boring shit and your brain for the important shit. You’re fast AND safe. You’ve figured out the actual productivity boost isn’t from accepting every suggestion - it’s from accelerating the parts that don’t matter so you can focus on the parts that do.

Most developers are in Stage 1 or 2. Very few have made it to Stage 3. The ones who have aren’t loud about it because they’re too busy shipping.

The Password Reset Incident

Let me tell you about Marcus. Senior engineer, great track record, knows his shit. Started using Copilot about a year ago.

He’s building a password reset feature. Nothing fancy, standard stuff. Writes a comment: “Generate password reset token and email it to the user.”

Copilot generates a beautiful function. Clean code, good variable names, uses a crypto library for the token, even has email sending logic. Marcus reviews it, looks good, ships it.

Three weeks later, security audit finds the vulnerability. The function generates a reset token and emails it to whatever email address is provided in the request. It doesn’t verify the email belongs to an account. It doesn’t check if the user exists.

Anyone can trigger password reset emails for any email address. An attacker could:

  1. Flood any email with reset requests (DoS)
  2. Test which email addresses have accounts (information leak)
  3. If they control the email, they can reset anyone’s password (account takeover)

How did this happen? Copilot learned from tutorials. Tutorials often show the “happy path” without security considerations. The code LOOKED right. It had error handling, used proper crypto, sent emails correctly. It just didn’t validate the fundamental assumption: this email should belong to a user.

Marcus knew about this class of bugs. He’s fixed them before. But he was in Stage 1 - trusting the AI because the code looked professional.

The fix took 5 minutes. Finding the bug cost a security audit and Marcus’s confidence. He’s in Stage 2 now. Probably staying there for a while.
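
In code, the bug and the five-minute fix look roughly like this (a hypothetical Express sketch, not Marcus's actual code; sendResetEmail and the db helpers are stand-ins):

const crypto = require('crypto');

// The vulnerable shape: trusts whatever email is in the request
app.post('/password-reset', async (req, res) => {
    const token = crypto.randomBytes(32).toString('hex');
    await sendResetEmail(req.body.email, token); // anyone can trigger this, for any address
    res.json({ sent: true });
});

// The fix: only act when the email maps to a real account,
// and respond identically either way so attackers can't enumerate accounts
app.post('/password-reset', async (req, res) => {
    const user = await db.users.findByEmail(req.body.email);
    if (user) {
        const token = crypto.randomBytes(32).toString('hex');
        await db.resetTokens.save(user.id, token);
        await sendResetEmail(user.email, token);
    }
    res.json({ sent: true }); // same response whether or not the account exists
});

Rate limiting covers the flooding vector - that's middleware, not shown here.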

Why the AI Keeps Suggesting Deprecated Shit

Here’s something that will drive you insane: the AI confidently suggests code that hasn’t been best practice since 2019.

// You're writing React code
// The AI suggests:
componentWillMount() {
    // This lifecycle method has been deprecated since React 16.3
}

// Or suggests moment.js when date-fns is now standard
// Or suggests request package when it's been deprecated for years
// Or suggests var when everyone uses const/let now

Why? Because the AI’s training data includes all the old code that’s still on GitHub. The 2015 tutorial that ranks high on Google. The Stack Overflow answer from 2017 that has 10,000 upvotes. The open source project that hasn’t been updated in 4 years.

The AI doesn’t know time. It doesn’t know “this was good advice in 2018 but we’ve moved on.” It just knows “this pattern appears frequently in my training data.”
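
For the record, the current equivalents of those suggestions look like this (a quick sketch):

// React: lifecycle logic now lives in hooks, not componentWillMount
import { useEffect } from 'react';

function Widget() {
    useEffect(() => {
        // runs after mount - the modern home for that setup logic
    }, []);
    return null;
}

// Dates: date-fns or the built-in Intl APIs instead of moment.js
// HTTP: the built-in fetch instead of the deprecated request package
// Variables: const/let instead of var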

The fix

You need to know what’s up-to-date.

You can’t outsource this to the AI.

When it suggests something, verify it’s not from the deprecated pile. Check the docs. Look at the package’s GitHub - when was the last commit? Read the warnings.

This is the tax you pay for velocity. The AI makes you fast. Staying updated is your job.

The Problem with Looking Right

The most dangerous bugs from AI code aren’t the ones that crash. Those are easy to catch. The dangerous ones are the bugs in code that works.

Example #1: Developer uses AI to generate a date comparison function

function isAdult(birthDate) {
    const age = new Date().getFullYear() - new Date(birthDate).getFullYear();
    return age >= 18;
}

Ships to production. Works fine for months. Then on January 1st, a 17-year-old born on December 31st gets marked as an adult and buys alcohol - the year subtraction says 18 even though their birthday is almost twelve months away. The function doesn't account for birth month and day. It only looks at years.

The AI saw this pattern in its training data. It’s common. It’s concise. It works… most of the time. That’s the problem.
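
The correct version compares full dates, not just years (a minimal sketch):

function isAdult(birthDate) {
    const birth = new Date(birthDate);
    // Adulthood starts on the 18th birthday, not on January 1st of that year
    const eighteenthBirthday = new Date(
        birth.getFullYear() + 18,
        birth.getMonth(),
        birth.getDate()
    );
    return new Date() >= eighteenthBirthday;
}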

Example #2: AI generates error handling

try {
    await saveUser(userData);
    res.json({ success: true });
} catch (error) {
    res.status(500).json({ error: 'Something went wrong' });
}

Looks reasonable. Ships. Works. Until someone tries to debug why user creation is failing and gets “Something went wrong” with no details. The AI generated generic error handling that swallows useful information.
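
A version that keeps the useful information might look like this (a sketch; swap console.error for whatever logger your system actually monitors):

try {
    await saveUser(userData);
    res.json({ success: true });
} catch (error) {
    // Keep the details somewhere a human will actually see them
    console.error('saveUser failed:', error);
    // Tell the client what failed without leaking internals
    res.status(500).json({ error: 'Failed to save user' });
}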

The pattern: AI generates code that works for the happy path and maybe one or two obvious error cases. It doesn’t think about edge cases, weird inputs, or debugging experience.

Your job is to think about the things the AI didn’t.

Learning to Spot the Lies

[Illustration: A detective with a magnifying glass examining lines of code on a wall like a crime scene, with red circles and 'SUSPICIOUS' labels on various code blocks. Some code has a fake mustache.]

After enough time with AI-generated code, you start recognizing patterns. The tells. The things that look fine but aren’t.

Tell #1: Missing validation

The AI loves to assume inputs are valid.

async function processPayment(amount, userId) {
    const user = await getUser(userId);
    const charge = await stripe.charges.create({
        amount: amount,
        currency: 'usd',
        customer: user.stripeId
    });
    return charge;
}

Looks fine until you think about it for 10 seconds:

  • What if amount is negative? Or zero? Or not a number?
  • What if userId doesn’t exist? getUser returns null?
  • What if user.stripeId is undefined?
  • What if the Stripe call fails?

The AI generates the happy path. You add the sad path.
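
A hardened version of the same function, as a sketch (the error messages and the getUser contract are assumptions):

async function processPayment(amount, userId) {
    // Validate inputs before touching money
    if (!Number.isFinite(amount) || amount <= 0) {
        throw new Error(`Invalid payment amount: ${amount}`);
    }
    const user = await getUser(userId);
    if (!user) {
        throw new Error(`No user found for id ${userId}`);
    }
    if (!user.stripeId) {
        throw new Error(`User ${userId} has no Stripe customer id`);
    }
    try {
        return await stripe.charges.create({
            amount: amount,
            currency: 'usd',
            customer: user.stripeId
        });
    } catch (err) {
        // Surface Stripe failures with context instead of a bare stack trace
        throw new Error(`Stripe charge failed for user ${userId}: ${err.message}`);
    }
}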

Tell #2: Incomplete error handling

The AI adds try/catch but doesn’t think about what errors actually happen.

try {
    const response = await fetch(url);
    const data = await response.json();
    return data;
} catch (error) {
    console.error('Error:', error);
    return null;
}

This catches errors, but:

  • Doesn’t check if response.ok (non-200 status codes don’t throw)
  • Returns null on error (caller doesn’t know what failed)
  • Logs to console (might not be monitored)
  • Doesn’t distinguish between network errors, parsing errors, etc.

The AI knows errors exist. It doesn’t know how to handle them well.
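
Handled properly, each failure mode gets its own story (a sketch):

async function fetchJson(url) {
    let response;
    try {
        response = await fetch(url);
    } catch (err) {
        // Network-level failure: DNS, offline, timeout
        throw new Error(`Network error fetching ${url}: ${err.message}`);
    }
    if (!response.ok) {
        // fetch does NOT throw on 4xx/5xx - you have to check
        throw new Error(`HTTP ${response.status} from ${url}`);
    }
    try {
        return await response.json();
    } catch (err) {
        // The server answered with something that wasn't JSON
        throw new Error(`Invalid JSON from ${url}: ${err.message}`);
    }
}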

Tell #3: Security “best practices” from tutorials

The AI learned from tutorials that demonstrate concepts, not production systems.

// Generate JWT token
const token = jwt.sign(
    { userId: user.id },
    'secret-key',
    { expiresIn: '24h' }
);

Tutorial code! Has problems:

  • Hardcoded secret (should be environment variable)
  • No token refresh mechanism
  • Only includes userId (might need more claims)
  • 24h expiry might be too long or too short depending on use case

The AI knows JWT structure but not JWT best practices.
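
The production-minded version of the same call, as a sketch (assuming the jsonwebtoken package and env-based config):

// Pull the secret from configuration and fail loudly if it's missing
const secret = process.env.JWT_SECRET;
if (!secret) throw new Error('JWT_SECRET is not set');

const token = jwt.sign(
    { userId: user.id },
    secret,
    { expiresIn: '15m' } // short-lived access token; pair it with a refresh flow
);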

Tell #4: Using libraries that might not be installed

The AI will confidently import packages you don’t have.

import _ from 'lodash';
import moment from 'moment';
import axios from 'axios';

// Hope you have these installed!

Always check your package.json. The AI doesn’t know what you have installed.

The New Skill: Speed Reading Code You Didn’t Write

Here’s a skill you didn’t need before: reading code fast enough to review AI suggestions in real-time.

When Copilot suggests 15 lines, you have maybe 3 seconds to decide: accept, reject, or accept-then-edit. You can’t deep-review every line. You need to skim and spot red flags.

What you’re looking for in 3 seconds:

Red flag check:

  • Security-sensitive operations (database, auth, encryption) → READ CAREFULLY
  • External API calls → Check error handling
  • Loops → Check for infinite loops or performance issues
  • Conditionals → Check for edge cases
  • Math operations → Check for division by zero, floating point issues

Green light patterns:

  • Standard library usage → Probably fine
  • Simple data transformations → Usually safe
  • Boilerplate code → Accept and move on
  • Code matching your existing patterns → Safe

This is a new muscle. You’re not trying to understand every line in real-time. You’re doing rapid triage: safe, suspicious, dangerous. Accept the safe stuff. Review the suspicious stuff. Rewrite the dangerous stuff yourself.

After a few months, this becomes automatic. You’ll spot the AI’s favorite mistakes without thinking about it.

Your Actual Workflow Now

Forget the idealized workflows from blog posts. Here’s what actually happens when you code with AI:

You need to add user authentication. You type:

// Authenticate user with email and password
// Check database, verify bcrypt hash, return JWT

AI generates 30 lines. You skim it. Looks mostly right. You accept.

You run it. Doesn’t compile - AI imported a package you don’t have. You add it to package.json, install, run again.

Now it compiles but tests fail. AI’s error messages don’t match your error handling system. You fix that manually.

Tests pass. You read the code more carefully. Notice it’s not handling the case where the user doesn’t exist - it just returns null. You add proper error throwing.

Notice it’s logging the password in the error case (bad!). You fix that.

Notice the JWT expiry is hardcoded instead of using your config. You fix that.

Run tests again. Pass. Code review with yourself. Ship.
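
Where the code roughly lands after all that (a hypothetical sketch; db, config, and AuthError are stand-ins for your own pieces):

const bcrypt = require('bcrypt');
const jwt = require('jsonwebtoken');

async function authenticateUser(email, password) {
    const user = await db.users.findByEmail(email);
    if (!user) {
        // Same message for both failures so callers can't tell which field was wrong
        throw new AuthError('Invalid credentials');
    }
    const valid = await bcrypt.compare(password, user.passwordHash);
    if (!valid) {
        // Log the attempt if you like, but never the password
        throw new AuthError('Invalid credentials');
    }
    return jwt.sign({ userId: user.id }, config.jwtSecret, {
        expiresIn: config.jwtExpiry // from config, not hardcoded
    });
}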

Time saved: Maybe 40%. You didn’t have to type the bcrypt comparison logic, the JWT signing, the database query structure. You skipped the “how do I use this library again?” phase.

Time spent: Fixing imports, adjusting to your patterns, adding missing error handling, removing bugs.

Net result: Faster than writing from scratch, slower than if the AI was perfect. Still worth it.

This is the reality. Not “AI writes perfect code.” Not “AI is useless.” It’s “AI does 70% of the work, you do 30%, together you’re faster than you alone.”

The Uncomfortable Equilibrium

Here’s where we’ve landed, whether we like it or not:

What changed:

  • You write less code from scratch
  • You read more code you didn’t write
  • You ship faster but debug differently
  • You need new skills (rapid code review, pattern recognition)

What didn’t change:

  • You’re still responsible for bugs
  • You still need to understand what the code does
  • Tests are still mandatory (more than ever)
  • Security is still your problem
  • Edge cases are still your job

The new developer skillset:

  • Writing good prompts (comments that generate good code)
  • Rapid code review (spotting issues in seconds)
  • Pattern matching (recognizing AI’s common mistakes)
  • Knowing when to trust vs verify
  • Testing discipline (because AI code needs it)

Some days you’ll feel like you’re cheating. Some days you’ll feel like you’re babysitting an overconfident intern. Most days it’s somewhere in between.

The developers who thrive aren’t the ones who accept every suggestion. They’re the ones who learned to work with an imperfect tool that makes them faster at the boring parts so they can focus on the parts that matter.

The Part Nobody Talks About

Using AI for code generation changes how you think about programming.

You stop thinking “how do I implement this?” and start thinking “how do I verify this implementation?” The locus of work shifts from writing to reviewing. From creating to judging.

Some people love this. They were never that into the minutiae of syntax anyway. They like thinking about architecture and letting the AI handle the details.

Some people hate it. They feel disconnected from the code. They miss the flow state of writing. They don’t trust work they didn’t do themselves.

Both are valid. But only one is going to stay employed.

The industry has decided: AI-assisted coding is the new normal. You can resist, but you’ll be the person who refused to use Stack Overflow in 2010. Technically correct that you’re “really” programming. Practically slower than everyone else.

What This Actually Means For You

Stop pretending you’re not using AI. Everyone knows. Your manager knows. Your team knows. They’re using it too.

Start getting good at it. Learn to review fast. Learn to spot the common mistakes. Learn when to trust and when to verify. Build the muscle memory.

And for fuck’s sake, test the code. Especially the code the AI wrote that looks perfect.

Mandatory Knowledge Check

Question 1 / 10

Why does the module claim AI is often better at debugging than generating code?