Talking About AI in Interviews Without Sounding Like an Idiot
Somewhere in a dimly lit office...
"Tell me about your experience with AI."
47 seconds of silence
"I've used ChatGPT."
Interview ends.
Table of Contents
- What Interviewers Are Actually Asking
- The Questions You’ll Get (And How to Answer Them)
- The Ethical Questions
- Red Flags That Kill Your Credibility
- Questions to Ask Them
- When to Admit You Don’t Know
- The Interview Prep Checklist
- The 30-Second Pitch
- What You’ve Learned
The question is coming. In every interview, for every role, at every company. Someone will ask you about AI.
Maybe it’s “What’s your experience with AI tools?” Maybe it’s “How do you see AI impacting software development?” Maybe it’s the dreaded “Tell me about a time you used AI to solve a problem.”
Your answer will put you in one of three buckets:
- The Fanboy: “AI is going to change everything! I use it for all my code now!”
- The Dinosaur: “I don’t trust AI. Real developers write their own code.”
- The Professional: “Here’s what I’ve built, here’s what I learned, here’s what I think.”
Guess which one gets hired.
What Interviewers Are Actually Asking
When an interviewer asks about AI, they’re rarely testing your knowledge of transformer architectures. They’re trying to figure out:
Are You Keeping Up?
The subtext: “Will this person be able to adapt as tools change, or will they become obsolete?”
They don’t need you to be an AI expert. They need to know you’re not going to refuse to use new tools because “that’s not how we did it at my last job.”
Good signal: You’ve tried things. You have opinions based on experience.
Bad signal: You either haven’t touched it or you’ve swallowed the hype whole.
Can You Think Critically?
The subtext: “Can this person evaluate new technology rationally, or do they just follow trends?”
AI tools are imperfect. Anyone who’s used them knows this. If your answer suggests AI is either perfect or useless, you’ve failed the critical thinking test.
Good signal: You can articulate both benefits AND limitations from personal experience.
Bad signal: Unqualified enthusiasm or unqualified skepticism.
Do You Ship Things?
The subtext: “Has this person actually built something, or do they just talk about building things?”
This is where the portfolio project from Chapter 07 pays off. Anyone can talk about AI. Fewer people can show something they built.
Good signal: “I built X. Here’s what I learned.”
Bad signal: “I’ve been meaning to try that.”
A mediocre project you can actually demo beats a brilliant project that only exists in your head. Interviewers want to see that you can take something from idea to implementation. The RAG documentation search from Chapter 07 is specifically designed to be impressive-but-achievable.
The Questions You’ll Get (And How to Answer Them)
“What’s your experience with AI tools?”
Bad answer: “I use ChatGPT sometimes.”
Worse answer: “I don’t really use them, I prefer to code myself.”
Good answer:
“I use Copilot daily for boilerplate and routine code—it probably saves me 30-40 minutes a day on stuff like writing tests and standard CRUD operations. I’ve also built a documentation search feature using RAG that’s live in production. I’ve learned to be careful about blindly accepting suggestions, especially around security and error handling, but it’s become a normal part of my workflow.”
What makes this good:
- Specific tools mentioned
- Specific use cases
- Quantified benefit (even if approximate)
- Shows critical awareness of limitations
- References real work
“How do you see AI changing software development?”
This is a trap question. Go too bullish and you sound naive. Go too bearish and you sound like you’re in denial.
Bad answer: “AI will replace most developers in five years.”
Worse answer: “It’s just hype, nothing will really change.”
Good answer:
“It’s already changing how I work day-to-day—I spend less time on boilerplate and more time on architecture and problem-solving. I think it’ll continue to raise the bar for what’s expected of developers. We’ll need to be better at reviewing and understanding code we didn’t write, and better at knowing when AI is confidently wrong. But the core job—understanding problems, designing solutions, making trade-offs—that still requires human judgment.”
What makes this good:
- Grounded in current, personal experience
- Neither utopian nor dismissive
- Identifies specific skill shifts
- Acknowledges limitations
- Shows thoughtfulness
“Tell me about a time you used AI to solve a problem.”
If you built the portfolio project, this is your moment.
Framework for answering:
- Situation: What problem were you solving?
- Approach: Why did you choose an AI-based approach?
- Implementation: What did you actually build?
- Challenges: What went wrong and how did you fix it?
- Result: What was the outcome?
- Learning: What would you do differently?
Example answer:
“Our internal docs were getting hard to search—keyword search wasn’t cutting it because people asked questions in different ways. I built a RAG system that converts our documentation into embeddings and uses semantic search to find relevant sections, then generates answers with citations.
The main challenge was chunking the documents properly. My first approach used fixed-size chunks and the answers were often missing context. I switched to a smarter chunking strategy that respects document structure—headers, code blocks, paragraphs—and that improved answer quality significantly.
It’s been live for three months, handles about 200 queries a day, and we’ve seen support questions to the team drop by about 30%. The main limitation is it sometimes combines information from different docs in confusing ways, which I’d want to address with better source attribution.”
What makes this good:
- Specific problem with business context
- Technical details that show real implementation
- Honest about challenges and limitations
- Measurable results
- Shows learning and iteration
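If the interviewer digs into the chunking detail, be ready to sketch what “respects document structure” actually means. Here’s a minimal, hypothetical version for markdown docs; the function name and the 1,500-character threshold are illustrative, not what any particular system ships:

```python
# Hypothetical structure-aware chunker for markdown docs. Splits on
# headers, never inside a code fence, and breaks oversized sections
# at paragraph boundaries. All names and thresholds are illustrative.
def chunk_markdown(text: str, max_chars: int = 1500) -> list[str]:
    chunks: list[str] = []
    current: list[str] = []
    in_code_block = False

    for line in text.splitlines():
        if line.startswith("```"):
            in_code_block = not in_code_block
        # Start a new chunk at each header, but never mid code block.
        if line.startswith("#") and not in_code_block and current:
            chunks.append("\n".join(current))
            current = []
        current.append(line)
        # Oversized section: split at the next blank line (paragraph break).
        if line == "" and not in_code_block and sum(len(l) for l in current) > max_chars:
            chunks.append("\n".join(current))
            current = []

    if current:
        chunks.append("\n".join(current))
    return chunks
```

The exact code doesn’t matter. What matters is being able to explain why fixed-size chunks cut sentences and code examples in half, and how keeping headers and fences intact fixed that.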
If you didn’t build it, don’t claim you did. Technical interviewers will follow up with questions. “How did you handle rate limiting?” “What embedding model did you use and why?” “How did you evaluate retrieval quality?” If you’re making things up, you’ll get caught.
“What are the limitations of AI tools?”
This tests whether you’ve actually used them or just read about them.
Surface-level answer (shows you’ve read about AI): “They can hallucinate and make things up.”
Experienced answer (shows you’ve used AI): “A few things I’ve run into: They suggest deprecated APIs if their training cutoff is old. They’re confident even when wrong, so you have to verify everything. They’re bad at anything that requires keeping track of state across a long conversation. And the costs can add up fast—I learned that the hard way with a feature that was making way more API calls than I expected.”
Specific examples from your experience are gold. Generic limitations anyone could Google are not.
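That last point, the feature quietly making more API calls than expected, is the kind of story worth backing with numbers. Even a trivial usage meter gives you something to cite. This is a sketch, not any real SDK’s interface; `total_tokens` is a placeholder for whatever usage field your client actually returns:

```python
import functools

# Minimal usage meter: wrap any API-calling function and count what it does.
class UsageMeter:
    def __init__(self) -> None:
        self.calls = 0
        self.tokens = 0

    def track(self, fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            self.calls += 1
            result = fn(*args, **kwargs)
            # Placeholder: real SDKs expose usage data differently.
            self.tokens += getattr(result, "total_tokens", 0)
            return result
        return wrapper

# meter = UsageMeter()
# client.complete = meter.track(client.complete)
# Log meter.calls per request to find the feature making 40 calls
# where you expected 2.
```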
“How do you verify AI-generated code?”
This is a senior-level question testing your judgment.
Junior answer: “I run it and see if it works.”
Senior answer:
“I treat it like code from a new team member—someone who’s competent but doesn’t know our codebase or conventions. I check for our common issues: Does it handle errors properly? Is it using the patterns we’ve established? Are there any security concerns? Does it have tests?
I’m especially careful with anything involving authentication, database queries, or external APIs. AI tends to write the happy path well but miss edge cases. I also watch for deprecated dependencies—the models don’t always know what’s current.”
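If you want a concrete picture of what “happy path well, edge cases missed” means, it usually looks something like this. Both functions are illustrative, and the endpoint and field names are made up:

```python
import requests

# The typical AI suggestion: works in the demo, but has no timeout,
# no status check, and crashes if the response isn't the JSON you expected.
def get_user_email(user_id: str) -> str:
    response = requests.get(f"https://api.example.com/users/{user_id}")
    return response.json()["email"]

# After review: timeout, status check, explicit failure mode.
def get_user_email_reviewed(user_id: str) -> str | None:
    try:
        response = requests.get(
            f"https://api.example.com/users/{user_id}", timeout=5
        )
        response.raise_for_status()
    except requests.RequestException:
        return None  # Or log and re-raise, per your team's conventions.
    return response.json().get("email")
```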
"What do you think about [specific AI tool/technique]?”
If you don’t know, say so. Confidently bullshitting about a technology is the fastest way to tank an interview.
Bad: “Oh yeah, I’ve used LangChain extensively…” (when you haven’t)
Good: “I haven’t used LangChain directly—I built my RAG system with the raw APIs because I wanted to understand the mechanics. I’ve heard mixed things about the abstraction overhead. What’s been your team’s experience with it?”
Flipping it back to them accomplishes two things: you avoid bullshitting, and you show genuine curiosity.
The Ethical Questions
These are increasingly common, especially at larger companies.
“What are your concerns about AI in software development?”
They want to see that you’ve thought about this, not that you have a specific “correct” answer.
Good themes to touch on:
- Quality and reliability: AI-generated code that looks right but has subtle bugs
- Skill atrophy: Developers becoming dependent and losing fundamental skills
- Security: AI suggesting insecure patterns or leaking sensitive data into prompts
- Attribution: Questions about code ownership and licensing for AI-assisted work
Example answer:
“My main concern is quality assurance at scale. AI makes it easy to generate a lot of code quickly, but that code still needs to be reviewed and tested. I’ve seen cases where AI-generated code looked fine but had subtle issues—race conditions, missing edge cases, security vulnerabilities. The faster we generate code, the more disciplined we need to be about reviewing it.
I’m also watching the skill development question. Junior developers who lean too heavily on AI might miss out on building the foundational understanding that lets you debug effectively. It’s great for productivity but we need to make sure people still understand what’s happening.”
"How do you think about AI and job displacement?”
Minefield question. Don’t be dismissive (“AI will never replace developers”) or fatalistic (“we’re all doomed”).
Balanced answer:
“I think the job is changing more than disappearing. The tasks that are highly repetitive and pattern-matching—writing boilerplate, standard CRUD operations, basic tests—those are already being automated. But the job is more than those tasks.
Understanding what to build, why to build it, how to make trade-offs, how to work with stakeholders—that’s still very human. I think developers who adapt and learn to use these tools effectively will be more productive. Developers who refuse to adapt, or who try to compete with AI on the tasks AI is good at, will struggle.”
Red Flags That Kill Your Credibility
Avoid these and you’re already ahead of most candidates:
The Buzzword Avalanche
“I’ve been leveraging large language models to democratize access to AI-powered solutions through prompt engineering and agentic workflows.”
Translation: “I’ve read some blog posts.”
Real experience sounds specific. Fake experience sounds like a press release.
Unearned Confidence
“AI is definitely going to [any absolute prediction].”
Nobody knows where this is going. Anyone who claims certainty is either lying or foolish. The smartest people in AI are the ones who admit how much uncertainty there is.
Complete Dismissal
“AI-generated code is garbage. I’d never use it.”
This suggests you either haven’t really tried it, or you’re too stubborn to adapt. Neither is a good look.
No Limitations Mentioned
If you can’t name specific limitations from your own experience, interviewers assume you don’t have real experience.
The “I’ll Learn It On the Job” Gambit
“I haven’t used AI much but I’m a quick learner.”
This might have worked in 2023. In 2026, basic AI tool proficiency is expected. “I’ll learn it” is no longer a reasonable answer for something this fundamental.
Most candidates fall at the extremes—either AI cheerleaders or AI skeptics. The rare candidate who can say “I’ve used it, here’s what works, here’s what doesn’t, here’s what I built” stands out simply by being reasonable.
Questions to Ask Them
Interviews go both ways. Ask about their AI practices to learn about the company and demonstrate your thoughtfulness:
Good questions:
- “How is your team currently using AI tools? Any guidelines or policies?”
- “What’s been the biggest challenge with AI-generated code in your codebase?”
- “How do you handle code review for AI-assisted work?”
- “Are there parts of your system where AI isn’t allowed?”
What their answers tell you:
- “We don’t have policies yet”: They’re figuring it out. Could be exciting or chaotic.
- “We’ve banned AI tools”: Red flag. Either paranoid or behind.
- “We have clear guidelines and regular reviews”: Mature approach. Good sign.
- Vague non-answer: They probably haven’t thought about it much.
When to Admit You Don’t Know
This is the most important skill in any interview, but especially for AI topics where things change fast.
Scenarios where “I don’t know” is the right answer:
- Specific implementation details of a tool you haven’t used
- Predictions about future AI capabilities
- Questions about research papers you haven’t read
- Technical details outside your domain (ML theory if you’re a product engineer)
How to say it well:
“I’m not familiar with that specific approach. My experience has been more with [what you do know]. Could you tell me more about how your team uses it?”
What not to do:
- Bullshit your way through
- Pretend the question doesn’t matter
- Give a Wikipedia-level answer that reveals you’re just summarizing
Admitting gaps shows confidence and self-awareness. Bullshitting shows neither.
The Interview Prep Checklist
Before your next interview:
- Build something: The portfolio project or equivalent
- Use the tools: At least 2-4 weeks of daily Copilot/Claude/ChatGPT usage
- Know your numbers: How much time does it save you? What’s your workflow?
- Have a failure story: Something that went wrong and what you learned
- Know the limitations: From personal experience, not blog posts
- Practice the narrative: Out loud, not just in your head
- Prepare questions: What do you want to know about their AI practices?
The 30-Second Pitch
If you only have time for one answer about AI, make it this:
“I use [specific tools] daily for [specific tasks]. I built [specific project] that [specific outcome]. The main things I’ve learned are [specific limitation] and [specific best practice]. I’m genuinely interested in [specific aspect] and trying to get better at [specific skill].”
Fill in the blanks with your actual experience. Practice it until it’s natural.
Example:
“I use Copilot daily for boilerplate code and tests. I built a documentation search system using RAG that handles 200 queries a day for our internal team. The main things I’ve learned are that context management is critical—if you don’t structure your prompts carefully the quality drops fast—and that you absolutely have to verify everything, especially security-related code. I’m genuinely interested in how agents will change development workflows, and trying to get better at cost optimization for AI features.”
That’s 30 seconds. It conveys competence, experience, and critical thinking. It opens doors for follow-up questions you can answer.
It’s also true.
What You’ve Learned
After this chapter, you should be able to:
- Understand what interviewers are really asking when they ask about AI
- Present the portfolio project effectively
- Answer common technical and ethical questions
- Avoid the red flags that kill your credibility
- Ask good questions to evaluate the company
- Admit gaps without losing credibility
- Deliver a concise, credible summary of your AI experience
The interview isn’t about proving you’re an AI expert. It’s about proving you’re a thoughtful professional who can adapt to changing tools without losing your judgment.
That’s a low bar to clear, but most candidates don’t clear it. Now you will.