AI is a tool. Judgment is the job.
Why critical thinking still matters in AI-assisted development
Critical thinking is your most valuable asset in the age of AI-assisted development.
The rise of tools enabling so-called “vibe coding” — where you describe an app and suddenly you have a working prototype — feels almost like magic. And honestly, it is impressive. But as engineers, we still need to draw a clear line between rapid prototyping and responsible, production-ready development.
As Gergely Orosz discussed on The Pragmatic Engineer podcast in the episode “Beyond Vibe Coding with Addy Osmani,” there’s a real risk of losing technical judgment when we rely too heavily on LLMs.
The consensus from engineering leaders like Addy Osmani is clear:
AI is a powerful collaborator and a serious velocity multiplier.
It is not a replacement for engineering fundamentals.
The human engineer is still in control
If you want to build secure, scalable, and maintainable software, AI can help — but you still have to own the outcome.
Here are a few principles that have worked well for me.
1. Read and understand the code you commit
If you can’t explain a piece of code in your own words, you probably shouldn’t be shipping it.
AI-generated suggestions can look convincing, but they often hide subtle problems. Accepting them without a deep review is how you accumulate technical debt and security risks.
Before a change lands in main, I try to make sure I could justify every line in a code review — even if I didn’t write it manually.
2. Master the last mile
In my experience, models are very good at getting you through the first 60–70% of a solution. They can scaffold, refactor, and generate boilerplate extremely well.
Where they usually struggle is the last mile:
- messy edge cases
- performance constraints
- production incidents
- domain-specific rules
- legacy integrations
That final stretch is where engineering judgment, debugging skills, and system knowledge still matter most. This is where experienced engineers make the difference.
Related to this, read Addy Osmani’s article:
The 70% problem: Hard truths about AI-assisted coding.
3. Work from a clear spec, not just a prompt
“Give in to the vibes” is fun for experiments. It’s risky for real systems.
Before I involve an agent in serious work, I try to be clear about:
- what problem I’m solving
- what “done” looks like
- what needs to be tested
- what can fail
Breaking work into small, verifiable chunks — a disciplined engineering mindset — reduces compounding errors and makes it much easier to guide the model.
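One lightweight way to pin down “what done looks like” before involving an agent is to write the acceptance criteria as executable assertions first. The `slugify` function and its cases below are purely illustrative (not from the article); the reference implementation is just a minimal sketch to make the spec runnable.

```python
import re

def slugify(title: str) -> str:
    """Minimal reference implementation for the sketch:
    lowercase, replace runs of non-alphanumerics with '-', trim dashes."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The spec, as small verifiable chunks — "done" means all of these pass:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  ") == "spaces"
assert slugify("Already-Slugged") == "already-slugged"
```

Each assertion is a small, verifiable chunk: if the model’s output breaks one, you know exactly which part of the problem it misunderstood, instead of discovering it three prompts later.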
When things go wrong (and they will)
Models still hallucinate. They still misunderstand context. They still produce overly complex solutions.
When that happens, what gets you back on track is:
- your ability to reason about the system
- your understanding of the codebase
- your willingness to dig in and debug
No prompt will replace that.
Use AI to accelerate, not to abdicate
I use AI every day. It makes me faster. It helps me explore ideas. It reduces mechanical work.
But I never treat it as a decision-maker.
I see it as a junior engineer who:
- works incredibly fast
- has read everything
- sometimes confidently gets things wrong
You wouldn’t merge a junior’s code without review. You shouldn’t do that with AI either.
Don’t outsource your judgment.
Use AI to move faster — but keep responsibility for quality, security, and maintainability firmly in your hands.
