Developing Good Engineering Taste
I was talking with a group recently about LLM-assisted engineering. I was asked how to cross the capability gap: how to feel in control with LLMs rather than dependent on them. My reply was the predictable one: develop “good taste” in engineering. That’s the standard advice now, but what does it actually mean in practice?
It’s one of those things that’s obviously real: some engineers have better taste than others, and taste generally develops over time. But it’s frustratingly hard to pin down strategies for speeding that process up. We all know it when we see it. A senior engineer glances at a pull request and immediately spots problems. A tech lead reads a design proposal and knows it won’t scale. That’s taste and experience. But how do you get there? And how do you get there in an era where you interact with your systems via an LLM proxy?
Read Lots of Code
The best way to develop taste is exposure to quality. You can’t recognise good if you haven’t seen it.
- Study well-maintained open source projects. Django itself is excellent. So is Flask. The requests library and similar mature codebases are good places to begin.
- Read the code of experienced colleagues during reviews. Not just to approve, but to learn. Ask yourself why they structured things the way they did. Role-play: what did my colleague see that made them choose this design, and what have they experienced in the past that informed it?
- Look at commit histories of mature projects to see how they evolved. The journey matters as much as the destination. Sometimes the most instructive thing is seeing what got deleted.
- Pay attention to why certain patterns emerge repeatedly across unrelated projects. If three different codebases independently arrived at the same abstraction, there’s probably something there.
This isn’t passive reading. You’re building a library of reference points in your head. When you’ve seen enough solutions to similar problems, you develop intuition for what works.
Build the ‘Wrong’ Reflex
When reviewing LLM-generated code (or your colleagues’, or your own), practice asking:
- “Would I want to maintain this in six months?”
- “Is this the simplest thing that could work?”
- “What happens when requirements change slightly?”
- “Am I creating coupling or debt I’ll regret?”
- “What would I have to explain to a new team member about this?”
This becomes instinctive with practice, but you have to actively engage with it rather than just accepting whatever works. The LLM doesn’t care about your future self. You have to. Projects move fast, and a design that seems self-evident right now may make no sense in a year, once the context that justified it is gone.
The key word here is reflex. Early on, you need to think through these questions explicitly. Eventually, code that violates these principles just looks ‘wrong’. But you don’t get to the instinctive stage without going through the deliberate stage first.
Understand the Why Behind Patterns
LLMs are excellent at producing code that follows patterns. They’re terrible at knowing when to break them. Part of having good taste is understanding:
- Why we use certain abstractions, not just that we do
- What problems specific patterns actually solve
- The tradeoffs involved in different approaches
- When the conventional wisdom doesn’t apply
An LLM will happily apply the repository pattern to a three-line script. It’ll add dependency injection to something that will never have more than one implementation. It’ll create an abstraction layer between two things that should just talk to each other directly. Knowing when that’s absurd is taste, and these things should trigger that same ‘wrong’ reflex.
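To make that concrete, here’s a minimal Python sketch; the JSON-settings scenario and every name in it are invented for illustration. The first version is the kind of ceremony an LLM might wrap around “read a value from a JSON file”; the second just does it.

```python
import json
from abc import ABC, abstractmethod


# The over-engineered version: an interface that will only ever have
# one implementation, for a problem that has exactly one caller.
class SettingsRepository(ABC):
    @abstractmethod
    def get(self, key: str) -> str: ...


class JsonSettingsRepository(SettingsRepository):
    def __init__(self, path: str) -> None:
        self._path = path

    def get(self, key: str) -> str:
        with open(self._path) as f:
            return json.load(f)[key]


# The version with taste: talk to the file directly.
def get_setting(path: str, key: str) -> str:
    with open(path) as f:
        return json.load(f)[key]
```

Neither version is wrong in isolation. The taste is in knowing that the abstract repository only earns its keep once a second implementation genuinely exists.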
This is where reading about software design (not just code) becomes valuable. Understand the problems that led to the solutions. When you know that a pattern emerged to solve a specific pain point, you can recognise when that pain point isn’t present and the pattern doesn’t apply.
Work Outside the LLM Safety Net
This is uncomfortable but necessary. Set challenges like:
- “Implement this ticket without any AI assistance.”
- “Create a hypothesis about this issue using only docs and code reading. Test the hypothesis.”
- “Refactor this module and write a sentence to explain every single edit you make.”
The struggle builds the intuition that makes LLM output assessable. If you’ve never felt the friction of solving something hard yourself, you won’t recognise when the LLM is handing you something easy that looks hard. You won’t know what is appropriate versus what is just verbose.
I imagine this is the reason musicians still practice scales even though they could just play the finished piece. The fundamentals create the foundation that makes everything else possible. Engineers who skip the fundamentals can produce output, but they can’t debug it, extend it, or explain it.
Learn to Spot the LLM Tells
LLMs have characteristic weaknesses. With experience, you start recognising the patterns:
- Overly defensive code that checks for conditions which can’t occur: handling errors the type system prevents, or adding null checks on things that are never null (see the sketch after this list)
- Missing edge cases that require domain understanding. A good tell is that the happy path works but the real-world scenarios don’t
- Following outdated patterns and libraries from training data: approaches that made sense five years ago but have since been superseded
- Intermixing concerns: database calls next to business logic next to presentation concerns
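As a hypothetical illustration of the first two tells (invented for this post, not output from any particular model), here’s the defensive noise to watch for, followed by what a careful human would write instead:

```python
def average(values: list[float]) -> float:
    # Checks for conditions the signature already rules out:
    if values is None:
        raise ValueError("values must not be None")
    if not isinstance(values, list):
        raise TypeError("values must be a list")
    total = 0.0
    for v in values:
        if v is not None:  # a list[float] never contains None
            total += v
    # Silently returning 0.0 for an empty list is the missing-edge-case
    # tell: the happy path works, but the real-world scenario is masked.
    return total / len(values) if values else 0.0


def average_clean(values: list[float]) -> float:
    # The same function without the noise; an empty list now fails
    # loudly (ZeroDivisionError) instead of pretending to succeed.
    return sum(values) / len(values)
```

The defensive version looks thorough, which is exactly the problem: it launders a real design question (what should happen for an empty list?) into boilerplate that quietly answers it wrong.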
Learning to recognise “this feels like AI-generated code” is part of developing taste. It’s the same instinct that helps you spot code written by someone who doesn’t understand what they’re doing, because in a meaningful sense, the LLM doesn’t. It’s pattern matching without comprehension.
The Meta-Skill
The core of good taste is this: you need to be able to have a conversation with the code, understand the story which it tells, and know when it is lying to you.
That only comes from understanding at a level deeper than syntax. It comes from having written enough bad code yourself to recognise it. It comes from maintaining systems long enough to feel the consequences of piles of shortcuts.
LLMs are remarkably good at producing plausible code. Plausibility isn’t correctness, and it isn’t quality.
There’s no shortcut to this. But there is a path: read widely, build deliberately, question constantly, and embrace the productive discomfort of feeling overwhelmed. The struggle transforms knowledge into judgment.