Building Momentum with LLM Coding

December 11, 2025 ai, llm, development, practice

LLM-assisted coding feels slow at first. Painfully slow. You’re constantly second-guessing the output, reading every line, checking for hallucinated imports and invented APIs. This is where the fear creeps in – the worry that you’re losing your craft, becoming a glorified prompt engineer who can’t actually write code anymore. Every suggestion feels like it needs to be verified, and the verification takes longer than just writing the code yourself would have.

But something shifts if you stick with it. The friction fades. Trust builds. You start to recognise the patterns in the AI’s mistakes and learn to head them off. You develop an intuition for when to accept a suggestion wholesale and when to scrutinise it. The flywheel starts turning.

This momentum isn’t automatic. You have to set things up correctly. The AI is a powerful but distractible collaborator, and your job is to create an environment where it can succeed. Here’s what I’ve learned (my preferred tool is Claude Code, so my examples probably lean that way):

  • One goal per session. A single context window should contain a single goal. When you finish a feature, reset the session and clear the context before starting the next one. Don’t let the details of feature A bleed into the implementation of feature B. The LLM will try to be helpful by remembering everything, but that helpfulness becomes confusion when it starts applying patterns from yesterday’s authentication work to today’s reporting feature.

  • Periodic review sessions. Every few major features – I aim for every three or four – run a dedicated review session. Not to build anything new, but to audit what you’ve built together. Ask the LLM to examine the codebase for architectural drift, DRY violations, and emerging patterns that should be formalised. These sessions are invaluable for catching the inconsistencies that accumulate when you’re moving fast. The AI is quite good at spotting its own mess if you explicitly ask it to look.

  • Save your plans. Lean heavily on planning mode. Before any significant feature, have the LLM create a detailed implementation plan and write it to a file. These plans become institutional memory. When you start a related feature later, you can say “Remember how we planned the notification system? Follow the same patterns for the alerting feature.” The AI doesn’t actually remember, of course, but it can read the file, and that’s close enough.

  • One log file. Bring everything from your application – backend, frontend, workers, whatever – into a single log file. Don’t make the LLM waste time and tokens figuring out how to access logs across containers or parse browser console output. When something breaks, you want to paste a log snippet and say “what’s happening here?” not spend five minutes explaining where logs live.
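One way to get there is to funnel every component through a shared logging helper. This is a minimal Python sketch, not from any particular framework: `get_logger` and the `app.log` path are hypothetical names, and real projects would also need to capture subprocess and browser output into the same file.

```python
import logging

# Hypothetical setup: every component (backend, workers, etc.) imports this
# helper, so all log lines land in one shared file with a component tag.
LOG_FILE = "app.log"

def get_logger(component: str) -> logging.Logger:
    """Return a logger tagged with the component name, writing to the shared file."""
    logger = logging.getLogger(component)
    if not logger.handlers:  # avoid duplicate handlers on repeat calls
        handler = logging.FileHandler(LOG_FILE)
        handler.setFormatter(logging.Formatter(
            "%(asctime)s [%(name)s] %(levelname)s %(message)s"
        ))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

# Usage: each process tags its own lines, so a pasted snippet is self-explanatory.
get_logger("backend").info("request handled in 12ms")
get_logger("worker").warning("retrying job 42")
```

Because every line carries its component tag, you can paste any snippet of `app.log` into a session without explaining where it came from.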

  • One project API. Give your project a well-documented Makefile with clear targets: make test, make serve, make deps, make lint. Instruct the LLM to only interact with the project through that interface. You don’t want it trying to figure out how to run your test suite from first principles every time. Provide the control panel; don’t make it rummage through wires.
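A sketch of what that Makefile might look like, assuming a Python stack; the recipes here (pytest, ruff, and so on) are placeholders for whatever your project actually runs:

```makefile
# Hypothetical project control panel. The target names are the contract;
# the recipes are stand-ins for your real commands.
.PHONY: deps test serve lint

deps:   ## install dependencies
	pip install -r requirements.txt

test:   ## run the full test suite
	pytest -q

serve:  ## run the app locally
	python -m app.server

lint:   ## static checks
	ruff check .
```

The point is the stable interface: the LLM only ever needs to know `make test`, never the incantation behind it, and you can swap the recipe without re-teaching anything.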

  • Don’t optimise for token limits, ever. This is a luxury opinion, I know, but if you’re trying to structure your work to stay inside a context limit, you’re fighting the wrong battle. You’re crippling your workflow and diminishing the AI’s capabilities to save a few quid. Just buy the unlimited account. The productivity gains will pay for it many times over.

  • Invest in your tests. Parallelise them. Organise them into groups so you can run fast feedback loops on related code. Instruct the LLM to run the tests constantly – after every significant change, not just at the end of a feature. Most LLM coding tools support hooks; I use a stop hook to automatically run tests every time the AI finishes responding. The AI should never be allowed to move on without knowing whether it just broke something.
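As a concrete sketch, a stop hook can live in Claude Code’s settings file; I’m writing the schema from memory and it changes between versions, so check it against the current hooks documentation rather than copying this verbatim:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "make test" }
        ]
      }
    ]
  }
}
```

Note this runs the tests through the project’s single `make` interface, so the hook stays one line no matter how the suite evolves.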

  • Alert on idle. While you’re setting up hooks, add one that alerts you when the LLM is idle. A sound, a terminal bell, whatever works. These tools can be remarkably fast when they’re in flow, and you don’t want to be the bottleneck.
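The alert can be another hook entry; as above, treat the event name and schema as my assumption to verify against your tool’s docs. Here the command just rings the terminal bell:

```json
{
  "hooks": {
    "Notification": [
      {
        "hooks": [
          { "type": "command", "command": "printf '\\a'" }
        ]
      }
    ]
  }
}
```

Swap the bell for a desktop notification command if your terminal lives in the background.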

The pattern here is simple: treat the LLM like a capable but context-limited collaborator. It’s possible to build real momentum and genuine trust, just as you would with a human partner. But you have to be the responsible party in the relationship. You maintain the structure. You enforce the discipline. You set the AI up to succeed, and then it will.

Thanks to my friend Vim for listening to me ramble about all this. It helped me get my thoughts together.