<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom"><title>Jon Atkinson - Blog</title><link href="https://www.jonatkinson.co.uk/" rel="self" type="application/atom+xml"/><link href="https://www.jonatkinson.co.uk/blog/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/</id><updated>2026-03-26T12:00:00Z</updated><entry><title>The Hypocrisy of Agentic Coding Critics</title><link href="https://www.jonatkinson.co.uk/blog/hypocrisy-of-llm-coding-critics/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/hypocrisy-of-llm-coding-critics/</id><published>2026-03-26T12:00:00Z</published><updated>2026-03-26T12:00:00Z</updated><content type="html">&lt;p&gt;Every week I read another post arguing that LLM coding agents are fundamentally untrustworthy. The code is buggy. The output needs reviewing. The agent doesn&amp;rsquo;t fully understand the domain. It hallucinates progress. It tests only the happy path.&lt;/p&gt;
&lt;p&gt;All valid criticisms. I&amp;rsquo;ve been managing software engineers for years, and this list sounds awfully familiar.&lt;/p&gt;
&lt;p&gt;Consider engineers, real ones with degrees and salaries and opinions about type systems and ORMs and Kubernetes. They:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Produce code containing bugs. Every sprint. Consistently. 100% of the time.&lt;/li&gt;
&lt;li&gt;Provide cursory code reviews, approving changes they haven&amp;rsquo;t actually read.&lt;/li&gt;
&lt;li&gt;Begin work on features without fully understanding the domain, figuring they&amp;rsquo;ll learn as they go.&lt;/li&gt;
&lt;li&gt;Are optimistic about progress, sometimes to the point of fiction.&lt;/li&gt;
&lt;li&gt;Test only the happy path, because the edge cases are tedious and the deadline is Thursday.&lt;/li&gt;
&lt;li&gt;Push commits straight to live on a Friday afternoon without running the test suite, then disappear for the weekend.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;None of this is controversial. Every engineering manager has lived through all of it. And none of it stops us from hiring engineers, trusting them with production systems, or building entire organisations around their output.&lt;/p&gt;
&lt;p&gt;Instead, we built disciplines to compensate. Code review processes. QA teams. CI pipelines. Incident reviews. Post-mortems. The history of software engineering is largely the history of building systems to catch human mistakes before they reach production. We didn&amp;rsquo;t reject human engineers because they were fallible. We built structures around their fallibility because their contribution was worth the effort.&lt;/p&gt;
&lt;p&gt;Now consider the common arguments against LLM coding agents:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The code needs human review.&lt;/li&gt;
&lt;li&gt;The agent might introduce subtle bugs.&lt;/li&gt;
&lt;li&gt;It doesn&amp;rsquo;t understand the broader architecture.&lt;/li&gt;
&lt;li&gt;It can produce plausible-looking output that&amp;rsquo;s fundamentally wrong.&lt;/li&gt;
&lt;li&gt;It works best on the straightforward cases and struggles with nuance.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Read that list again. Swap &amp;ldquo;agent&amp;rdquo; for &amp;ldquo;new hire&amp;rdquo; and nothing changes. These are the same failure modes we&amp;rsquo;ve been managing in humans for decades. The difference is that when a human exhibits them, we call it a development opportunity. When an LLM exhibits them, we call it a fundamental flaw and reach for the &amp;lsquo;slop&amp;rsquo; accusation.&lt;/p&gt;
&lt;p&gt;So why the double standard?&lt;/p&gt;
&lt;p&gt;I think the discomfort is less about the quality of LLM output and more about the loss of a familiar model. We know how to manage humans. We have intuitions about when someone is struggling, when they&amp;rsquo;re bullshitting, when they need support. We&amp;rsquo;ve built careers around those intuitions. An LLM doesn&amp;rsquo;t fit neatly into that model, and that&amp;rsquo;s unsettling.&lt;/p&gt;
&lt;p&gt;But if you step back from the emotional response, the engineering problem is the same: you have a contributor that produces imperfect output, and you need processes to ensure that imperfect output doesn&amp;rsquo;t reach production unchanged. We solved this problem already. We solve it every time we onboard a new team member.&lt;/p&gt;
&lt;p&gt;There&amp;rsquo;s also something worth acknowledging about professional identity. For many engineers, the craft of writing code &lt;em&gt;is&lt;/em&gt; the job. The idea that a machine can do a version of it, even an imperfect version, feels like a challenge to something personal. That&amp;rsquo;s understandable, and I don&amp;rsquo;t think dismissing that feeling is helpful. But it shouldn&amp;rsquo;t be confused with a technical argument about capability, even though the technical argument is frequently put forward in place of the emotional reaction.&lt;/p&gt;
&lt;p&gt;We need to concentrate on structures, not perfection.&lt;/p&gt;
&lt;p&gt;The value of a coding agent isn&amp;rsquo;t that it produces perfect code. Neither does anyone on your team. The value is that it produces &lt;em&gt;reviewable&lt;/em&gt; code at a speed and volume that changes what&amp;rsquo;s possible.&lt;/p&gt;
&lt;p&gt;Organisations built management structures around messy, nondeterministic humans because the value of human contribution was obvious despite the mess. The same logic applies here. The question was never whether the output is flawless. It never has been, for anyone. The question is whether you can build processes around the tool that capture its value while containing its weaknesses.&lt;/p&gt;
&lt;p&gt;And in many cases, the processes already exist. Code review catches bugs regardless of who or what wrote them. CI pipelines don&amp;rsquo;t care about the author. Tests either pass or they don&amp;rsquo;t. Type checkers don&amp;rsquo;t have opinions. Linters are indifferent to feelings. The infrastructure we built to manage human fallibility works just as well for managing LLM fallibility.&lt;/p&gt;
&lt;p&gt;In some cases it works &lt;em&gt;better&lt;/em&gt;, because an LLM will never take your review comments personally, never push back on making the &amp;lsquo;right&amp;rsquo; call because it&amp;rsquo;s Friday afternoon, and never quietly revert your suggested changes in a follow-up commit.&lt;/p&gt;
&lt;p&gt;So, the more productive conversation isn&amp;rsquo;t &amp;ldquo;should we use LLM agents?&amp;rdquo; but &amp;ldquo;what structures do we need to adapt?&amp;rdquo; Some existing processes transfer directly. Others need rethinking.&lt;/p&gt;
&lt;p&gt;Code review, for instance, needs to evolve. When a human writes code, the reviewer can assume a certain baseline of intent: the author understood the requirements, made deliberate choices, and can explain their reasoning. With LLM-generated code, those assumptions don&amp;rsquo;t hold. The reviewer needs to verify not just correctness but &lt;em&gt;appropriateness&lt;/em&gt;. That&amp;rsquo;s a different skill, and it&amp;rsquo;s one we should be actively developing in our teams rather than using its absence as a reason to avoid the tool.&lt;/p&gt;
&lt;p&gt;Similarly, testing becomes more important, not less. If you&amp;rsquo;re integrating LLM-generated code, your test coverage needs to be comprehensive enough to catch the kinds of mistakes LLMs characteristically make. This isn&amp;rsquo;t a new problem. It&amp;rsquo;s the same argument we&amp;rsquo;ve always made for good test coverage. The LLM just makes the case more urgent.&lt;/p&gt;
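&lt;p&gt;To make that concrete, here&amp;rsquo;s a hedged sketch (the helper and its bug are hypothetical, not from any real codebase) of the kind of edge-case test that earns its keep: the happy path passes, and the boundary cases are where the characteristic mistakes hide.&lt;/p&gt;

```python
# Hypothetical helper an agent might generate: parse "£1,250.50" into pence.
def parse_price(text: str) -> int:
    cleaned = text.strip().lstrip("£").replace(",", "")
    pounds, _, pence = cleaned.partition(".")
    return int(pounds) * 100 + int(pence or 0)

# The happy-path tests an LLM tends to write for itself:
assert parse_price("£1,250.50") == 125050
assert parse_price("99") == 9900

# The edge case a human with domain knowledge adds, exposing a lurking
# bug: single-digit pence are misparsed ("£2.5" should be 250, not 205).
assert parse_price("£2.5") == 205  # buggy behaviour, caught by review
```

&lt;p&gt;The model wrote plausible code and plausible tests; only the extra boundary case reveals the defect.&lt;/p&gt;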
&lt;p&gt;The real risk isn&amp;rsquo;t that LLMs produce imperfect code. It&amp;rsquo;s that teams adopt them &lt;em&gt;without&lt;/em&gt; the structures that make any contributor safe. An LLM without code review is dangerous. So is a human without code review. The failure mode is identical: unreviewed code reaching production.&lt;/p&gt;
&lt;p&gt;If your processes can&amp;rsquo;t catch bad code regardless of its source, the problem isn&amp;rsquo;t the LLM. The problem is your processes.&lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re going to argue that LLMs aren&amp;rsquo;t ready for production engineering work, I think you need to grapple with why the same flaws are acceptable in human engineers. If the answer is &amp;ldquo;humans understand what they&amp;rsquo;re doing,&amp;rdquo; I&amp;rsquo;d gently suggest sitting in on a few more code reviews. Understanding is a spectrum, and a significant amount of production code was written by people who were working it out as they went.&lt;/p&gt;
&lt;p&gt;The honest assessment isn&amp;rsquo;t that LLMs are reliable. They are not, and pretending otherwise does everyone a disservice. But they&amp;rsquo;re a new kind of contributor with a familiar set of failure modes, and we already have decades of experience managing those failures in humans. The question is whether we&amp;rsquo;re willing to adapt our playbook, or whether we&amp;rsquo;d rather pretend that human-only engineering was working flawlessly all along.&lt;/p&gt;
&lt;p&gt;Because it wasn&amp;rsquo;t. We just got used to it.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Thanks to Ben for listening to me rant about this in person, which turned into this post.&lt;/em&gt;&lt;/p&gt;</content></entry><entry><title>Nobody buys software from Intel</title><link href="https://www.jonatkinson.co.uk/blog/nobody-buys-software-from-intel/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/nobody-buys-software-from-intel/</id><published>2026-02-05T12:00:00Z</published><updated>2026-02-05T12:00:00Z</updated><content type="html">&lt;p&gt;When did you last care which chip was in your laptop? Not which laptop, or which operating system, but which actual silicon processor was doing the work. Unless you&amp;rsquo;re a gamer or someone running heavy compute workloads, the answer is probably never. You bought the machine because of what it could do, not because of what was inside it. Intel and AMD are invisible.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been thinking about this a lot recently, because I think it&amp;rsquo;s where LLM providers are heading. The companies pouring billions into training frontier models are building the CPUs of the AI era. They employ brilliant people, burn staggering amounts of capital, and produce genuinely remarkable technology. And if history is any guide, that&amp;rsquo;s not the comfortable position it sounds like.&lt;/p&gt;
&lt;h2 id="the-substrate"&gt;The substrate&lt;/h2&gt;
&lt;p&gt;Intel employs over 100,000 people. They spend north of $15 billion a year on research and development. The engineering required to design and fabricate modern processors is among the most complex work humans have ever attempted. The same is true of AMD, Qualcomm, Apple&amp;rsquo;s chip division, and the handful of other companies pushing the boundaries of what silicon can do.&lt;/p&gt;
&lt;p&gt;And yet, nobody buys software from Intel. Outside of very specific workloads, very few users pick an application because of which chip it runs on. When you open a browser or launch an IDE, the processor underneath is an implementation detail. It&amp;rsquo;s infrastructure. It&amp;rsquo;s the substrate on which everything else grows.&lt;/p&gt;
&lt;p&gt;This wasn&amp;rsquo;t always the case. There was a time when &amp;ldquo;Intel Inside&amp;rdquo; was a genuine selling point, when clock speed was the metric that mattered, when consumers actually cared about the difference between a Pentium III and a Pentium 4. But the value migrated upward. Operating systems, applications, services, platforms. Silicon became a commodity. Still essential, still incredibly sophisticated, but invisible.&lt;/p&gt;
&lt;p&gt;LLMs are starting to follow the same path. The difference between GPT-4o and Claude and Gemini, for most practical tasks, is shrinking. A year ago, model choice felt like it mattered enormously. Today, I still have preferences, but I&amp;rsquo;d struggle to articulate why in terms that would survive a blind test. Ask me why I prefer Claude over GPT for coding work, and I&amp;rsquo;ll give you an answer that&amp;rsquo;s more vibes than evidence. The models are converging, and the gap narrows with every release cycle.&lt;/p&gt;
&lt;h2 id="what-anthropic-is-actually-selling"&gt;What Anthropic is actually selling&lt;/h2&gt;
&lt;p&gt;Claude Code is the most interesting thing Anthropic makes right now, and it isn&amp;rsquo;t a model. It&amp;rsquo;s a product. A tool you install, configure, learn to use, build habits around. The underlying model matters, obviously, but what keeps me opening my terminal every morning is the workflow, not the weights. I&amp;rsquo;ve spent thousands of dollars on Claude Code at this point. I didn&amp;rsquo;t spend that money because of the model&amp;rsquo;s benchmark scores. I spent it because the tool makes me more productive, and the experience of using it keeps getting better.&lt;/p&gt;
&lt;p&gt;Think about what Anthropic gets from Claude Code that they don&amp;rsquo;t get from API access alone. They&amp;rsquo;re watching how people actually use the tool in real working environments. They see which workflows succeed and which ones frustrate. They see where people get stuck, where they lose trust, where they give up. That telemetry is extraordinarily valuable, and it&amp;rsquo;s driving development decisions as much as (possibly more than) the underlying model research. When Anthropic decides what to improve next, they&amp;rsquo;re not just looking at abstract capability evaluations. They&amp;rsquo;re looking at what real users do in real sessions.&lt;/p&gt;
&lt;p&gt;Consider the trajectory. Claude Code launched as a fairly basic CLI tool. Within months it had hooks, custom commands, MCP server integrations, project-level configuration. Those features didn&amp;rsquo;t come from model improvements. They came from watching people use the product and understanding what was missing. Product development, not research.&lt;/p&gt;
&lt;p&gt;The feedback loop here is the real competitive advantage: real usage generates insight, insight improves the product, the improved product attracts more usage. This is a product flywheel, not a model flywheel. Benchmark scores don&amp;rsquo;t capture it. Training compute doesn&amp;rsquo;t explain it. It&amp;rsquo;s the accumulated understanding of how humans and AI tools actually collaborate on real work.&lt;/p&gt;
&lt;p&gt;OpenAI is doing the same thing with Codex and ChatGPT. Google with Jules and Gemini&amp;rsquo;s integrations. Cursor has built an entire company on the premise that the product layer is where the fight happens. The model underneath is important in the way that a good engine is important in a car. Sure, there are enthusiasts who care very much whether an engine is a V8, or supercharged; but the average car buyer doesn&amp;rsquo;t test-drive an engine.&lt;/p&gt;
&lt;h2 id="models-tools-product"&gt;Models, tools, product&lt;/h2&gt;
&lt;p&gt;If you think of the stack as three layers (models at the bottom, tools in the middle, products at the top), each layer depends on the one below it. The model layer is where the raw capability lives. The tool layer is how that capability connects to the real world: file systems, APIs, databases, code execution. The product layer is what the user actually touches.&lt;/p&gt;
&lt;p&gt;The model layer is converging, as I said. The tool layer is where the most interesting work is happening right now. Context management, agentic workflows, knowing when to ask the user a question versus when to just act. These are the problems that separate &amp;ldquo;impressive demo&amp;rdquo; from &amp;ldquo;useful daily driver.&amp;rdquo; I&amp;rsquo;ve been using agentic coding tools for long enough now to know that the gap between those two things is enormous, and it lives almost entirely in the tool layer.&lt;/p&gt;
&lt;p&gt;When Claude Code decides to run my test suite after making a change, that&amp;rsquo;s a tool-layer decision. When it reads my project&amp;rsquo;s Makefile to understand how I&amp;rsquo;ve configured things, that&amp;rsquo;s tool-layer intelligence. When it stops and asks me a clarifying question instead of guessing, that&amp;rsquo;s tool-layer judgement. None of these things are about the model being smarter in some abstract sense. They&amp;rsquo;re about the product being better at the practical work of collaborating with a human.&lt;/p&gt;
&lt;p&gt;This is also where the gap between providers becomes most visible, at least for now. I&amp;rsquo;ve tested Codex, Jules, and Claude Code against the same tasks, and the differences in outcome had almost nothing to do with the underlying model&amp;rsquo;s raw capability. They came down to practical things: did the agent read the README? Did it understand how to install the dependencies? Did it run the full test suite or just a subset? These are tool-layer failures, not model-layer failures. The model was smart enough in every case. The tooling around it was what determined success or failure.&lt;/p&gt;
&lt;h2 id="diminishing-returns"&gt;Diminishing returns&lt;/h2&gt;
&lt;p&gt;The model race will cool off. It has to.&lt;/p&gt;
&lt;p&gt;Each generation of improvement costs more and delivers less visible gain for most users. This is the same trajectory as CPU clock speeds in the 2000s. Remember when Intel and AMD were in a raw performance war, pushing clock speeds higher and higher? They kept improving, but the improvements stopped mattering to most people. A 3GHz chip and a 3.5GHz chip felt identical for email and web browsing. The manufacturers pivoted to efficiency, power consumption, and integrated features instead.&lt;/p&gt;
&lt;p&gt;LLMs are approaching a similar threshold. The difference between a model that scores 90% on a coding benchmark and one that scores 93% is meaningful to researchers but largely invisible to someone using the tool to build a Django application. I can feel this in my own usage. Six months ago, a new model release would change how I worked. Today, I upgrade, notice it&amp;rsquo;s a bit faster or a bit better at long context, and carry on with my day. The improvements are real. They just don&amp;rsquo;t change my behaviour anymore.&lt;/p&gt;
&lt;p&gt;When that happens across the board, the companies that invested in the layers above the model will be the ones standing. The ones still chasing benchmark points will be selling a commodity. A very sophisticated commodity, sure. But a commodity nonetheless. And the margins on commodities are thin, no matter how clever the engineering underneath.&lt;/p&gt;
&lt;h2 id="or-maybe-they-wont-let-it-happen"&gt;Or maybe they won&amp;rsquo;t let it happen&lt;/h2&gt;
&lt;p&gt;The CPU analogy has a flaw, and it&amp;rsquo;s worth being honest about it.&lt;/p&gt;
&lt;p&gt;Intel and AMD grew up in a less aggressive era of technology capitalism. They largely accepted their role as component suppliers. They let the value migrate upward without fighting particularly hard to capture it. Intel tried to build products a few times (remember Intel&amp;rsquo;s phone chips? Their TV ambitions?) but never with the conviction needed to succeed.&lt;/p&gt;
&lt;p&gt;LLM vendors have the benefit of hindsight. They can see what happened to chipmakers and choose differently. OpenAI, Anthropic, and Google are already vertically integrating. They&amp;rsquo;re building the models AND the tools AND the products. They&amp;rsquo;re not content to be substrate. They want to own the full stack.&lt;/p&gt;
&lt;p&gt;This is a rational strategy, and they might pull it off. The current generation of AI companies are far more aggressive about capturing value across the entire chain than Intel ever was. They have the talent, the capital, and the motivation.&lt;/p&gt;
&lt;p&gt;But history is full of companies that tried to own everything and failed. Vertical integration is hard to sustain when the layers above you move faster than you can. Microsoft tried to own the browser, the search engine, the phone, the social network, the music player. They succeeded at some and failed spectacularly at others. Being the best at one layer doesn&amp;rsquo;t guarantee competence at the layers above. The skills required to train a frontier model are entirely different from the skills required to build a product that people love using every day. Great research labs don&amp;rsquo;t automatically produce great products. Ask Google about that one.&lt;/p&gt;
&lt;p&gt;The question is whether these companies can simultaneously be the best model provider, the best tool builder, and the best product company. That&amp;rsquo;s a lot of battles to fight at once. Startups like Cursor are already demonstrating that a focused team building exclusively at the product layer can compete with, and sometimes outperform, the vertically integrated giants. If the model layer truly commoditises, there will be more companies like Cursor, not fewer. Small teams with good product instincts, plugging into whichever model is cheapest or best for their use case, iterating faster than the big labs can. That&amp;rsquo;s the world Intel accidentally enabled for PC software. It might be the world that OpenAI and Anthropic accidentally enable for AI tools.&lt;/p&gt;
&lt;p&gt;I don&amp;rsquo;t know which way this goes. Anyone who tells you they do is selling something. But I keep coming back to the CPU analogy because the structural forces feel so similar: brilliant engineering becoming invisible infrastructure, value migrating upward, the consumer caring less and less about what&amp;rsquo;s underneath.&lt;/p&gt;
&lt;p&gt;When I open my terminal tomorrow morning, I won&amp;rsquo;t be thinking about which model is running. I&amp;rsquo;ll be thinking about whether the tool helps me get my work done. That instinct, multiplied across millions of users, is what turns a technology into a substrate. The LLM providers can see it coming. Whether they can avoid it is another question entirely.&lt;/p&gt;</content></entry><entry><title>Developing Good Engineering Taste</title><link href="https://www.jonatkinson.co.uk/blog/good-taste/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/good-taste/</id><published>2026-01-14T13:45:00Z</published><updated>2026-01-14T13:45:00Z</updated><content type="html">&lt;p&gt;I was talking with a group recently about LLM-assisted engineering. I was asked how to cross the capability gap: how to feel in control with LLMs rather than dependent on them. My reply was the predictable: develop &amp;ldquo;good taste&amp;rdquo; in engineering. Which is the standard advice now, but what does that actually mean in practice?&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s one of those things that&amp;rsquo;s obviously real; some engineers have better taste than others, and generally good taste develops over time, but it&amp;rsquo;s frustratingly hard to pin down the strategies to speed up that process. We all know it when we see it. A senior glances at a pull request and immediately spots problems. A tech lead reads a design proposal and knows it won&amp;rsquo;t scale. That&amp;rsquo;s taste and experience. But how do you get there? And how do you get there in an era where you interact with your systems via an LLM proxy?&lt;/p&gt;
&lt;h3 id="read-lots-of-code"&gt;Read Lots of Code&lt;/h3&gt;
&lt;p&gt;The best way to develop taste is exposure to quality. You can&amp;rsquo;t recognise good if you haven&amp;rsquo;t seen it.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Study well-maintained open source projects. Django itself is excellent. So is Flask. The &lt;code&gt;requests&lt;/code&gt; library and similar mature codebases are good places to begin.&lt;/li&gt;
&lt;li&gt;Read the code of experienced colleagues during reviews. Not just to approve, but to &lt;em&gt;learn&lt;/em&gt;. Ask yourself why they structured things the way they did. Role play. What did my colleague see that made them choose this design; what have they experienced in the past which informed this?&lt;/li&gt;
&lt;li&gt;Look at commit histories of mature projects to see how they evolved. The journey matters as much as the destination. Sometimes the most instructive thing is seeing what got deleted.&lt;/li&gt;
&lt;li&gt;Pay attention to why certain patterns emerge repeatedly across unrelated projects. If three different codebases independently arrived at the same abstraction, there&amp;rsquo;s probably something there.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This isn&amp;rsquo;t passive reading. You&amp;rsquo;re building a library of reference points in your head. When you&amp;rsquo;ve seen enough solutions to similar problems, you develop intuition for what works.&lt;/p&gt;
&lt;h3 id="build-the-wrong-reflex"&gt;Build the &amp;lsquo;wrong&amp;rsquo; reflex&lt;/h3&gt;
&lt;p&gt;When reviewing LLM-generated code (or your colleagues&amp;rsquo;, or your own), practice asking:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;Would I want to maintain this in six months?&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;Is this the simplest thing that could work?&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;What happens when requirements change slightly?&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;Am I creating coupling or debt I&amp;rsquo;ll regret?&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;What would I have to explain to a new team member about this?&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This becomes instinctive with practice, but you have to actively engage with it rather than just accepting whatever works. The LLM doesn&amp;rsquo;t care about your future self. You have to. Projects usually move pretty fast, and a design that seems self-evident right now may not make sense in a year, stripped of today&amp;rsquo;s context.&lt;/p&gt;
&lt;p&gt;The key word here is &lt;em&gt;feel&lt;/em&gt;. Early on, you need to think through these questions explicitly. Eventually, code that violates these principles just looks &amp;lsquo;wrong&amp;rsquo;. But you don&amp;rsquo;t get to the instinctive stage without going through the deliberate stage first.&lt;/p&gt;
&lt;h3 id="understand-the-why-behind-patterns"&gt;Understand the &lt;em&gt;Why&lt;/em&gt; Behind Patterns&lt;/h3&gt;
&lt;p&gt;LLMs are excellent at producing code that follows patterns. They&amp;rsquo;re terrible at knowing when to break them. Part of having good taste is understanding:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Why&lt;/em&gt; we use certain abstractions, not just that we do&lt;/li&gt;
&lt;li&gt;What problems specific patterns actually solve&lt;/li&gt;
&lt;li&gt;The tradeoffs involved in different approaches&lt;/li&gt;
&lt;li&gt;When the conventional wisdom doesn&amp;rsquo;t apply&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;An LLM will happily apply the repository pattern to a three-line script. It&amp;rsquo;ll add dependency injection to something that will never have more than one implementation. It&amp;rsquo;ll create an abstraction layer between two things that should just talk to each other directly. Knowing when that&amp;rsquo;s absurd is taste, and these things should be triggering that same &amp;lsquo;wrong&amp;rsquo; reflex.&lt;/p&gt;
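&lt;p&gt;A deliberately silly sketch of the point (all names hypothetical): the same three lines of work, once written directly and once wrapped in the ceremony an LLM will volunteer unprompted.&lt;/p&gt;

```python
import json

# The direct version, sized to the actual problem:
def read_timeout(path="config.json"):
    with open(path) as fh:
        return json.load(fh)["timeout"]

# The version an LLM may offer unasked: a repository class with an
# injected loading strategy, for a value only ever read one way.
class ConfigRepository:
    def __init__(self, loader=json.load):
        self._loader = loader

    def get(self, path, key):
        with open(path) as fh:
            return self._loader(fh)[key]
```

&lt;p&gt;Both return the same value; only one of them is something you&amp;rsquo;d want to maintain in six months.&lt;/p&gt;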
&lt;p&gt;This is where reading about software design (not just code) becomes valuable. Understand the problems that led to the solutions. When you know that a pattern emerged to solve a specific pain point, you can recognise when that pain point isn&amp;rsquo;t present and the pattern doesn&amp;rsquo;t apply.&lt;/p&gt;
&lt;h3 id="work-outside-the-llm-safety-net"&gt;Work Outside the LLM Safety Net&lt;/h3&gt;
&lt;p&gt;This is uncomfortable but necessary. Set challenges like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&amp;ldquo;Implement this ticket without any AI assistance.&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;Create a hypothesis about this issue using only docs and code reading. Test the hypothesis.&amp;rdquo;&lt;/li&gt;
&lt;li&gt;&amp;ldquo;Refactor this module and write a sentence to explain every single edit you make.&amp;rdquo;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The struggle builds the intuition that makes LLM output assessable. If you&amp;rsquo;ve never felt the friction of solving something hard yourself, you won&amp;rsquo;t recognise when the LLM is handing you something easy that &lt;em&gt;looks&lt;/em&gt; hard. You won&amp;rsquo;t know what is appropriate versus what is just verbose.&lt;/p&gt;
&lt;p&gt;I imagine this is the reason musicians still practice scales even though they could just play the finished piece. The fundamentals create the foundation that makes everything else possible. Engineers who skip the fundamentals can produce output, but they can&amp;rsquo;t debug it, extend it, or explain it.&lt;/p&gt;
&lt;h3 id="learn-to-spot-the-llm-tells"&gt;Learn to Spot the LLM tells&lt;/h3&gt;
&lt;p&gt;LLMs have characteristic weaknesses. With experience, you start recognising the patterns:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Overly defensive code which checks for conditions that can&amp;rsquo;t occur: handling errors that the type system prevents, or adding null checks on things that are never null&lt;/li&gt;
&lt;li&gt;Missing edge cases that require domain understanding. A good tell is that the happy path works but the real-world scenarios don&amp;rsquo;t&lt;/li&gt;
&lt;li&gt;Following outdated patterns and libraries from training data: approaches that made sense five years ago but have since been superseded&lt;/li&gt;
&lt;li&gt;Intermixing concerns: database calls next to business logic next to presentation concerns&lt;/li&gt;
&lt;/ul&gt;
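&lt;p&gt;The first tell is the easiest to show. A hypothetical sketch: defensive checks on conditions the signature already rules out, plus a quietly invented result for empty input.&lt;/p&gt;

```python
# What an LLM often produces:
def average_defensive(values: list[float]) -> float:
    if values is None:                    # caller never passes None
        raise ValueError("values must not be None")
    if not isinstance(values, list):      # the type hint already says list
        raise TypeError("values must be a list")
    if len(values) == 0:
        return 0.0                        # quietly inventing an answer
    return sum(values) / len(values)

# What a reviewer with taste pares it back to; empty input now fails
# loudly with ZeroDivisionError instead of returning a made-up 0.0:
def average(values: list[float]) -> float:
    return sum(values) / len(values)
```

&lt;p&gt;Neither version is &amp;ldquo;wrong&amp;rdquo; in isolation; the tell is the mismatch between the code&amp;rsquo;s caution and what the surrounding system can actually produce.&lt;/p&gt;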
&lt;p&gt;Learning to recognise &amp;ldquo;this feels like AI-generated code&amp;rdquo; is part of developing taste. It&amp;rsquo;s the same instinct that helps you spot code written by someone who doesn&amp;rsquo;t understand what they&amp;rsquo;re doing, because in a meaningful sense, the LLM doesn&amp;rsquo;t. It&amp;rsquo;s pattern matching without comprehension.&lt;/p&gt;
&lt;h3 id="the-meta-skill"&gt;The Meta-Skill&lt;/h3&gt;
&lt;p&gt;The core of good taste is this: you need to be able to have a &lt;em&gt;conversation&lt;/em&gt; with the code, understand the story which it tells, and know when it is lying to you.&lt;/p&gt;
&lt;p&gt;That only comes from understanding at a level deeper than syntax. It comes from having written enough bad code yourself to recognise it. It comes from maintaining systems long enough to feel the consequences of piles of shortcuts.&lt;/p&gt;
&lt;p&gt;LLMs are remarkably good at producing plausible code. Plausibility isn&amp;rsquo;t correctness, and it isn&amp;rsquo;t quality.&lt;/p&gt;
&lt;p&gt;There&amp;rsquo;s no shortcut to this. But there is a path: read widely, build deliberately, question, and embrace the productive discomfort of feeling overwhelmed. The struggle transforms knowledge into judgment.&lt;/p&gt;</content></entry><entry><title>Building Momentum with LLM Coding</title><link href="https://www.jonatkinson.co.uk/blog/llm-coding-flywheel/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/llm-coding-flywheel/</id><published>2025-12-11T12:00:00Z</published><updated>2025-12-11T12:00:00Z</updated><content type="html">&lt;p&gt;LLM-assisted coding feels slow at first. Painfully slow. You&amp;rsquo;re constantly second-guessing the output, reading every line, checking for hallucinated imports and invented APIs. This is where the fear creeps in – the worry that you&amp;rsquo;re losing your craft, becoming a glorified prompt engineer who can&amp;rsquo;t actually write code anymore. Every suggestion feels like it needs to be verified, and the verification takes longer than just writing the code yourself would have.&lt;/p&gt;
&lt;p&gt;But something shifts if you stick with it. The friction fades. Trust builds. You start to recognise the patterns in the AI&amp;rsquo;s mistakes and learn to head them off. You develop an intuition for when to accept a suggestion wholesale and when to scrutinise it. The flywheel starts turning.&lt;/p&gt;
&lt;p&gt;This momentum isn&amp;rsquo;t automatic. You have to set things up correctly. The AI is a powerful but distractible collaborator, and your job is to create an environment where it can succeed. Here&amp;rsquo;s what I&amp;rsquo;ve learned (my preferred tool is Claude Code, so my examples probably lean that way):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;One goal per session.&lt;/strong&gt; A single context window should contain a single goal. When you finish a feature, reset the session and clear the context before starting the next one. Don&amp;rsquo;t let the details of feature A bleed into the implementation of feature B. The LLM will try to be helpful by remembering everything, but that helpfulness becomes confusion when it starts applying patterns from yesterday&amp;rsquo;s authentication work to today&amp;rsquo;s reporting feature.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Periodic review sessions.&lt;/strong&gt; Every few major features – I aim for every three or four – run a dedicated review session. Not to build anything new, but to audit what you&amp;rsquo;ve built together. Ask the LLM to examine the codebase for architectural drift, DRY violations, and emerging patterns that should be formalised. These sessions are invaluable for catching the inconsistencies that accumulate when you&amp;rsquo;re moving fast. The AI is quite good at spotting its own mess if you explicitly ask it to look.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Save your plans.&lt;/strong&gt; Lean heavily on planning mode. Before any significant feature, have the LLM create a detailed implementation plan and write it to a file. These plans become institutional memory. When you start a related feature later, you can say &amp;ldquo;Remember how we planned the notification system? Follow the same patterns for the alerting feature.&amp;rdquo; The AI doesn&amp;rsquo;t actually remember, of course, but it can read the file, and that&amp;rsquo;s close enough.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;One log file.&lt;/strong&gt; Bring everything from your application – backend, frontend, workers, whatever – into a single log file. Don&amp;rsquo;t make the LLM waste time and tokens figuring out how to access logs across containers or parse browser console output. When something breaks, you want to paste a log snippet and say &amp;ldquo;what&amp;rsquo;s happening here?&amp;rdquo; not spend five minutes explaining where logs live.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;One project API.&lt;/strong&gt; Give your project a well-documented Makefile with clear targets: &lt;code&gt;make test&lt;/code&gt;, &lt;code&gt;make serve&lt;/code&gt;, &lt;code&gt;make deps&lt;/code&gt;, &lt;code&gt;make lint&lt;/code&gt;. Instruct the LLM to &lt;em&gt;only&lt;/em&gt; interact with the project through that interface. You don&amp;rsquo;t want it trying to figure out how to run your test suite from first principles every time. Provide the control panel; don&amp;rsquo;t make it rummage through wires.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Don&amp;rsquo;t optimise for token limits, ever.&lt;/strong&gt; This is a luxury opinion, I know, but if you&amp;rsquo;re trying to structure your work to stay inside a context limit, you&amp;rsquo;re fighting the wrong battle. You&amp;rsquo;re crippling your workflow and diminishing the AI&amp;rsquo;s capabilities to save a few quid. Just buy the unlimited account. The productivity gains will pay for it many times over.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Invest in your tests.&lt;/strong&gt; Parallelise them. Organise them into groups so you can run fast feedback loops on related code. Instruct the LLM to run the tests constantly – after every significant change, not just at the end of a feature. Most LLM coding tools support hooks; I use a stop hook to automatically run tests every time the AI finishes responding. The AI should never be allowed to move on without knowing whether it just broke something.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Alert on idle.&lt;/strong&gt; While you&amp;rsquo;re setting up hooks, add one that alerts you when the LLM is idle. A sound, a terminal bell, whatever works. These tools can be remarkably fast when they&amp;rsquo;re in flow, and you don&amp;rsquo;t want to be the bottleneck.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
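&lt;p&gt;The single-log-file idea can be as simple as pointing every logger at one handler. Here&amp;rsquo;s a minimal Python sketch; the filename and logger names are illustrative, and this only covers the Python side (browser console output still needs its own shipping):&lt;/p&gt;

```python
# Route every component's logs -- backend, workers, scheduled jobs -- into one
# file the LLM can be pointed at. Filename and logger names are illustrative.
import logging

def configure_logging(path="app.log"):
    handler = logging.FileHandler(path)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s %(message)s"))
    root = logging.getLogger()   # attach at the root; child loggers propagate up
    root.setLevel(logging.INFO)
    root.addHandler(handler)

configure_logging()
logging.getLogger("backend.api").info("request handled")
logging.getLogger("worker.emails").warning("retrying send")
```

&lt;p&gt;Now &amp;ldquo;what&amp;rsquo;s happening here?&amp;rdquo; is one file path away, whichever component misbehaved.&lt;/p&gt;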
&lt;p&gt;The pattern here is simple: treat the LLM like a capable but context-limited collaborator. It&amp;rsquo;s possible to build real momentum and genuine trust, just as you would with a human partner. But you have to be the responsible party in the relationship. You maintain the structure. You enforce the discipline. You set the AI up to succeed, and then it will.&lt;/p&gt;
&lt;p&gt;Thanks to my friend Vim for listening to me ramble about all this. It helped me get my thoughts together.&lt;/p&gt;</content></entry><entry><title>GitHub's spec-kit and the cloud agent endgame</title><link href="https://www.jonatkinson.co.uk/blog/github-spec-kit-ambitions/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/github-spec-kit-ambitions/</id><published>2025-10-23T14:30:00Z</published><updated>2025-10-23T14:30:00Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;ve been playing with GitHub&amp;rsquo;s &lt;a href="https://github.com/github/spec-kit"&gt;spec-kit&lt;/a&gt; this week. It&amp;rsquo;s pitched as a workflow tool for local LLM agents – think Claude Code, Cursor, Windsurf. The idea is structured development: specify requirements first, then plan, then implement. It works. The &lt;code&gt;/speckit.constitution&lt;/code&gt; command establishes project principles, &lt;code&gt;/speckit.clarify&lt;/code&gt; resolves ambiguity, &lt;code&gt;/speckit.analyze&lt;/code&gt; validates consistency. Useful stuff for keeping local agents on track.&lt;/p&gt;
&lt;p&gt;But spec-kit isn&amp;rsquo;t really about improving your local workflow. It&amp;rsquo;s a sketch of GitHub&amp;rsquo;s cloud agent infrastructure.&lt;/p&gt;
&lt;p&gt;Look at the constitution concept. For a local agent, it&amp;rsquo;s helpful context. For an autonomous agent working independently in the cloud – maybe handling issues across a backlog without constant oversight – a constitution becomes essential. It&amp;rsquo;s the strongest steering signal available. The difference between an agent that stays aligned with your project&amp;rsquo;s architectural philosophy and one that drifts into technically-correct-but-architecturally-wrong solutions.&lt;/p&gt;
&lt;p&gt;The gated process (specify, plan, clarify, analyze, tasks, implement) maps directly onto issue-based development. An agent gets assigned a ticket, works through the gates, presents checkpoints. This isn&amp;rsquo;t how you work with a local assistant watching your every move. This is how an autonomous agent would operate on GitHub infrastructure.&lt;/p&gt;
&lt;p&gt;The clarification step is particularly revealing. It&amp;rsquo;s designed to present multiple-choice options when the agent hits ambiguity. The natural implementation? Present those options directly in the GitHub issue UI. The agent encounters a decision point, generates three viable approaches, presents them as a quick poll. You click one, it continues. Far more efficient than the current pattern of verbose questions in PR comments.&lt;/p&gt;
&lt;p&gt;GitHub owns the platform where issues live, where CI/CD runs, where PRs are reviewed. They&amp;rsquo;re not competing with Claude Code or Cursor. They&amp;rsquo;re building the server-side infrastructure those tools will interface with, or be replaced by.&lt;/p&gt;
&lt;p&gt;The economics push this direction. Human engineering hours cost more while compute costs less. LLM capabilities improve while inference overhead shrinks. Eventually, projects will ask why they&amp;rsquo;re manually triaging straightforward bug reports when an agent could handle most of them with human review only at decision points.&lt;/p&gt;
&lt;p&gt;Spec-kit is GitHub showing their hand. The gated process, the constitution, the structured clarification – these aren&amp;rsquo;t just productivity features for local development. They&amp;rsquo;re scaffolding for autonomous agents living in GitHub&amp;rsquo;s cloud, operating with minimal oversight, working within boundaries established by project principles.&lt;/p&gt;
&lt;p&gt;The local agent experience – powerful, immediate, under your direct control – is likely transitional. Spec-kit is the blueprint for what&amp;rsquo;s next.&lt;/p&gt;</content></entry><entry><title>How I use Claude Code</title><link href="https://www.jonatkinson.co.uk/blog/how-i-use-claude-code/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/how-i-use-claude-code/</id><published>2025-06-25T09:06:00Z</published><updated>2025-06-25T09:06:00Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;ve been using Claude Code a lot recently, and I&amp;rsquo;ve settled on a workflow that&amp;rsquo;s been producing really good results. The short version is: use two separate Claude instances to challenge each other.&lt;/p&gt;
&lt;p&gt;&lt;img src="/_media/blog/claude-code-screenshot.png" alt="A Konsole window showing two Claude Code instances"&gt;&lt;/p&gt;
&lt;p&gt;The first step, especially when working in an existing codebase, is context gathering. With Claude Code in &amp;ldquo;plan mode,&amp;rdquo; I&amp;rsquo;ll give it a detailed overview of the project structure. Something like:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&amp;ldquo;Examine this codebase in detail. The dependencies are documented in &lt;code&gt;@pyproject.toml&lt;/code&gt;, and take note of the environment variables for configuration in &lt;code&gt;@docker-compose.yml&lt;/code&gt;. The tests can be run with &lt;code&gt;make test&lt;/code&gt;.&amp;rdquo;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If there&amp;rsquo;s any API or package documentation I&amp;rsquo;ve already downloaded, I&amp;rsquo;ll put it into the codebase in an &lt;code&gt;api_docs/&lt;/code&gt; folder, and explicitly tell the LLM to &amp;ldquo;examine the additional documentation in &lt;code&gt;@api_docs/&lt;/code&gt;.&amp;rdquo; I&amp;rsquo;ve found this particularly effective when working with APIs that publish a Swagger specification; just save a copy in &lt;code&gt;api_docs/&lt;/code&gt; and let the LLM learn.&lt;/p&gt;
&lt;p&gt;With the groundwork laid, I&amp;rsquo;ll then set out the upcoming context, still in planning mode:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&amp;ldquo;Our goal is to plan the delivery of feature X, Y, Z. Create for me a phased plan to build these features. Each phase should follow a theme, and each phase should contain an explicit test and documentation task. Ask any clarification questions as you need.&amp;rdquo;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This usually leads to a few rounds of question and answer as we refine the context. Eventually, as the questions dry up, I&amp;rsquo;ll instruct Claude to &amp;ldquo;Now, write this plan to &lt;code&gt;.implementation_plan.md&lt;/code&gt;.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;This is where the second Claude instance comes in. My first prompt to this instance is:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&amp;ldquo;Based on this codebase, and the plan in &lt;code&gt;.implementation_plan.md&lt;/code&gt;, critique this plan and help me understand where it&amp;rsquo;s underspecified, or lacking in necessary context. Tell me where you think the plan is weak, and where you would struggle.&amp;rdquo;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This burns a lot of tokens, as this second instance does a lot of reading and thinking in the background, essentially delivering the phase without writing the source files, but it&amp;rsquo;s worthwhile. I consistently get useful feedback that I can then feed back to the first instance.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ll then challenge the first Claude with the output from the second:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&amp;ldquo;Based on this critique, consider the points raised, and improve the plan.&amp;rdquo;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;After another round of this (rarely more than two exchanges), the &lt;code&gt;.implementation_plan.md&lt;/code&gt; is usually pretty robust.&lt;/p&gt;
&lt;p&gt;At this point, I&amp;rsquo;ll put the first Claude instance into &amp;ldquo;auto mode.&amp;rdquo; I usually grant broad permission to any tools Claude might need; the only tools I insist on running manually are &lt;code&gt;git commit&lt;/code&gt; and &lt;code&gt;git push&lt;/code&gt;. My prompting now goes phase-by-phase: &amp;ldquo;Consider the codebase, and begin implementing the first phase of &lt;code&gt;.implementation_plan.md&lt;/code&gt;.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;Usually, Claude needs a little prompting to actually &lt;em&gt;run&lt;/em&gt; the tests after creating them, but I&amp;rsquo;ll also run them manually to keep an eye on things. I&amp;rsquo;ve noticed Claude can be a bit overzealous with testing, generating complex mocks and factories. I don&amp;rsquo;t mind these existing, but I like to keep a handle on test suite bloat, and I&amp;rsquo;ll encourage Claude to mark tests as slow if necessary and exclude them from the default test scripts.&lt;/p&gt;
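&lt;p&gt;For the slow-test exclusion, pytest&amp;rsquo;s markers do the job; this is a sketch, and the marker name is arbitrary:&lt;/p&gt;

```ini
# pytest.ini -- register a 'slow' marker and skip it by default
[pytest]
markers =
    slow: long-running tests, excluded from the default run
addopts = -m "not slow"
```

&lt;p&gt;Individual tests get tagged with &lt;code&gt;@pytest.mark.slow&lt;/code&gt;, and &lt;code&gt;pytest -m slow&lt;/code&gt; runs just the excluded group when you want the full sweep.&lt;/p&gt;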
&lt;p&gt;At the end of each phase, I&amp;rsquo;ll ask the second Claude instance to &amp;ldquo;evaluate the progress in this codebase against the current &lt;code&gt;.implementation_plan.md&lt;/code&gt;, and provide feedback.&amp;rdquo; This often produces useful challenges for the primary Claude instance, and a few rounds of back-and-forth helps ensure the phase is genuinely &amp;ldquo;complete&amp;rdquo; and nothing has been missed.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ll then finish the phase with a test run, a commit, and the instruction to &amp;ldquo;update &lt;code&gt;.implementation_plan.md&lt;/code&gt; with the progress so far.&amp;rdquo; This is followed by &lt;code&gt;/clear&lt;/code&gt;ing the context of the first Claude. I find this &amp;ldquo;reset&amp;rdquo; helps prevent the LLM from going off-track. Then it&amp;rsquo;s back to the start for the next phase.&lt;/p&gt;
&lt;p&gt;Generally, I find Claude&amp;rsquo;s phased plans are unafraid of front-loading the difficult work. The early phases are usually the most complex, with later phases focusing on simpler tasks like UI work or optimization. So, once the early phases are complete, delivery becomes smoother and less hand-holding is needed from the second Claude instance.&lt;/p&gt;</content></entry><entry><title>notion-to-json</title><link href="https://www.jonatkinson.co.uk/blog/notion-to-json/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/notion-to-json/</id><published>2025-06-18T19:08:00Z</published><updated>2025-06-18T19:08:00Z</updated><content type="html">&lt;p&gt;I &lt;em&gt;was&lt;/em&gt; a big Notion user. For a while, it was my go-to for everything: project management, note-taking, knowledge bases, and a lightweight CRM. It&amp;rsquo;s a useful way to quickly build databases and organize data in a slightly more robust way than a spreadsheet, and the WYSIWYG editor is a bonus.&lt;/p&gt;
&lt;p&gt;But there&amp;rsquo;s a catch: data is effectively held hostage. Notion&amp;rsquo;s export formats are, to put it mildly, dreadful. They&amp;rsquo;re fine for static snapshots, but useless for anything involving programmatic access or integration with other tools. I like having control over my data, and the idea of it being locked away in a platform which is tiptoeing towards the cliff of enshittification felt increasingly uncomfortable.&lt;/p&gt;
&lt;p&gt;So I built &lt;code&gt;notion-to-json&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s a simple command-line tool that extracts JSON from Notion pages and databases, ready to be ingested elsewhere. It&amp;rsquo;s also a good agnostic way of taking a backup of your Notion workspace, independent of their export functionality. This is the kind of software that&amp;rsquo;s notionally &amp;ldquo;easy&amp;rdquo; to create, but the details and edge cases add a lot of friction. As I&amp;rsquo;ve found with recent projects, Claude Code is excellent at dealing with those kinds of problems. The whole project took a few hours from start to finish, and includes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Export of all pages and databases from your Notion workspace.&lt;/li&gt;
&lt;li&gt;Systematic traversal of nested content.&lt;/li&gt;
&lt;li&gt;Progress tracking with an okay terminal UI.&lt;/li&gt;
&lt;li&gt;Rate-limited API calls to respect Notion&amp;rsquo;s limits.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The code is available on &lt;a href="https://github.com/jonatkinson/notion-to-json"&gt;Github&lt;/a&gt;.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#75715e"&gt;# Using pip&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;pip install notion-to-json
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#75715e"&gt;# Using uv&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;uv pip install notion-to-json
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#75715e"&gt;# Using uvx (no installation needed)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;uvx notion-to-json --api-key YOUR_API_KEY
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;</content></entry><entry><title>mastodon-to-bluesky</title><link href="https://www.jonatkinson.co.uk/blog/mastodon-to-bluesky/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/mastodon-to-bluesky/</id><published>2025-06-14T14:34:00Z</published><updated>2025-06-14T14:34:00Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;ve been using Bluesky more and more recently. It&amp;rsquo;s a pleasant enough place to post, though I&amp;rsquo;m still not entirely convinced by the AT Protocol (but that&amp;rsquo;s a topic for another day).&lt;/p&gt;
&lt;p&gt;As I&amp;rsquo;ve been spending more time on Bluesky, I realised I wanted to bring over my old posts from Mastodon. Manually copying and pasting everything seemed tedious, especially with image attachments and thread structures. So I wrote a tool to automate the process.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s a command-line Python application, on &lt;a href="https://github.com/jonatkinson/mastodon-to-bluesky"&gt;Github&lt;/a&gt;. It handles fetching your Mastodon posts, converting the content, downloading media attachments, splitting long posts into threads, and posting everything to Bluesky.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve tried to make it as robust and feature-rich as possible, including things like incremental transfers (so you can run it multiple times without creating duplicates), a dry-run mode for testing, filtering options, and automatic retry logic.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve packaged it with &lt;code&gt;uv&lt;/code&gt;, which I find really convenient for managing command-line utilities. If you haven&amp;rsquo;t used &lt;code&gt;uv&lt;/code&gt; before, go and learn about that. I promise it&amp;rsquo;s more important than migrating your social posts. Install it as follows:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;uv tool install mastodon-to-bluesky
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;For one-time use, you can use &lt;code&gt;uvx&lt;/code&gt;:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;uvx mastodon-to-bluesky transfer --help
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Or, if you prefer the traditional &lt;code&gt;pip&lt;/code&gt; approach:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;pip install mastodon-to-bluesky
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;You&amp;rsquo;ll need access tokens for both Mastodon and Bluesky. The README on Github has detailed instructions for obtaining these. Once you have them, the basic transfer command looks like this:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;uvx mastodon-to-bluesky transfer &lt;span style="color:#ae81ff"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#ae81ff"&gt;&lt;/span&gt;--mastodon-instance https://your.mastodon.instance &lt;span style="color:#ae81ff"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#ae81ff"&gt;&lt;/span&gt;--mastodon-token YOUR_MASTODON_TOKEN &lt;span style="color:#ae81ff"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#ae81ff"&gt;&lt;/span&gt;--bluesky-handle you.bsky.social &lt;span style="color:#ae81ff"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#ae81ff"&gt;&lt;/span&gt;--bluesky-password YOUR_BLUESKY_APP_PASSWORD
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;You can store your credentials in a configuration file or use environment variables.&lt;/p&gt;
&lt;p&gt;There are also various options for controlling the transfer process. For example, to do a dry run without actually posting anything to Bluesky:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;uvx mastodon-to-bluesky transfer --dry-run
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Or to transfer only posts from a specific date range:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;uvx mastodon-to-bluesky transfer --since 2024-01-01 --until 2024-12-31
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The tool handles downloading media attachments (up to four images per post, due to Bluesky&amp;rsquo;s limitations) and splitting long posts into threads. It tries to preserve as much of the original formatting as possible, including mentions, hashtags, and links.&lt;/p&gt;
&lt;p&gt;One of the things I focused on was making the tool robust. It tracks transferred posts to avoid duplicates and has automatic retry logic with exponential backoff for handling rate limits and transient errors.&lt;/p&gt;
&lt;p&gt;There are some limitations, mainly due to differences between the Mastodon and Bluesky platforms. For example, Bluesky doesn&amp;rsquo;t support backdating posts, so all transferred posts will have the current timestamp. The original Mastodon timestamp is appended to each post.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m pretty happy with how the tool turned out. It scratched my own itch, and hopefully it&amp;rsquo;ll be useful for others migrating from Mastodon to Bluesky. It&amp;rsquo;s another example of how quickly you can build this kind of software with LLM assistance. The whole project took about 3 hours from start to finish.&lt;/p&gt;</content></entry><entry><title>Good practice is good for LLMs too</title><link href="https://www.jonatkinson.co.uk/blog/good-practice-is-good-for-llms-too/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/good-practice-is-good-for-llms-too/</id><published>2025-06-08T20:00:00Z</published><updated>2025-06-08T20:00:00Z</updated><content type="html">&lt;p&gt;LLMs &lt;em&gt;are&lt;/em&gt; changing how we write software. In the face of this upheaval, it&amp;rsquo;s tempting to think these tools somehow absolve us from following good engineering practice. We have fifty or so years of layered wisdom, and sticking close to this knowledge not only makes your codebase more robust, it &lt;em&gt;enhances&lt;/em&gt; the LLM&amp;rsquo;s capabilities and the developer experience.&lt;/p&gt;
&lt;p&gt;One of the most important practices, now more than ever, is comprehensive test coverage. Ideally, your test suite should be granular enough to run relevant tests &lt;em&gt;after each&lt;/em&gt; AI tool use. I&amp;rsquo;ve been using &lt;code&gt;pytest&lt;/code&gt; for years, and recently added &lt;a href="https://pypi.org/project/pytest-watch/"&gt;&lt;code&gt;pytest-watch&lt;/code&gt;&lt;/a&gt; to my workflow. It&amp;rsquo;s a small thing, but being able to see tests run automatically in the background as I&amp;rsquo;m prompting the LLM gives me immediate feedback and catches regressions early. You need automated, fast QA looking over the AI&amp;rsquo;s shoulder.&lt;/p&gt;
&lt;p&gt;Branch discipline is another essential practice. With LLMs, it&amp;rsquo;s easier than ever to generate (and just as easily discard) large chunks of code. Keeping your LLM-assisted work on a dedicated, short-lived branch with a single, clearly defined purpose makes it trivial to revert changes if things go sideways. This isolates the impact of any AI-induced chaos to the specific feature or code area you&amp;rsquo;re working on. Put up the safety net.&lt;/p&gt;
&lt;p&gt;Another thing I&amp;rsquo;ve started doing recently is keeping a detailed work diary. This is invaluable for rebuilding LLM context over multiple sessions, especially when working on larger features or complex refactoring. While your commit history provides a record of &lt;em&gt;what&lt;/em&gt; changed, the work diary captures the &lt;em&gt;why&lt;/em&gt; – the reasoning behind decisions, alternative approaches considered, and future plans. We talk a lot about building context for the LLM, but remember your own personal context is an extremely lossy system.&lt;/p&gt;
&lt;p&gt;DRY – Don&amp;rsquo;t Repeat Yourself – is a fundamental principle of good software design. It&amp;rsquo;s even more important when working with LLMs, as they have a tendency to reinvent wheels (sometimes quite badly). A well-defined application architecture, with clear separation of concerns, helps guide the LLM towards reusing existing code rather than generating duplicates. For example, in Django projects, I usually have a &lt;code&gt;models.py&lt;/code&gt;, &lt;code&gt;views.py&lt;/code&gt;, and a &lt;code&gt;services.py&lt;/code&gt; for business logic. This isn&amp;rsquo;t anything special, but I stick to the conventions, and keep everything in its right place. The more you stick to a predictable structure, the easier it is for the LLM to understand the context and generate code that fits seamlessly into your existing codebase. The agents are easily distracted, but given a clear set of instructions and a tidy workspace, they&amp;rsquo;re much more likely to produce something useful.&lt;/p&gt;
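&lt;p&gt;To make the separation concrete, here&amp;rsquo;s the shape of it with the Django plumbing stubbed out; the model and function names are illustrative, not from a real project:&lt;/p&gt;

```python
# services.py -- business logic lives here, not in views.py (names illustrative)
from dataclasses import dataclass

@dataclass
class Invoice:
    total_pence: int
    paid_pence: int

def outstanding_balance(invoices):
    """Pure business rule: the unpaid remainder across a set of invoices."""
    return sum(inv.total_pence - inv.paid_pence for inv in invoices)

# views.py stays thin: fetch, delegate to the service, render.
# def balance_view(request):
#     invoices = get_invoices_for(request.user)   # hypothetical helper
#     return render(request, "balance.html",
#                   {"balance": outstanding_balance(invoices)})
```

&lt;p&gt;Because &lt;code&gt;outstanding_balance&lt;/code&gt; touches no request and no ORM, it&amp;rsquo;s exactly the kind of function an LLM can find, reuse, and test without reinventing it.&lt;/p&gt;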
&lt;p&gt;Finally, good task ergonomics are essential. A root &lt;code&gt;Makefile&lt;/code&gt; or &lt;code&gt;justfile&lt;/code&gt; with clearly defined tasks (&lt;code&gt;setup&lt;/code&gt;, &lt;code&gt;deploy&lt;/code&gt;, &lt;code&gt;lint&lt;/code&gt;, &lt;code&gt;test&lt;/code&gt;) provides the LLM with a standardized way to interact with your project. This saves tokens and reduces the risk of the LLM trying to figure out novel (and probably incorrect) ways to perform common operations. Provide the AI with a well-labeled control panel instead of making it rummage through a box of wires.&lt;/p&gt;
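&lt;p&gt;A minimal version of that control panel might look like this; the recipes are placeholders for whatever your project actually uses:&lt;/p&gt;

```make
# Makefile -- the only interface the LLM is told to use (recipes illustrative)
.PHONY: setup deploy lint test

setup:   ## install dependencies
	uv sync

lint:    ## static checks
	ruff check .

test:    ## full test suite, with migrations and coverage handled here
	pytest -q

deploy:  ## ship it
	./scripts/deploy.sh
```

&lt;p&gt;The point isn&amp;rsquo;t the specific recipes; it&amp;rsquo;s that &lt;code&gt;make test&lt;/code&gt; means the same thing to the LLM in every session.&lt;/p&gt;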
&lt;p&gt;LLMs are powerful tools, but they&amp;rsquo;re not magic. The good practices we developed to work with humans still apply to working with a cloud AI.&lt;/p&gt;</content></entry><entry><title>Codex, Jules, and Claude Code comparison</title><link href="https://www.jonatkinson.co.uk/blog/codex-jules-claude-comparison/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/codex-jules-claude-comparison/</id><published>2025-05-23T09:28:00Z</published><updated>2025-05-23T09:28:00Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;ve tried three of the newer agentic code assistants this week: &lt;a href="https://openai.com/index/introducing-codex/"&gt;OpenAI Codex&lt;/a&gt;, &lt;a href="https://blog.google/technology/google-labs/jules/"&gt;Google Jules&lt;/a&gt;, and &lt;a href="https://www.anthropic.com/claude-code"&gt;Claude Code&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I asked each of them to operate in the same codebase: a personal app I built to track my finances. It&amp;rsquo;s a pretty straightforward Django CRUD application which tracks account balances over time, does some lightweight reporting, and can produce charts of those balances, that kind of thing. There is very little Javascript, and it uses Bootstrap for the UI.&lt;/p&gt;
&lt;p&gt;I gave each agent the same task:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;quot;&amp;quot;&amp;quot;
I'm not happy with the name `accounts` as the application which deals with financial accounts. The name `accounts` is overloaded in web application development, and it might clash with other models in the system. I've decided it should not be used in this context.
Instead, this should be renamed to `money`, and all the side effects should be dealt with:
- Updating URL patterns throughout.
- Updating the `accounts:` namespace in all {% url %} tags.
- Updating import statements throughout.
- Ensuring the tests continue to pass.
At the same time, let's squash the migrations.
&amp;quot;&amp;quot;&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is a pretty simple request, one which I&amp;rsquo;d expect a junior software engineer to be capable of performing, though it would rely on some knowledge of Django patterns, understanding migrations, and how to find and update references to a package.&lt;/p&gt;
&lt;h2 id="openai-codex"&gt;OpenAI Codex&lt;/h2&gt;
&lt;p&gt;I set up the repository access via the web UI, gave it the task, and it immediately began working.&lt;/p&gt;
&lt;p&gt;The agent&amp;rsquo;s security model allows it internet access during the container bootstrap, but not after. The repository is pretty simple in layout, and the deployment is Herokuish, so I&amp;rsquo;d expect the AI to pick up from the &lt;code&gt;requirements.txt&lt;/code&gt; file that this was a Python project, and know to install the dependencies. Instead, the container started, and no dependencies were installed.&lt;/p&gt;
&lt;p&gt;Looking through the thinking log of the agent, it noticed very early that the dependencies were not installed, but decided to continue anyway. It then performed the task, but its approach was quite superficial (replacing the string &lt;code&gt;accounts&lt;/code&gt; with &lt;code&gt;money&lt;/code&gt; wherever it was found).&lt;/p&gt;
&lt;p&gt;The AI then tried to run the tests, but not in the way that is documented in the &lt;code&gt;README.md&lt;/code&gt;, or present in the Makefile (the correct way to run the tests is just &lt;code&gt;make test&lt;/code&gt;, which will deal with migrating, running the tests, calculating coverage etc.). Instead it tried to use the standard Django &lt;code&gt;./manage.py test&lt;/code&gt; approach. Regardless of approach, this failed, as no dependencies were installed.&lt;/p&gt;
&lt;p&gt;Rather than stop, the AI then decided that the tests not running wasn&amp;rsquo;t a problem, and YOLO&amp;rsquo;d up a PR with the changes. The total runtime was about 20 minutes.&lt;/p&gt;
&lt;p&gt;I pulled the PR locally, and ran the tests, which failed catastrophically due to a Python syntax error which was introduced in a migration file.&lt;/p&gt;
&lt;p&gt;This was a very poor impression, and it reinforces my opinion that OpenAI has lost their first-mover advantage. The product felt rushed and incapable.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Grade: F&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="google-jules"&gt;Google Jules&lt;/h2&gt;
&lt;p&gt;Jules appears to be very much a beta product, and it was clearly struggling with high load when I ran the test.&lt;/p&gt;
&lt;p&gt;There were parts of the UI which I really liked, specifically a diff viewer for each file the agent touched. I think this is a really useful visible indicator of progress (or being stuck in a rabbit hole). This is more akin to the in-IDE agents like Cline, and I think it&amp;rsquo;s a useful middle ground.&lt;/p&gt;
&lt;p&gt;Container setup was successful, and the AI found and installed my requirements. It appears that Google trust their Gemini model to access remote resources at any time, so there weren&amp;rsquo;t the same limitations as Codex displayed. Interestingly, it seems that their base container has &lt;code&gt;uv&lt;/code&gt; installed already, and it prioritised using &lt;code&gt;uv&lt;/code&gt; over &lt;code&gt;pip&lt;/code&gt;, though this also missed the documented steps in the &lt;code&gt;README.md&lt;/code&gt;, which would have pointed it to running &lt;code&gt;make setup&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;It followed the pattern I&amp;rsquo;ve found particularly useful when using agentic assistants: plan, then act. It presented the plan for my approval; the plan was reasonable in its approach, and I requested it begin.&lt;/p&gt;
&lt;p&gt;For some reason (again, maybe high load), when the agent was about halfway through the planned steps, it stopped and asked for my approval to continue. I was happy to give it, and the agent then continued to the second-to-last step in the plan (running the tests) and stopped, telling me the code was &amp;ldquo;Ready for Review&amp;rdquo; and prompting me to hit a button to create the PR in my Github repository. The last step in the plan, to squash the migrations, was never attempted.&lt;/p&gt;
&lt;p&gt;The tests passed, the refactoring was successful, but the whole process took around an hour. I&amp;rsquo;m interested in seeing how this progresses as I think Gemini is the standout model across most tasks at the moment. Jules is a very early product and it shows, but so does the potential. I wonder how many IDE features Google plan on implementing in the browser.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Grade: C&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="claude-code"&gt;Claude Code&lt;/h2&gt;
&lt;p&gt;I&amp;rsquo;ll preface this by saying that I&amp;rsquo;ve used Claude Code via the CLI extensively since it launched (I&amp;rsquo;m at least $2000 deep), so I had some preconceptions here. However, the day this launched was also the launch of Opus 4, a significant upgrade to the underlying model.&lt;/p&gt;
&lt;p&gt;Setup was the most awkward of the three products, relying on the &lt;code&gt;claude&lt;/code&gt; CLI tool, and &lt;code&gt;gh&lt;/code&gt; for Github operations. The process of authenticating with Github was similar to any other OAuth application, but Claude Code operates entirely as a Github-based tool, and doesn&amp;rsquo;t have any UI outside of interactions in Issues and PRs.&lt;/p&gt;
&lt;p&gt;I copied and pasted my task description into a new ticket, and then told it to begin work with &amp;ldquo;&lt;code&gt;@claude&lt;/code&gt; work on this ticket&amp;rdquo;. This had two effects: the agent updated the ticket with a detailed plan, and a long-running Github Action was triggered to perform the work.&lt;/p&gt;
&lt;p&gt;I like being able to see the agent&amp;rsquo;s plan, though there was no opportunity to amend the plan or provide further guidance. I also liked being able to connect to the Github Action log and see in detail the tools the agent was using, and the detailed, very verbose output from the session.&lt;/p&gt;
&lt;p&gt;Claude Code completed the task after about 12 minutes, having run a limited subset of the tests (just for the newly refactored &lt;code&gt;money&lt;/code&gt; application), and completing the &amp;lsquo;squash migrations&amp;rsquo; task which the other agents failed to address.&lt;/p&gt;
&lt;p&gt;I was prompted via an Issue update to create a PR, which triggered the project-wide tests to run, one of which failed; one of the namespaced URLs in the application&amp;rsquo;s main navigation hadn&amp;rsquo;t been updated. I expect the agent would have caught this failure had it run the entire test suite rather than a subset of the tests. I&amp;rsquo;ve seen this behaviour in &lt;code&gt;claude-cli&lt;/code&gt;, too. I expect this might be a token-efficiency strategy, as verbose test failure output can produce a LOT of tokens very quickly, but I&amp;rsquo;d expect that at the end of a task the agent would test the entire codebase.&lt;/p&gt;
&lt;p&gt;A comment on the newly-created PR &amp;ldquo;&lt;code&gt;@claude&lt;/code&gt; can you fix the failing test&amp;rdquo; resulted in the bug being fixed in a further 4 minutes.&lt;/p&gt;
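&lt;p&gt;This class of miss is cheap to guard against. A hypothetical check (not something in my codebase) that flags templates still referencing the old namespace:&lt;/p&gt;

```python
import re
from pathlib import Path

# Hypothetical guard: scan the template tree for {% url %} tags that
# still point at the old `accounts:` namespace after the rename.
STALE = re.compile(r"{%\s*url\s+['\"]accounts:")

def find_stale_namespaces(template_dir: str) -> list[str]:
    """Return template paths that still reference the old namespace."""
    return [
        str(path)
        for path in sorted(Path(template_dir).rglob("*.html"))
        if STALE.search(path.read_text())
    ]
```

&lt;p&gt;Wired into &lt;code&gt;make test&lt;/code&gt;, a check like this would have surfaced the stale navigation link before the PR was opened.&lt;/p&gt;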
&lt;p&gt;&lt;strong&gt;Grade: B&lt;/strong&gt;&lt;/p&gt;</content></entry><entry><title>Four Bubbles I've Lived Through, With One Exception</title><link href="https://www.jonatkinson.co.uk/blog/four-bubbles/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/four-bubbles/</id><published>2025-04-29T15:30:00Z</published><updated>2025-04-29T15:30:00Z</updated><content type="html">&lt;p&gt;Working in technology means riding wave after wave of hype. Some waves reshape the landscape; others crash on the shore, leaving behind damp sand and confused investors.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been around long enough to have lived through many hype cycles, inflated with breathless promise, often to be deflated just as quickly.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been thinking about the recent past, and the bubbles built more on speculation and wishful thinking than substance. And then there&amp;rsquo;s the current AI bubble, which feels fundamentally different.&lt;/p&gt;
&lt;h3 id="metaverse"&gt;Metaverse&lt;/h3&gt;
&lt;p&gt;Remember when the Metaverse was the &lt;em&gt;only&lt;/em&gt; thing anyone talked about for a few months? The hype was so intense it renamed Facebook (researching this article, I had to clumsily &lt;a href="https://kagi.com/search?q=meta+metaverse"&gt;search for the phrase &amp;ldquo;Meta Metaverse&amp;rdquo;&lt;/a&gt;). Tech pundits told us all to prepare for the imminent future living and working in virtual worlds.&lt;/p&gt;
&lt;p&gt;Looking back, it feels like a product born less from genuine demand and more from specific pandemic anxieties. We were physically isolated, craving connection, and the idea of a persistent virtual space held a certain appeal. But the reality was underwhelming: low-fidelity graphics, awkward interfaces, and a profound lack of compelling reasons to &lt;em&gt;be&lt;/em&gt; there beyond novelty. A solution looking for a problem, heavily pushed by a tiny handful of corporates rather than any organic interest. The hype evaporated almost as quickly as it appeared.&lt;/p&gt;
&lt;h3 id="nfts"&gt;NFTs&lt;/h3&gt;
&lt;p&gt;Non-Fungible Tokens. The promise was digital ownership, verifiable scarcity for digital assets on the blockchain. What it became, almost instantly, was a frenzy of pure speculation detached from underlying value. We saw JPEGs selling for millions, driven by celebrity endorsements and a pervasive &amp;ldquo;get rich quick&amp;rdquo; mentality that felt eerily reminiscent of historical bubbles like the Dutch tulip mania.&lt;/p&gt;
&lt;p&gt;People weren&amp;rsquo;t buying digital art they loved (digital art can be loved!); they were buying assets someone else would possibly pay even more for later. When the music stopped, the market crashed spectacularly, leaving many holding effectively worthless tokens. It was a traditional speculative bubble, plain and simple. FOMO.&lt;/p&gt;
&lt;h3 id="crypto-the-coin-launch-grift"&gt;Crypto (The Coin Launch Grift)&lt;/h3&gt;
&lt;p&gt;Blockchain technology itself might have long-term potential. The &lt;em&gt;bubble&lt;/em&gt; we lived through was the relentless churn of new coin launches, Initial Coin Offerings (ICOs), and meme coins. This wasn&amp;rsquo;t about revolutionizing finance; it was a grift, deeply entwined with social media influencer culture.&lt;/p&gt;
&lt;p&gt;Celebrities, tech personalities, podcasters and opportunistic politicians shilled their dubious tokens to their followers, promising astronomical returns. Twitter and Discord became echo chambers amplifying the hype. Fortunes were made (mostly by the insiders and early promoters) and lost (mostly by everyone else). It felt less like technological innovation and more like a digitally-enabled pump-and-dump scheme playing out on a global scale. It&amp;rsquo;s the future textbook example of global legislation being unable to keep up with innovation.&lt;/p&gt;
&lt;h3 id="ai"&gt;AI&lt;/h3&gt;
&lt;p&gt;And now we have AI. The hype is undeniable – venture capital is pouring in, valuations are soaring, and the coverage is intense. On the surface, it has all of the hallmarks of a bubble.&lt;/p&gt;
&lt;p&gt;But the crucial difference is utility.&lt;/p&gt;
&lt;p&gt;If the stock market hype around AI vanished tomorrow, the VC funding dried up, and the headlines stopped – I would still open my Claude Code window the day after and get to work. I use AI tools every single day to help me write code, draft documents, debug problems, organise information, and perform evaluative work. Tangible value, right now.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m not alone. Software engineers, writers, designers, teachers, counsellors, and consultants are quietly integrating AI into their daily work. They&amp;rsquo;re using it to brainstorm, create images, research, and write boilerplate code. Unlike the Metaverse, people &lt;em&gt;want&lt;/em&gt; to use these tools. Unlike NFTs, the value isn&amp;rsquo;t based purely on speculation. Unlike the crypto coin frenzy, there are real-world applications happening &lt;em&gt;now&lt;/em&gt;. AI does drive productivity.&lt;/p&gt;
&lt;p&gt;Certainly, AI company valuations are inflated. There will be a market correction and AI companies will fail. But the underlying technology isn&amp;rsquo;t going away, because it &lt;em&gt;works&lt;/em&gt;. It solves problems and provides value in a way the previous bubbles simply didn&amp;rsquo;t.&lt;/p&gt;
&lt;p&gt;The true test of a technology isn&amp;rsquo;t the high water-mark of the hype; it&amp;rsquo;s whether people keep using it when the water calms. This wave feels different because, beneath the froth, there&amp;rsquo;s a genuinely useful tool.&lt;/p&gt;</content></entry><entry><title>Trust but Verify: Sensible Ways to Use LLMs in Production</title><link href="https://www.jonatkinson.co.uk/blog/trust-but-verify/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/trust-but-verify/</id><published>2025-04-26T11:00:00Z</published><updated>2025-04-26T11:00:00Z</updated><content type="html">&lt;p&gt;Like many engineers right now, I&amp;rsquo;m exploring how LLMs can accelerate workflows. The potential is undeniable: generating code snippets, drafting content, summarizing complex information, powering chatbots. We are on the cusp of a significant shift in how we build software and create digital experiences. The temptation is strong to integrate these powerful tools directly into our production systems and content pipelines.&lt;/p&gt;
&lt;p&gt;But alongside the capabilities come significant risks. LLMs hallucinate. They make factual errors. They perpetuate biases present in their training data, and can be manipulated through prompt injection. Just blindly plugging AI output into user-facing applications or critical systems seems like asking for trouble.&lt;/p&gt;
&lt;p&gt;This brings me to a phrase I&amp;rsquo;ve been thinking about a lot recently: &amp;ldquo;trust but verify.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;I first heard &amp;ldquo;trust but verify&amp;rdquo; a long time ago, but I was surprised to learn its origin. For years, I&amp;rsquo;d mentally filed it away as wisdom from the cryptography or infosec communities – domains where skepticism is a virtue. It turns out the phrase was popularized by Ronald Reagan during nuclear disarmament talks with the Soviet Union in the 1980s. It represented a pragmatic approach: proceed with the agreement (trust), but ensure mechanisms are in place to check compliance (verify).&lt;/p&gt;
&lt;p&gt;That same pragmatism feels incredibly relevant to deploying LLMs today. We &lt;em&gt;want&lt;/em&gt; to leverage their power – that&amp;rsquo;s the &amp;ldquo;trust&amp;rdquo; part. We see the potential for massive efficiency gains and novel features. Letting an AI draft code, generate product descriptions, or provide first-line customer support can free up human time for higher-level tasks.&lt;/p&gt;
&lt;p&gt;However, the &amp;ldquo;verify&amp;rdquo; part is absolutely crucial and non-negotiable for anything going into production. Raw, unmediated LLM output is rarely trustworthy enough on its own. Hallucinations aren&amp;rsquo;t just funny quirks when they manifest as incorrect information presented to someone who relies on you.&lt;/p&gt;
&lt;p&gt;How do we actually &amp;ldquo;verify&amp;rdquo; AI-generated output in a production context? It&amp;rsquo;s a layered approach:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Human Review:&lt;/strong&gt; For critical outputs (e.g., code affecting core logic, sensitive communications, definitive factual statements), there is no substitute for a knowledgeable human reviewing the AI&amp;rsquo;s work before it goes live.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Automated Checks:&lt;/strong&gt; Just as we write unit tests for human-written code, we need tests for AI-generated code. For generated content or data, validation rules, fact-checking against known databases, or checks for PII or toxicity can be automated.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Monitoring and Feedback Loops:&lt;/strong&gt; Log your LLM interactions and outputs. Track performance. Implement mechanisms for users to flag incorrect or unhelpful responses (a simple thumbs up/down is more than enough). Crucially, have a system in place to &lt;em&gt;act&lt;/em&gt; on this feedback quickly.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Sandboxing and Staging:&lt;/strong&gt; Test LLM-powered features thoroughly in non-production environments before rolling them out. Understand their failure modes in a safe space. This should be obvious but you&amp;rsquo;d be surprised how many people are manipulating prompts in their live environment and hoping for the best.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Clear Boundaries and Fallbacks:&lt;/strong&gt; Define where AI is used and where it isn&amp;rsquo;t. Everyone in your organisation needs to understand these boundaries. CEO to product to engineer. Have robust fallback mechanisms for when the AI fails or provides low-confidence answers. Don&amp;rsquo;t let the AI operate outside its known competence.&lt;/li&gt;
&lt;/ul&gt;
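&lt;p&gt;As a sketch of how the automated checks and fallback layers fit together, here is a toy verification gate. The patterns are illustrative only; a real system would use proper PII detection, fact-checking, and toxicity classifiers:&lt;/p&gt;

```python
import re

FALLBACK = "Sorry, I can't answer that right now; a human will follow up."

def verify_output(text: str, max_len: int = 2000) -> list[str]:
    """Toy automated checks run before LLM output reaches a user.

    These patterns are illustrative stand-ins, not production checks.
    """
    problems = []
    if len(text) > max_len:
        problems.append("output too long")
    if re.search(r"\b\d{16}\b", text):        # crude card-number check
        problems.append("possible card number")
    if re.search(r"[\w.+-]+@[\w-]+\.[\w.-]+", text):  # crude PII check
        problems.append("contains an email address")
    return problems

def respond(llm_text: str) -> str:
    """Serve verified output, or fall back to a safe canned reply."""
    return llm_text if not verify_output(llm_text) else FALLBACK
```

&lt;p&gt;The useful property is that every response passes through a single gate, so new checks, logging, and feedback handling can all be added in one place.&lt;/p&gt;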
&lt;p&gt;Implementing verification adds friction. It requires building additional systems, dedicating human time to oversight, and accepting that deployment might be slower than just letting the AI run wild. But it is also the only responsible way forward. Harness the benefits and mitigate the risks.&lt;/p&gt;
&lt;p&gt;&amp;ldquo;Trust but verify&amp;rdquo; isn&amp;rsquo;t about stifling innovation; it&amp;rsquo;s about enabling and guiding it &lt;em&gt;sustainably&lt;/em&gt;. As LLMs continue to evolve, perhaps the nature of verification will change, but the principle will likely remain.&lt;/p&gt;</content></entry><entry><title>The MCP Servers I Actually Use</title><link href="https://www.jonatkinson.co.uk/blog/mcp-servers-im-using/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/mcp-servers-im-using/</id><published>2025-04-22T08:00:00Z</published><updated>2025-04-22T08:00:00Z</updated><content type="html">&lt;p&gt;The Model Context Protocol (MCP) ecosystem is changing quickly. While experimentation is interesting, I&amp;rsquo;ve found myself settling on a core set of MCP servers that I use day-to-day when writing software with AI. These are the ones that have stuck:&lt;/p&gt;
&lt;h2 id="github-mcp-server"&gt;Github MCP server&lt;/h2&gt;
&lt;p&gt;I primarily use the &lt;a href="https://github.com/github/github-mcp-server"&gt;Github MCP server&lt;/a&gt; because it&amp;rsquo;s the canonical implementation from Github itself. While you can run it in various ways, I prefer the statically compiled Go binary for simplicity. Building it is straightforward:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;$ git clone https://github.com/github/github-mcp-server.git
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;$ cd github-mcp-server/cmd/github-mcp-server
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;$ go build .
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;$ cp ./github-mcp-server ~/.local/bin &lt;span style="color:#75715e"&gt;# or whatever is in your $PATH&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;It&amp;rsquo;s great to be able to sign off an AI coding session with a natural language instruction like &amp;ldquo;commit the changes with message &amp;lsquo;feat: Implement new widget&amp;rsquo;, push the branch, open a PR against main, and write a brief PR description of the changes.&amp;rdquo; This smooths the transition from AI interaction back to standard Git workflow.&lt;/p&gt;
&lt;p&gt;I also find it really useful for manipulating branches in more complex ways directly from a text description, saving time digging through Git commands for less common operations.&lt;/p&gt;
&lt;h2 id="fetch"&gt;Fetch&lt;/h2&gt;
&lt;p&gt;Often, I want to do my own research or find specific documentation before bringing the content into an AI session for analysis or summarization. The &lt;a href="https://github.com/modelcontextprotocol/servers/tree/main/src/fetch"&gt;Fetch server&lt;/a&gt; is perfect for this.&lt;/p&gt;
&lt;p&gt;It effectively replaces a manual step I used to perform with &lt;code&gt;curl&lt;/code&gt; piped into the excellent &lt;a href="https://trafilatura.readthedocs.io/en/latest/"&gt;trafilatura&lt;/a&gt; Python library to extract main content and convert it to Markdown. Fetch handles fetching the URL and transforming the relevant content into clean Markdown automatically, ready for the AI.&lt;/p&gt;
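&lt;p&gt;For illustration, here is a crude stdlib-only approximation of that manual step. trafilatura does this far better; the sketch just shows the shape of the fetch-and-clean workflow that Fetch replaces:&lt;/p&gt;

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crude main-content extractor: keep text, drop chrome and scripts.

    A rough stand-in for what trafilatura does far better, shown only
    to illustrate the manual step that the Fetch server automates.
    """
    SKIP = {"script", "style", "nav", "header", "footer"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep visible text only, skipping anything inside SKIP tags.
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())

def extract_text(html: str) -> str:
    """Return the visible text of an HTML document, one chunk per line."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)
```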
&lt;p&gt;As a small bonus, I also appreciate that Fetch is honest about its identity in its &lt;code&gt;User-Agent&lt;/code&gt; string.&lt;/p&gt;
&lt;h2 id="docker"&gt;Docker&lt;/h2&gt;
&lt;p&gt;My use case for the &lt;a href="https://github.com/QuantGeekDev/docker-mcp"&gt;Docker MCP server&lt;/a&gt; is quite specific: discovery and use of docker-compose.yml files within a project.&lt;/p&gt;
&lt;p&gt;While &lt;a href="https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview"&gt;Claude Code&lt;/a&gt; is generally good at figuring out how to start a project, it&amp;rsquo;s not perfect, especially in the presence of a &lt;code&gt;Makefile&lt;/code&gt;, which Claude seems to treat as the only possible entrypoint into a running solution. This server&amp;rsquo;s ability to discover and use &lt;code&gt;docker-compose.yml&lt;/code&gt; files is a useful fallback for quickly getting services running, especially in unfamiliar codebases, directly from the AI interface. It bridges that gap when the AI&amp;rsquo;s initial setup instructions might be slightly off or incomplete.&lt;/p&gt;</content></entry><entry><title>The future of digital agencies</title><link href="https://www.jonatkinson.co.uk/blog/the-future-of-agencies/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/the-future-of-agencies/</id><published>2025-04-16T12:00:00Z</published><updated>2025-04-16T12:00:00Z</updated><content type="html">&lt;p&gt;This is keeping me up at night. The model we built is dying. It&amp;rsquo;s time to build the next one.&lt;/p&gt;
&lt;p&gt;Let’s start with a sketch of an agency project.&lt;/p&gt;
&lt;p&gt;A standard mid-sized website project costs maybe £80,000. That covers the costs of the sales process, a strategist, a project manager coordinating everything, a couple of designers iterating on UX and visuals, a few software engineers building the front and back end, maybe a copywriter. It covers the subscription costs of all the tools which those people use. It pays for lots of meetings, lots of messages, many revisions based on subjective feedback loops. It takes 12 weeks, maybe more. The cost is mostly people&amp;rsquo;s time – skilled, expensive, human time.&lt;/p&gt;
&lt;p&gt;Let&amp;rsquo;s sketch a different workflow. A single operator (maybe the owner, maybe a senior lead) defines the core goals and target audience. They feed this into several competing AI strategy assistants for market and competitive analysis. The AI reaches into the &amp;lsquo;old internet&amp;rsquo; for context, and searches to verify information. They use a combination of AI-assisted search and AI-assisted layout engines to create dozens of concepts and initial design directions in an hour. It costs nothing to reject a bad concept. No-one gets upset and no-one gets tired. They use AI wireframing tools. A refining prompt goes to an AI copywriter for core messaging options, another to a design AI for multiple layout variations based on the mood board. Code generation tools vacuum in the agency&amp;rsquo;s past repositories for code patterns and preferences, and begin putting together the code.&lt;/p&gt;
&lt;p&gt;Suddenly, instead of weeks of back-and-forth for initial concepts, you have a spread of viable options in a day. The cost? A fraction of the human-hours, replaced by 10,000 API calls. Let&amp;rsquo;s say, generously, we spend £500 in compute for the initial creative explosion.&lt;/p&gt;
&lt;p&gt;Which process delivers more value for the client&amp;rsquo;s pound at the concept stage? Which one will feel faster, and more responsive?&lt;/p&gt;
&lt;p&gt;Right now, the traditional team still delivers a more polished, strategically coherent final product. AI output needs heavy curation, integration relies on AI not coding itself into fractal loops, and oversight is still paramount. The AI will create beautiful but unusable code, and vapid copy. That £80,000 project involves nuance, client hand-holding, and integration complexities that AI struggles with today.&lt;/p&gt;
&lt;p&gt;The creatives, the software engineers, and the strategists convince themselves their craft is safe, sticking to that argument as a trillion dollars coalesces into the form of a giant silicon golem, ready to smash aside their passion, education, and professional utility. The battle, of the bespoke agency process versus 2025&amp;rsquo;s fragmented AI tools – is the worst the AI side will ever be, and the best the purely human-driven model will ever look in comparison. Everything is trending against the old way.&lt;/p&gt;
&lt;p&gt;Models will improve. Today&amp;rsquo;s AI is a clumsy intern drowning in complex tasks. Tomorrow&amp;rsquo;s will be smarter, faster, more integrated. The gap between AI output and human-quality output will shrink dramatically, and rapidly.&lt;/p&gt;
&lt;p&gt;Costs will plummet. Computing will get cheaper, models will become more efficient. The £500 for that concept explosion might become £50.&lt;/p&gt;
&lt;p&gt;The economic pressure will be immense.&lt;/p&gt;
&lt;p&gt;Workflow tools will emerge. Right now, we duct-tape AI tools together, copying and pasting Markdown and stuffing our aims into wide RAG contexts. Soon, platforms will exist specifically to orchestrate AI agents: seamless generation, review, and deployment pipelines. This isn&amp;rsquo;t about people discussing tickets in JIRA. This is an automated assembly line control panel.&lt;/p&gt;
&lt;p&gt;Client expectations will shift. Why pay £10,000 for a brand exploration over two weeks when an AI can generate 50 viable options for £100 in an afternoon, curated by one skilled designer? Clients will demand more, faster, cheaper. The perceived value of laborious human hours on tasks AI can approximate will crater. To experts, the quality of the output will be perceptibly worse. The clients will not care.&lt;/p&gt;
&lt;p&gt;The nature of &amp;ldquo;good&amp;rdquo; will change. Just like &amp;ldquo;good code&amp;rdquo; might shift from human-readable to machine-optimized, &amp;ldquo;good design&amp;rdquo; or &amp;ldquo;good copy&amp;rdquo; might become less about singular human genius and more about the best outcome selected from a vast possibility space, refined by human taste.&lt;/p&gt;
&lt;p&gt;Is this keeping you up at night yet?&lt;/p&gt;
&lt;p&gt;Forget the single project comparison. Scale it up.&lt;/p&gt;
&lt;p&gt;Let&amp;rsquo;s consider &amp;ldquo;The Traditional Agency&amp;rdquo; in 2025. A 30-person agency. Senior management. Sales people. Account managers, project managers, strategists, designers, software engineers, SEO specialists. Payroll. Benefits and office space and software licenses. £1.5 million a year in costs before profit. They juggle multiple clients. Projects move at human speed, bottlenecked by communication, revisions, and individual capacity. The only effective route to scaling this model is hiring more people, adding coordination complexity.&lt;/p&gt;
&lt;p&gt;What is an AI-Driven Agency in 2028? A core team of 8 people: Senior Strategists/Client Leads, AI Workflow Architects, Prompt Engineers, Senior Curators (Design/Copy/Code), Integration Specialists. They oversee a suite of AI tools and automated workflows. They handle 3x the project volume of the traditional agency, with half of the costs. The system generates code, design, copy, variants, and KPI reports 24/7. The humans focus on high-level strategy, client relationships, final quality control, and managing the machines. Overhead? Maybe £750,000, with significantly higher potential throughput.&lt;/p&gt;
&lt;p&gt;Which model wins? I don&amp;rsquo;t mean awards, or genuinely novel and interesting output. I mean which one delivers the speed and cost clients will inevitably demand?&lt;/p&gt;
&lt;p&gt;This isn&amp;rsquo;t about AI becoming a perfect replica of a human designer or software engineer. It&amp;rsquo;s about industrialization. The first steam engines were too expensive and underpowered, the first cars were death traps (when they ran at all). Don&amp;rsquo;t lull yourself into the dream of a static world.&lt;/p&gt;
&lt;p&gt;We see the wave coming. As an agency owner, my choice isn&amp;rsquo;t if this change happens, but how to navigate it. Do I cling to the &amp;ldquo;craftsman&amp;rdquo; model I grew up with, the bespoke process, the value tied solely to the hours my talented (and expensive) team pours in? Or do I look for a way to surf it?&lt;/p&gt;
&lt;p&gt;This means fundamentally rethinking the agency structure. It means accepting that much of the production work – the writing, the designing, the coding, the campaign setup – will be automated or assisted to a degree that makes the old model economically unviable for many clients. The value shifts upwards, towards strategy, towards curation, towards understanding how to wield these incredibly powerful new tools to achieve client goals more effectively than ever before.&lt;/p&gt;
&lt;p&gt;It means saying goodbye to parts of the agency model I loved. The bustling endeavour as the team gather and focus on an upcoming deadline is replaced by a silent, slowly ascending chart on dashboards monitoring AI output. Roles will change. Many will disappear. We&amp;rsquo;ll need fewer junior &amp;ldquo;doers&amp;rdquo; and more senior &amp;ldquo;orchestrators&amp;rdquo; and &amp;ldquo;refiners&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;The critics will say AI can&amp;rsquo;t replicate human creativity, strategic nuance, or the client relationship. They&amp;rsquo;re right, it can&amp;rsquo;t – not entirely. But industrialization doesn&amp;rsquo;t win by perfectly replicating the artisan. It wins by being relentlessly cheaper and faster for the bulk of the work. We mass-produce clothes, furniture, cars and cultural experiences. The bespoke artisan still exists, but they serve a niche market. The volume, the market dominance, belongs to the factory.&lt;/p&gt;
&lt;p&gt;Software engineering is facing its factory moment right now. Digital agencies are right there with them. The agencies that thrive won&amp;rsquo;t be the ones clinging to the past, waiting to feel the crush. They&amp;rsquo;ll be the ones building the assembly lines, figuring out how to harness this new automated workforce, and delivering unprecedented value by mastering the means of production. They&amp;rsquo;ll optimize for OPE – Output Per Employee – leveraging AI to create more, faster, and ultimately, cheaper than the traditional model can sustain.&lt;/p&gt;
&lt;p&gt;The craftsman agency isn&amp;rsquo;t dead yet. But the factory is being built next door, and the rent is about to go up. Time to figure out how to run the machines.&lt;/p&gt;</content></entry><entry><title>Preparing Django documentation for LLMs</title><link href="https://www.jonatkinson.co.uk/blog/preparing-django-documentation-for-llm/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/preparing-django-documentation-for-llm/</id><published>2025-02-21T10:00:00Z</published><updated>2025-02-21T10:00:00Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;ve been using &lt;a href="https://github.com/simonw/files-to-prompt"&gt;files-to-prompt&lt;/a&gt; recently to help query large codebases with LLMs, but I realised this would work just as well for documentation sets. Here&amp;rsquo;s how to generate a complete documentation set for Django; I&amp;rsquo;ve been uploading this to a ChatGPT Project as a supporting document, and getting really good results.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ git clone git@github.com:django/django
$ files-to-prompt django/docs/ \
--ignore "*.css" \
--ignore "*.svg" \
--ignore "*.graffle" \
--ignore "*.pdf" \
--ignore "*.png" \
--ignore "*.eot" \
--ignore "*.ttf" \
--ignore "*.woff" \
--ignore "*.woff2" &amp;gt; /tmp/django-docs.txt
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>Setting up Matrix Synapse on Fedora</title><link href="https://www.jonatkinson.co.uk/blog/matrix-synapse-setup-on-fedora/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/matrix-synapse-setup-on-fedora/</id><published>2025-02-18T18:00:00Z</published><updated>2025-02-18T18:00:00Z</updated><content type="html">&lt;p&gt;I wanted to install a Matrix service for my family. While &lt;code&gt;matrix-synapse&lt;/code&gt; is packaged in Fedora, there&amp;rsquo;s scant documentation regarding what to do next after installing the package.&lt;/p&gt;
&lt;p&gt;First, you need to generate a configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ generate_config --server-name your.domain.here --config-dir=/etc/synapse/ --report-stats=no &amp;gt; /tmp/homeserver.yaml
$ sudo mv /tmp/homeserver.yaml /etc/synapse
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There&amp;rsquo;s a little extra configuration in &lt;code&gt;/etc/sysconfig/synapse&lt;/code&gt;, which allows you to set the memory limit for the Synapse server. Read and amend it as necessary.&lt;/p&gt;
&lt;p&gt;Next, generate some keys:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ generate_signing_key &amp;gt; /tmp/your.domain.here.signing.key
$ sudo mv /tmp/your.domain.here.signing.key /etc/synapse
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Review the contents of &lt;code&gt;/etc/synapse/homeserver.yaml&lt;/code&gt;. You&amp;rsquo;ll note the references to &lt;code&gt;DATADIR&lt;/code&gt;, which is &lt;code&gt;/var/lib/synapse/DATADIR/&lt;/code&gt; by default.&lt;/p&gt;
&lt;p&gt;Now, it&amp;rsquo;s time to start the service:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo systemctl start synapse.service
$ sudo systemctl status synapse.service
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should now have Synapse running happily, but it&amp;rsquo;ll only be listening for traffic on the loopback network adaptor.&lt;/p&gt;
&lt;p&gt;Follow the &lt;a href="https://element-hq.github.io/synapse/latest/reverse_proxy.html"&gt;instructions&lt;/a&gt; for setting up a reverse-proxy to the service. I used &lt;code&gt;nginx&lt;/code&gt;, and you probably should too. For me, this also involved setting up an SSL certificate with &lt;code&gt;certbot&lt;/code&gt;, but your specific installation might not be the same as mine. Any generic &amp;lsquo;SSL with Nginx&amp;rsquo; instructions should work correctly.&lt;/p&gt;
&lt;p&gt;Finally, it&amp;rsquo;s time to set up a user on your instance. We need to generate a shared secret:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1
some-random-output-here
&lt;/code&gt;&lt;/pre&gt;
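&lt;p&gt;If you have Python to hand, the standard library&amp;rsquo;s &lt;code&gt;secrets&lt;/code&gt; module does the same job; this is just a sketch of an alternative to the pipeline above, not part of the Synapse instructions:&lt;/p&gt;

```python
# Generate a 32-character alphanumeric shared secret, equivalent to
# the tr/fold/head pipeline above, using the stdlib secrets module.
import secrets
import string

alphabet = string.ascii_letters + string.digits
secret = "".join(secrets.choice(alphabet) for _ in range(32))
print(secret)
```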
&lt;p&gt;Add that to your &lt;code&gt;/etc/synapse/homeserver.yaml&lt;/code&gt; file as:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;registration_shared_secret: some-random-output-here
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, create the new user:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ register_new_matrix_user -c /etc/synapse/homeserver.yaml http://localhost:8008
New user localpart [admin]:  
Password:  
Confirm password:  
Make admin [no]: yes
Sending registration request...
Success!
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should now be able to login to your Matrix server using the client of your choice.&lt;/p&gt;</content></entry><entry><title>A Cline prompt for codebase analysis and feature extraction</title><link href="https://www.jonatkinson.co.uk/blog/cline-prompt-code-analysis/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/cline-prompt-code-analysis/</id><published>2024-11-20T08:28:00Z</published><updated>2024-11-20T08:28:00Z</updated><content type="html">&lt;p&gt;From time-to-time, I&amp;rsquo;m asked to evaluate a large codebase. This is usually from a client who has a project in distress, or who needs to quickly move their infrastructure provider and re-host the application elsewhere.&lt;/p&gt;
&lt;p&gt;A lot of the time, this relies on the intuition of a good engineer who is familiar with the technology being used. There&amp;rsquo;s a lot of code browsing, writing quick notes, and silent video calls as we try to find the context for the decisions in the codebase.&lt;/p&gt;
&lt;p&gt;I wanted to see if there was a better way to do this, and as I&amp;rsquo;ve been using &lt;a href="https://github.com/cline/cline"&gt;Cline&lt;/a&gt; in VSCode, I tried to write a comprehensive system prompt:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# Cline Custom Instructions
## Role and Expertise
You are Cline, an expert-level full-stack developer and UI/UX designer. Your expertise covers:
- Rapid, efficient interrogation of existing codebases, using your judgement and intuition as a guide.
- The full spectrum from MVP creation to complex system architecture knowledge.
Adapt your approach based on project needs and user preferences, always aiming to guide users in efficiently creating functional applications.
## Critical Documentation and Workflow
### Documentation Management
Maintain an &amp;#39;analysis/&amp;#39; folder in the root directory (create if it doesn&amp;#39;t exist) with the following essential files:
1. technology.md
- Purpose: Analysis of key technology choices and architecture decisions
- Format: Use headers (##) for main technology categories, bullet points for specifics. Datestamp these headings based on when you first encountered the technology in the codebase.
- Content: Detail chosen technologies, frameworks, and architectural decisions with brief justifications
2. summary.md
- Purpose: An overview of project features.
- Include sections on:
- Key user-facing features.
- Interactions with external APIs, providers, etc.
- Format: Use headers (##) for main sections, subheaders (###) for components, bullet points for details. Datestamp the headings based on when you first encountered the feature in the codebase.
- Content: Provide a high-level overview of the project features, highlighting main components and their relationships
### Adaptive Workflow
- At the beginning of every task when instructed to &amp;#34;follow your custom instructions&amp;#34;, read the essential documents in this order. You can find them in the `analysis/` folder.
1. `technology.md`
2. `summary.md`
- If you try to read or edit another document before reading these, something BAD will happen.
- If conflicting information is found in the codebase, ask the user for clarification
## User Interaction and Adaptive Behavior
- Ask follow-up questions when critical information is missing for task completion
- Adjust approach based on project complexity and user preferences
- Strive for efficient task completion with minimal back-and-forth
- Present key technical decisions concisely, allowing for user feedback
## Code Editing and File Operations
- Refer to the main Cline system prompt for specific file handling instructions
Remember, your primary goal is always to provide a detailed analysis of the codebase, and describe all the features therein. The documents you produce are key to a successful outcome.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This has generated good results so far, provided the code can fit into the context window. With &lt;code&gt;claude-3-5-sonnet-20241022&lt;/code&gt;, that is possible for most mid-size applications (at least for &lt;em&gt;my&lt;/em&gt; definition of mid-size).&lt;/p&gt;</content></entry><entry><title>My Claude Artifact Prompt</title><link href="https://www.jonatkinson.co.uk/blog/my-claude-artifact-prompt/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/my-claude-artifact-prompt/</id><published>2024-11-12T18:10:00Z</published><updated>2024-11-12T18:10:00Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;ve found myself writing a lot of Claude artifact prompts recently. I don&amp;rsquo;t remember if I started writing these little personal software widgets before or after I read &lt;a href="https://simonwillison.net/2024/Oct/21/claude-artifacts/"&gt;Simon Willison&amp;rsquo;s post on the same topic&lt;/a&gt;, but they are a very fast and straightforward way to feather the personal software nest.&lt;/p&gt;
&lt;p&gt;I have about ten artifacts which I&amp;rsquo;m regularly using now, and I&amp;rsquo;m adding more each week. In terms of writing software to scratch a personal itch, it is very nice to be able to &lt;em&gt;act&lt;/em&gt; on ideas, rather than putting them in the someday/maybe pile due to the high setup cost of building personal software by hand.&lt;/p&gt;
&lt;p&gt;For others who are also doing this, I wanted to share my prompt:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I use the tagged prompt style. I don&amp;rsquo;t remember where I picked up this style, but it seems to work well.&lt;/li&gt;
&lt;li&gt;I include my personal preferences for CSS and the UI. Change these as you see fit.&lt;/li&gt;
&lt;li&gt;I&amp;rsquo;ve included a list of &amp;lsquo;banned&amp;rsquo; technologies. By default, Claude seems to love to produce React components, and each time I banned one framework it reached for another; it produced anything but plain JS until the list grew this long.&lt;/li&gt;
&lt;li&gt;I stuff all my state into a cookie. Yeah, I know this is limiting, but it works for up to 4KB of data. And that&amp;rsquo;s a LOT of data!&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;&amp;lt;task&amp;gt;
First, I describe in a single sentence what the goal of the software is.
&amp;lt;/task&amp;gt;
&amp;lt;technology&amp;gt;
Plain Javascript
Bootstrap 5.3
Bootstrap Icons
&amp;lt;/technology&amp;gt;
&amp;lt;banned_technology&amp;gt;
React
Django
Angular
Vue
JQuery
&amp;lt;/banned_technology&amp;gt;
&amp;lt;state&amp;gt;
- All state should be stored in a single object.
- After each manipulation of the application state, it should be stored in a cookie as a JSON blob.
- State should be loaded from the JSON blob in the cookie on page initialization.
- If the size of the state object approaches more than 4KB when serialised as JSON, alert the user.
&amp;lt;/state&amp;gt;
&amp;lt;ui&amp;gt;
- Produce a responsive design for mobile and desktop.
- Always include dark/light theme support, with a toggle button in the footer of the application.
- Use the CSS features in Bootstrap for consistent look. It may be necessary to write custom CSS, but avoid this if possible.
&amp;lt;/ui&amp;gt;
&amp;lt;data_structure&amp;gt;
If I have particular instructions about how I want to shape the data, I include them here.
&amp;lt;/data_structure&amp;gt;
&amp;lt;functions&amp;gt;
Here I describe in more detail the functions of the software.
&amp;lt;/functions&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;I am also working on a simple (and easy to describe) data persistence idea, so maybe more on this soon.&lt;/p&gt;</content></entry><entry><title>Fingerprint Authentication on a Lenovo Z13 and Fedora 40</title><link href="https://www.jonatkinson.co.uk/blog/fingerprint-authentication-lenovo-z1-fedora-40/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/fingerprint-authentication-lenovo-z1-fedora-40/</id><published>2024-10-27T09:01:00Z</published><updated>2024-10-27T09:01:00Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;ve previously used fingerprint authentication on Thinkpads in the very distant past. There have been a lot of changes to the software stack since then, so I had to rediscover how to make the fingerprint reader work. I&amp;rsquo;m using Fedora 40. I don&amp;rsquo;t actually think any of these instructions are Z13 or Thinkpad specific, but YMMV.&lt;/p&gt;
&lt;p&gt;First, install the &lt;a href="https://packages.fedoraproject.org/pkgs/fprintd/fprintd/"&gt;fprintd&lt;/a&gt; package and start the service:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo dnf install fprintd
$ sudo systemctl enable fprintd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Enrol some fingerprints:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ fprintd-enroll
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You&amp;rsquo;ll need to touch the fingerprint reader multiple times (sometimes many times!) to build the model of your fingerprint. If you want to register a specific finger, you can do so with the &lt;code&gt;-f&lt;/code&gt; flag. From the &lt;a href="https://linux.die.net/man/1/fprintd"&gt;man page&lt;/a&gt;, the supported finger identifiers are:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;For fprintd-enroll, the finger to enroll. Possible values are:
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;left-thumb, left-index-finger, left-middle-finger, left-ring-finger, left-little-finger,
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;right-thumb, right-index-finger, right-middle-finger, right-ring-finger, right-little-finger.
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Once you have registered your fingerprints, you can check with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ fprintd-verify
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, you need to tell the system to include fingerprint login in the available authentication methods. This was new territory for me, as I wasn&amp;rsquo;t familiar with &lt;a href="https://github.com/authselect/authselect"&gt;authselect&lt;/a&gt;, which became the &lt;a href="https://fedoraproject.org/wiki/Changes/Make_Authselect_Mandatory"&gt;default management interface for authentication&lt;/a&gt; back in Fedora 28.&lt;/p&gt;
&lt;p&gt;To check that fingerprint authentication is available:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ authselect list-features local
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should see the &lt;code&gt;with-fingerprint&lt;/code&gt; capability. Now, we add this to our profile, and then apply the new configuration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo authselect enable-feature with-fingerprint
$ sudo authselect apply-changes
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, reboot the system to apply the new authentication profile. There is probably a way to do this without rebooting, but I didn&amp;rsquo;t research that far. Once rebooted, open a terminal, and invoke something with &lt;code&gt;sudo&lt;/code&gt; to test authentication:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo ls
Place your finger on the fingerprint reader
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should now be able to authenticate.&lt;/p&gt;</content></entry><entry><title>New in Python 3.13: SQLite support in dbm</title><link href="https://www.jonatkinson.co.uk/blog/python-313-dbm-sqlite/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/python-313-dbm-sqlite/</id><published>2024-10-07T16:30:00Z</published><updated>2024-10-07T16:30:00Z</updated><content type="html">&lt;p&gt;&lt;a href="https://docs.python.org/3.13/whatsnew/3.13.html"&gt;Python 3.13 was released today&lt;/a&gt;, and it would be easy to overlook this:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&amp;ldquo;dbm.sqlite3: An SQLite backend for dbm. (Contributed by Raymond Hettinger and Erlend E. Aasland in gh-100414.)&amp;rdquo;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I don&amp;rsquo;t think that many Python engineers know about the &lt;a href="https://docs.python.org/3.13/library/dbm.html#module-dbm"&gt;&lt;code&gt;dbm&lt;/code&gt;&lt;/a&gt; module, which is a shame because it is a sharp tool. It&amp;rsquo;s a tool for reading and writing to string databases, which has been around in some form since I first &lt;a href="https://docs.python.org/release/1.5/lib/node111.html#SECTION009600000000000000000"&gt;came across it in Python 1.5&lt;/a&gt; (I am indeed &lt;em&gt;that&lt;/em&gt; old).&lt;/p&gt;
&lt;p&gt;&lt;code&gt;dbm&lt;/code&gt; is easy to overlook, maybe due to its fairly pedestrian subtitle: &amp;ldquo;Interfaces to Unix databases&amp;rdquo;, or maybe because in previous versions of Python, it only supported some unglamorous databases: &lt;a href="https://en.wikipedia.org/wiki/DBM_%28computing%29"&gt;NDBM&lt;/a&gt;, its cousin GDBM, and more recently the Berkeley DB. These are robust, reliable databases, but they have rather fallen out of favour. Readers who are familiar with those databases will know that such systems were the progenitor of something with a much sexier label: the NoSQL Database.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;dbm&lt;/code&gt; provides a very simple API, which will feel familiar.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;import dbm
# Open database, creating it if necessary.
with dbm.open(&amp;#34;mydatabase&amp;#34;, &amp;#34;c&amp;#34;) as db:
# Record some values
db[&amp;#34;mykey&amp;#34;] = &amp;#34;My Value&amp;#34;
db[&amp;#34;anotherkey&amp;#34;] = &amp;#34;another value&amp;#34;
# Access your database like a dictionary
print(f&amp;#34;mykey: {db.get(&amp;#34;mykey&amp;#34;)}&amp;#34;)
print(db.get(&amp;#34;doesntexist&amp;#34;, &amp;#34;Default value&amp;#34;))
# Beware, however, storing a non-string value will raise an exception
db[&amp;#34;thiswillfail&amp;#34;] = 4
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The API is clean, tiny, and memorable.&lt;/p&gt;
&lt;p&gt;Some will see the inability to store anything but strings as a deal-breaker (which is understandable), but I find that using &lt;a href="https://docs.python.org/3.13/library/pickle.html#module-pickle"&gt;&lt;code&gt;pickle&lt;/code&gt;&lt;/a&gt; or &lt;a href="https://docs.python.org/3.13/library/shelve.html#module-shelve"&gt;&lt;code&gt;shelve&lt;/code&gt;&lt;/a&gt; is enough for these scenarios. The occasional cast to &lt;code&gt;int&lt;/code&gt; is also perfectly bearable.&lt;/p&gt;
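&lt;p&gt;As a quick illustration of that workaround (my sketch, not from the module docs; the &lt;code&gt;myshelf&lt;/code&gt; filename is just illustrative), &lt;code&gt;shelve&lt;/code&gt; wraps a &lt;code&gt;dbm&lt;/code&gt; database and pickles values transparently, so non-string values just work:&lt;/p&gt;

```python
# shelve pickles values on the way in and out, so any picklable
# object can be stored against a string key.
import shelve

with shelve.open("myshelf") as db:
    db["count"] = 4              # an int, no cast needed
    db["config"] = {"debug": True, "retries": 3}

with shelve.open("myshelf") as db:
    print(db["count"] + 1)       # prints 5
    print(db["config"]["retries"])
```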
&lt;p&gt;Now, with the release of Python 3.13, we can very easily use SQLite3 as our backend:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;with dbm.sqlite3.open(&amp;#34;database.sqlite&amp;#34;, &amp;#34;c&amp;#34;) as db:
db[&amp;#34;mydata&amp;#34;] = &amp;#34;Lorem ipsum...&amp;#34;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;When I write code, I like to use the simplest possible datastore. In practice, I usually start with a JSON file in combination with &lt;code&gt;json.loads&lt;/code&gt; and &lt;code&gt;json.dumps&lt;/code&gt;, which is &amp;lsquo;enough&amp;rsquo; for a lot of problems.&lt;/p&gt;
&lt;p&gt;Having the option to now use SQLite via &lt;code&gt;dbm&lt;/code&gt; is a nice further step; once created, the SQLite database can be manipulated with standard SQLite tools which are available just about everywhere. This is very useful as you mature from &amp;ldquo;I just need somewhere to store my data state&amp;rdquo; to &amp;ldquo;I have more sophisticated needs&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;The &lt;a href="https://docs.python.org/3.13/library/dbm.html#module-dbm"&gt;dbm documentation&lt;/a&gt; and the &lt;a href="https://docs.python.org/3.13/library/dbm.html#module-dbm.sqlite3"&gt;dbm.sqlite3 documentation&lt;/a&gt; is a good next stop to learn the detail.&lt;/p&gt;</content></entry><entry><title>Django Carbon Measurement Notes</title><link href="https://www.jonatkinson.co.uk/blog/django-carbon-measurement-notes/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/django-carbon-measurement-notes/</id><published>2024-10-07T10:15:00Z</published><updated>2024-10-07T10:15:00Z</updated><content type="html">&lt;p&gt;At work recently, I&amp;rsquo;ve been working on some code to measure the bandwidth required to display webpages, and from there calculate the approximate energy usage and carbon output. Here are the links from my notes:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://2024.djangocon.eu/pretalx/djangocon-europe-2024/talk/QP39VQ/"&gt;Greening Digital with Django&lt;/a&gt;, a talk by &lt;a href="https://blog.chrisadams.me.uk/"&gt;Chris Adams&lt;/a&gt;, whose blog is also full of interesting ideas.&lt;/li&gt;
&lt;/ul&gt;
&lt;iframe style="margin-left: 40px; margin-bottom: 15px;" width="560" height="315" src="https://www.youtube.com/embed/ok_xqkBJXP8?si=NFtSTpPiKa4sY4z_" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen&gt;&lt;/iframe&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://2024.djangocon.us/talks/faster-leaner-greener-10x-lower-website-carbon-emissions/"&gt;Faster, leaner, greener: 10x lower website carbon emissions&lt;/a&gt;, a talk by &lt;a href="https://thib.me/"&gt;Thibaud Colas&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/mlco2/codecarbon"&gt;codecarbon&lt;/a&gt;, a Python package to quantify carbon emissions by instrumenting code (&lt;a href="https://mlco2.github.io/codecarbon/"&gt;docs here&lt;/a&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.thegreenwebfoundation.org/co2-js/"&gt;co2.js&lt;/a&gt;, a Javascript package.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.mozilla.org/en-US/firefox/104.0/releasenotes/"&gt;Since Firefox 104&lt;/a&gt;, Firefox has supported measuring power consumption in the browser&amp;rsquo;s profiling tools. See also &lt;a href="https://fershad.com/writing/co2e-estimates-in-firefox-profiler/"&gt;this interesting post from Fershad Irani&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://github.com/GeopJr/CO2"&gt;CO2&lt;/a&gt;, a Github action to measure CO2 usage as part of a CI run. Some small amount of commentary on this; this tool starts up a Google Lighthouse instance in the CI run, which is very heavy, CPU-intensive software which runs a complete desktop web browser in order to instrument the bytes transferred. This seems very inefficient to me, and trying to find an acceptable alternative is the problem which I&amp;rsquo;ve been working on.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://greensoftware.foundation/"&gt;The Green Software Foundation&lt;/a&gt;, and see also the very useful &lt;a href="https://maturity-matrix.greensoftware.foundation/gsmm/"&gt;Green Software Maturity Matrix&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href="https://www.thegreenwebfoundation.org/"&gt;The Green Web Foundation&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;</content></entry><entry><title>Using ruff for everything in VSCode</title><link href="https://www.jonatkinson.co.uk/blog/vscode-ruff-everything/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/vscode-ruff-everything/</id><published>2024-10-03T10:00:00Z</published><updated>2024-10-03T10:00:00Z</updated><content type="html">&lt;p&gt;&lt;a href="https://marketplace.visualstudio.com/items?itemName=charliermarsh.ruff"&gt;Ruff&amp;rsquo;s VSCode extension&lt;/a&gt; is nice, and can handle the jobs previously done by &lt;code&gt;black&lt;/code&gt; and &lt;code&gt;isort&lt;/code&gt;. To turn on everything, and to run this on every file save (this is probably what you want, given it only takes a few milliseconds), use this configuration in &lt;code&gt;.vscode/settings.json&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;{
&amp;#34;[python]&amp;#34;: {
&amp;#34;editor.codeActionsOnSave&amp;#34;: {
&amp;#34;source.organizeImports&amp;#34;: &amp;#34;explicit&amp;#34;,
&amp;#34;source.fixAll&amp;#34;: &amp;#34;explicit&amp;#34;,
},
&amp;#34;editor.defaultFormatter&amp;#34;: &amp;#34;charliermarsh.ruff&amp;#34;,
&amp;#34;editor.formatOnSave&amp;#34;: true
},
}
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>Automatic, persistent autoreload in iPython</title><link href="https://www.jonatkinson.co.uk/blog/automatic-code-reload-ipython/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/automatic-code-reload-ipython/</id><published>2024-10-02T10:00:00Z</published><updated>2024-10-02T10:00:00Z</updated><content type="html">&lt;p&gt;I like that &lt;a href="https://ipython.org/"&gt;iPython&lt;/a&gt; can be configured to automatically reload modules. It&amp;rsquo;s useful to be able to keep REPL context through development, without needing to constantly tear down and setup your context:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;gt;&amp;gt;&amp;gt; %load_ext autoreload
&amp;gt;&amp;gt;&amp;gt; %autoreload 2
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;However, turning on the &lt;code&gt;autoreload&lt;/code&gt; extension each time you spawn a REPL is &lt;em&gt;also&lt;/em&gt; annoying; thankfully, this can be automated too:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create a file in your project root, &lt;code&gt;.ipython_data_local/ipython_config.py&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Add the following code:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;c.InteractiveShellApp.extensions = [&amp;#34;autoreload&amp;#34;]
c.InteractiveShellApp.exec_lines = [&amp;#34;%autoreload 2&amp;#34;]
&lt;/code&gt;&lt;/pre&gt;&lt;ul&gt;
&lt;li&gt;Then, optionally mount that folder in your &lt;code&gt;docker-compose.yml&lt;/code&gt;, which is useful if you run your REPL in a container:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;volumes:
- .ipython_data_local:/root/.ipython/profile_default
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Using &lt;code&gt;.ipython_data_local/ipython_config.py&lt;/code&gt; looks quite strange, but it&amp;rsquo;s based on iPython&amp;rsquo;s own &lt;a href="https://ipython.readthedocs.io/en/stable/config/intro.html#setting-configurable-options"&gt;profile system&lt;/a&gt;. It would be nice if this could be configured in &lt;code&gt;pyproject.toml&lt;/code&gt; like everything else.&lt;/p&gt;
&lt;p&gt;I ended up with two files. The first, &lt;code&gt;application.py&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;import os
from django.db import models
from django.http import HttpResponse
from django.urls import path, re_path
from django.core.wsgi import get_wsgi_application
from django.contrib.auth.decorators import login_required
from django.contrib.auth import views as auth_views
# Assuming settings.py is in the same directory
os.environ.setdefault(&amp;#39;DJANGO_SETTINGS_MODULE&amp;#39;, &amp;#39;settings&amp;#39;)
# Define the Django model
class MyModel(models.Model):
name = models.CharField(max_length=100)
# Define the view functions
def index(request):
return HttpResponse(&amp;#34;Hello, world!&amp;#34;)
@login_required
def authenticated_index(request):
return HttpResponse(&amp;#34;Hello, authenticated user!&amp;#34;)
# Define the URL patterns
urlpatterns = [
path(&amp;#39;&amp;#39;, index, name=&amp;#39;index&amp;#39;),
path(&amp;#39;authenticated/&amp;#39;, authenticated_index, name=&amp;#39;authenticated_index&amp;#39;),
path(&amp;#39;login/&amp;#39;, auth_views.LoginView.as_view(), name=&amp;#39;login&amp;#39;),
]
# Create the WSGI application
application = get_wsgi_application()
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;And the second, &lt;code&gt;settings.py&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
SECRET_KEY = &amp;#39;your_secret_key&amp;#39;
DATABASES = {
&amp;#39;default&amp;#39;: {
&amp;#39;ENGINE&amp;#39;: &amp;#39;django.db.backends.sqlite3&amp;#39;,
&amp;#39;NAME&amp;#39;: os.path.join(BASE_DIR, &amp;#39;db.sqlite3&amp;#39;),
}
}
INSTALLED_APPS = [
&amp;#39;django.contrib.auth&amp;#39;,
&amp;#39;django.contrib.contenttypes&amp;#39;,
]
ROOT_URLCONF = &amp;#39;application&amp;#39; # Point to the application file.
TEMPLATES = [
{
&amp;#39;BACKEND&amp;#39;: &amp;#39;django.template.backends.django.DjangoTemplates&amp;#39;,
&amp;#39;DIRS&amp;#39;: [&amp;#39;templates&amp;#39;],
&amp;#39;APP_DIRS&amp;#39;: True,
&amp;#39;OPTIONS&amp;#39;: {
&amp;#39;context_processors&amp;#39;: [
&amp;#39;django.template.context_processors.debug&amp;#39;,
&amp;#39;django.template.context_processors.request&amp;#39;,
&amp;#39;django.contrib.auth.context_processors.auth&amp;#39;,
&amp;#39;django.contrib.messages.context_processors.messages&amp;#39;,
],
},
},
]
ALLOWED_HOSTS = [] # Update for deployment
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Edit 2024-09-26: I just found out about &lt;a href="https://github.com/radiac/nanodjango"&gt;nanodjango&lt;/a&gt;, which is a much more polished experience. Check it out.&lt;/p&gt;</content></entry><entry><title>Logging git commits with a git hook</title><link href="https://www.jonatkinson.co.uk/blog/log-git-commits-hook/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/log-git-commits-hook/</id><published>2024-09-09T00:00:00Z</published><updated>2024-09-09T00:00:00Z</updated><content type="html">&lt;p&gt;Similar to my previous post (more than a year ago!) about &lt;a href="https://www.jonatkinson.co.uk/blog/conventional-commits-hook.html"&gt;using Git hooks to enforce conventional commits&lt;/a&gt;, I wanted to start logging all of my commits. This is part of a larger project to keep a better work diary, but I wanted to start with something simple and unobtrusive, and this seemed like a good place to start.&lt;/p&gt;
&lt;p&gt;We need to create a global hooks folder, and enable it.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ mkdir $HOME/.git_hooks
$ git config --global core.hooksPath ~/.git_hooks
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Using the &lt;code&gt;post-commit&lt;/code&gt; hook is the most sensible place for this, so we&amp;rsquo;ll create the hook and make it executable:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ touch $HOME/.git_hooks/post-commit
$ chmod +x $HOME/.git_hooks/post-commit
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The contents of that script:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/bash
# Get the commit message.
commit_message=$(git log -1 --pretty=%B)
# Get the repository name using git rev-parse.
repo_name=$(git rev-parse --show-toplevel | xargs basename)
# Get the current datestamp.
datestamp=$(date +&amp;quot;%Y-%m-%d %H:%M:%S&amp;quot;)
# Log the information to the .git_log file in the home directory.
echo &amp;quot;$datestamp | $repo_name | $commit_message&amp;quot; &amp;gt;&amp;gt; &amp;quot;$HOME/.git_log&amp;quot;
# Some positive output.
echo &amp;quot;Logged commit.&amp;quot;
exit 0
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The code is also available from &lt;a href="https://gist.github.com/jonatkinson/71db4672fc74c98a66cf7163fc38706b"&gt;this gist&lt;/a&gt;. Here&amp;rsquo;s the copy/paste installer:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ curl https://gist.githubusercontent.com/jonatkinson/71db4672fc74c98a66cf7163fc38706b/raw/2dd8b0a583a1fddb203af43f09edeb3193f1d8fb/commit-log &amp;gt; $HOME/.git_hooks/post-commit &amp;amp;&amp;amp; chmod +x $HOME/.git_hooks/post-commit
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>Conventional Commits Git Hook</title><link href="https://www.jonatkinson.co.uk/blog/conventional-commits-hook/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/conventional-commits-hook/</id><published>2023-05-03T00:00:00Z</published><updated>2023-05-03T00:00:00Z</updated><content type="html">&lt;p&gt;I was looking for a way to enforce the &lt;a href="https://www.conventionalcommits.org/en/v1.0.0/"&gt;Conventional Commits&lt;/a&gt; pattern for my own commit messages. A lot of the solutions I found were wildly overcomplicated; I don&amp;rsquo;t need to install a commit linting framework with a hundred JS dependencies, I just want to apply a regex to a commit message.&lt;/p&gt;
&lt;p&gt;This was fairly straightforward. First, create a global hooks folder, and enable it.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ mkdir $HOME/.git_hooks
$ git config --global core.hooksPath ~/.git_hooks
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, create the script file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ touch $HOME/.git_hooks/commit-msg
$ chmod +x $HOME/.git_hooks/commit-msg
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The contents of that script:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/bash
commit_msg_file=$1
commit_msg=$(cat $commit_msg_file)
# Define the pattern for conventional commit messages
pattern=&amp;quot;^(build|chore|ci|docs|feat|fix|perf|refactor|revert|style|test)(\(.+\))?: .+&amp;quot;
if [[ ! ${commit_msg} =~ $pattern ]]; then
echo &amp;quot;ERROR: The commit message does not follow the conventional commit format.&amp;quot;
echo &amp;quot;Valid types: build, chore, ci, docs, feat, fix, perf, refactor, revert, style, test&amp;quot;
echo &amp;quot;Format: type(scope): subject&amp;quot;
exit 1
fi
# If commit message matches the pattern, continue with the commit
exit 0
&lt;/code&gt;&lt;/pre&gt;
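&lt;p&gt;If you want to sanity-check the regex before wiring up the hook, you can run it against a few sample messages in a throwaway script (illustrative only; the messages here are my own examples):&lt;/p&gt;

```shell
#!/bin/bash
# Quick, illustrative check of the conventional-commit regex (not part of the hook).
pattern="^(build|chore|ci|docs|feat|fix|perf|refactor|revert|style|test)(\(.+\))?: .+"

for msg in "feat(api): add pagination" "fix: handle empty payload" "updated stuff"; do
    if [[ $msg =~ $pattern ]]; then
        echo "OK:   $msg"
    else
        echo "FAIL: $msg"
    fi
done
```

&lt;p&gt;The first two messages match the pattern; the last one fails, exactly as the hook would reject it.&lt;/p&gt;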
&lt;p&gt;The code is also available from &lt;a href="https://gist.github.com/jonatkinson/9243328de14e17e1e4200b9a1ca97d72"&gt;this gist&lt;/a&gt;, so if you prefer, install it like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ curl https://gist.githubusercontent.com/jonatkinson/9243328de14e17e1e4200b9a1ca97d72/raw/3e342f140f5df8d6ec273253a071c431a90d6a89/commit-msg &amp;gt; $HOME/.git_hooks/commit-msg
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>How I Do 1-2-1 Meetings</title><link href="https://www.jonatkinson.co.uk/blog/how-i-do-121-meetings/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/how-i-do-121-meetings/</id><published>2023-02-28T00:00:00Z</published><updated>2023-02-28T00:00:00Z</updated><content type="html">&lt;p&gt;It appears that everyone on the internet completely hates 1-2-1 meetings.&lt;/p&gt;
&lt;p&gt;I don&amp;rsquo;t particularly want to add fuel to that viewpoint; I&amp;rsquo;m sure there are a huge number of people who completely hate 1-2-1 meetings &lt;em&gt;with &lt;em&gt;their&lt;/em&gt; manager&lt;/em&gt;. I understand that.&lt;/p&gt;
&lt;p&gt;However, done correctly, the 1-2-1 meeting is one of the greatest tools we have in the workplace; building lines of communication up, down, and across your organisation is an extremely high value activity. There&amp;rsquo;s nothing more effective to build trust.&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s the problem though; all the literature focuses on the &amp;lsquo;perfect&amp;rsquo; 1-2-1 meeting. Apparently you can find success if you just set the right agenda, just ask the right questions, or just set the right cadence. That&amp;rsquo;s all bullshit. Effective 1-2-1 meetings are a highly adaptive process. You need to change your tactics depending on the individual, their state of mind, their current challenges AND your own status. You need to be ready to adjust.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve ended up with four &amp;lsquo;modes&amp;rsquo; of running a 1-2-1, which I pick from and blend together based on the early minutes of the meeting:&lt;/p&gt;
&lt;h3 id="the-no-agenda-meeting"&gt;The &amp;lsquo;No Agenda&amp;rsquo; Meeting.&lt;/h3&gt;
&lt;p&gt;Sometimes, you don&amp;rsquo;t need an agenda. You can let the conversation and the other person lead entirely.&lt;/p&gt;
&lt;h3 id="the-big-goals-meeting"&gt;The &amp;lsquo;Big Goals&amp;rsquo; Meeting.&lt;/h3&gt;
&lt;p&gt;Sometimes, you need to take focus completely away from the current context. This is an ideal candidate for the very end of a day, or a Friday, or, better still, a long lunch. The most important change you can make for this kind of 1-2-1 meeting is to put as much distance as possible between the general &amp;lsquo;work&amp;rsquo; context and the conversation itself.&lt;/p&gt;
&lt;p&gt;Big questions can be asked in this meeting. This can be a great way of showing vulnerability and building trust;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Is the organisation meeting your expectations?&lt;/li&gt;
&lt;li&gt;Will it continue to do so in a year, two years, four years?&lt;/li&gt;
&lt;li&gt;What is the biggest non-work goal in your life, and is there anything I can do to help you get there?&lt;/li&gt;
&lt;li&gt;What are you aiming to achieve this year in your personal life? What is blocking you, and how can I or the organisation help you?&lt;/li&gt;
&lt;li&gt;Do you know what my big goals are?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Clearly, not all 1-2-1 meetings can focus on these big goals. Big goals take time to achieve, so the cadence needs to be wide. That also doesn&amp;rsquo;t mean &amp;lsquo;only speak annually about these things&amp;rsquo;. You&amp;rsquo;re a better manager than that.&lt;/p&gt;
&lt;h3 id="the-work-catchup-meeting"&gt;The &amp;lsquo;Work Catchup&amp;rsquo; Meeting.&lt;/h3&gt;
&lt;p&gt;This could easily be renamed &amp;ldquo;The &amp;lsquo;Things I&amp;rsquo;m Too Embarrassed To Raise At The Standup&amp;rsquo; Meeting&amp;rdquo;; it won&amp;rsquo;t shock you to know this is the most common meeting type, certainly for engineering managers. It&amp;rsquo;s also probably the least valuable time you&amp;rsquo;ll spend with your 1-2-1 partner. However, there is some useful information you can gain from these meetings:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What is consistently working well for our team?&lt;/li&gt;
&lt;li&gt;Are we currently properly matching your workload with your ability?&lt;/li&gt;
&lt;li&gt;Which projects are healthy, which are at risk, which are fucked?&lt;/li&gt;
&lt;li&gt;Why?&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="the-shut-the-fuck-up-meeting"&gt;The &amp;lsquo;Shut The Fuck Up&amp;rsquo; Meeting.&lt;/h3&gt;
&lt;p&gt;Guess who is doing the shutting-the-fuck-up? You.&lt;/p&gt;
&lt;p&gt;Any of the above 1-2-1 meeting styles can quickly flip to this if your organisation has overt or hidden dysfunction. All organisations have both overt and hidden dysfunction.&lt;/p&gt;
&lt;p&gt;This is where you need to let the communication flow. Everything you know about active listening and NVC (non-violent communication) comes into play here. Your camera should be on (aside: if your camera isn&amp;rsquo;t on, you&amp;rsquo;re a shitty leader), and at most your contribution should be to prompt for more. If you find yourself becoming defensive, you need to stop and minimise your own needs. When your counterpart is talking about big problems, your job is to listen, NOT respond. This isn&amp;rsquo;t litigation, it&amp;rsquo;s an exercise in building trust and enhancing your reputation with your counterpart. Save the solutions for another time.&lt;/p&gt;
&lt;p&gt;Good luck.&lt;/p&gt;</content></entry><entry><title>Link Roundup #6</title><link href="https://www.jonatkinson.co.uk/blog/link-roundup-6/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/link-roundup-6/</id><published>2023-02-20T00:00:00Z</published><updated>2023-02-20T00:00:00Z</updated><content type="html">&lt;p&gt;Some of these are older links I found languishing, misfiled in my Safari favourites.&lt;/p&gt;
&lt;h3 id="the-collapse-of-complex-software"&gt;&lt;a href="https://nolanlawson.com/2022/06/09/the-collapse-of-complex-software/"&gt;The collapse of complex software&lt;/a&gt;&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;Right now, the software industry has been in a nearly two-decade economic boom (with some fits and starts), but the one sure thing in economics is that booms eventually turn to busts. During the boom, software companies can keep hiring new headcount to manage their existing software (i.e. more engineers to understand more boxes and arrows), but if their labor force is forced to contract, then that same system may become unmaintainable. A rapid and permanent reduction in complexity may be the only long-term solution.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If CV-driven development comes to an end, little of value will be lost. And that&amp;rsquo;s not a snarky &amp;ldquo;little of value&amp;rdquo;; there is no value generation, and we will not miss those people.&lt;/p&gt;
&lt;h3 id="chatgpt-explained-a-normie"&gt;&lt;a href="https://www.jonstokes.com/p/chatgpt-explained-a-guide-for-normies"&gt;ChatGPT Explained: A Normie&amp;rsquo;s Guide To How It Works&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Okay, this isn&amp;rsquo;t for normie Normies. This isn&amp;rsquo;t for your parents. But as an engineer&amp;rsquo;s guide, if you&amp;rsquo;re not familiar with the space, it&amp;rsquo;s fantastic.&lt;/p&gt;
&lt;h3 id="canada-bans-tiktok-on-government-devices-over-security-risks"&gt;&lt;a href="https://www.theguardian.com/technology/2023/feb/28/canada-bans-tiktok-on-government-phones-devices-over-security-risks"&gt;Canada bans TikTok on government devices over security risks&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;This is one of a cluster of articles I could have picked on this topic. No-one cares. The fact that we are seeing this action this late suggests that the toothpaste is already a long way out of the tube.&lt;/p&gt;
&lt;p&gt;Presumably the three-letter agencies have been monitoring traffic generated by the average TikTok user for years now. And presumably recently, China have made some rustle in this particular dark forest to cause this to bubble to the top of bureaucratic consciousness. Hopefully more at CCC.&lt;/p&gt;</content></entry><entry><title>Link Roundup #5</title><link href="https://www.jonatkinson.co.uk/blog/link-roundup-5/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/link-roundup-5/</id><published>2023-02-12T00:00:00Z</published><updated>2023-02-12T00:00:00Z</updated><content type="html">&lt;p&gt;Back after a vacation with the ChatGPT special edition. I suspect the theme of the next 5 years will be ChatGPT.&lt;/p&gt;
&lt;h3 id="chat-gpt-is-the-birth-of-the-real-web-30-and-it"&gt;&lt;a href="https://lajili.com/posts/post-2/"&gt;Chat GPT is the birth of the real Web 3.0, and it&amp;rsquo;s not going to be fun.&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;As the ouroboros continues to devour itself into ever-tightening recursion (A &amp;ldquo;recursive descent&amp;rdquo;! HA!), we approach the info-bullshit singularity.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I believe the web as a way to access information is getting worse by the day. Content generated with GPT-3 is going to start to show up for every long tail search under the sun, whereas regular content is going to get even heavier with SEO keyword to survive. The web is going to get worse and worse, and the only way to get good information is with a system that can extract the signal from the noise, a.k.a ChatGPT.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Actually, does this mark the beginning of the post-bullshit era; where &amp;lsquo;content&amp;rsquo; is &lt;em&gt;all&lt;/em&gt; meaningless? Does bullshit actually work as a distinction against the sea of GPT sludge? This leads on to&amp;hellip;&lt;/p&gt;
&lt;h3 id="poe"&gt;&lt;a href="https://apps.apple.com/app/id1640745955?platform=iphone"&gt;Poe&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;&amp;hellip; Poe. An AI chatbot trained on, no I am not-fucking-kidding, Quora answers. If Quora, which was truly the poster-child of the Web 2.0 tragedy of clout-chasing (or do we just call this the race to the bottom?), thinks it can extract meaningful signal from its corpus, please could we have Yahoo! Answers next? &lt;em&gt;How is AI formed&lt;/em&gt;?&lt;/p&gt;
&lt;h3 id="chatgpt-is-an-extra-ordinary-python-programmer"&gt;&lt;a href="https://davidamos.dev/chatgpt-is-an-extra-ordinary-python-programmer/"&gt;ChatGPT Is An Extra-Ordinary Python Programmer&lt;/a&gt;&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;ChatGPT codes like an expert beginner. It can help you be productive, but it can&amp;rsquo;t be trusted.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The average CEO in a non-technical business (ie. most of them) will not be able to distinguish between the output of a capable software engineering leader with a small team, and the output of massively parallel GPT output. Even guided, I despair. The reality is going to hit the industry like a train (and, for offshore teams, like a nuclear bomb).&lt;/p&gt;</content></entry><entry><title>Link Roundup #4</title><link href="https://www.jonatkinson.co.uk/blog/link-roundup-4/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/link-roundup-4/</id><published>2023-02-01T00:00:00Z</published><updated>2023-02-01T00:00:00Z</updated><content type="html">&lt;h3 id="we-are-being-levelled-down"&gt;&lt;a href="https://www.bloomberg.com/graphics/uk-levelling-up/inflation-government-delays-why-wealth-gap-widens.html"&gt;We are being levelled-down&lt;/a&gt;&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;The government has allocated £9.7 billion of levelling up funding since 2019. But between 2010 and 2020, annual funding from the national government to local councils in England fell from £41 billion to £26 billion adjusted for inflation — and the government’s critics say pots of levelling up funding since are scant compensation.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;That’s why we’re in the mess we’re in,&amp;rdquo; said Bev Craig, the Labour leader of Manchester City Council. &amp;ldquo;None of that money comes close to what was lost.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I know we&amp;rsquo;re in a kind of neo-politics, post-truth era (or maybe only nihilists are capable of grinding to the top of the political order), but the levelling-up agenda always appeared completely hollow from the beginning. Maybe around the early Cameron/Osborne era there was a glimmer of conviction about the process of evenly distributing the UK&amp;rsquo;s wealth, but that has been walked back (and further!) by successive governments, whose approach to the simple reality of the mathematics is: &amp;ldquo;just don&amp;rsquo;t talk about it&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;One wonders how different a Labour government would be; the level of cynicism is currently off the scale.&lt;/p&gt;</content></entry><entry><title>Hiring Correctly</title><link href="https://www.jonatkinson.co.uk/blog/hiring-correctly/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/hiring-correctly/</id><published>2023-01-27T00:00:00Z</published><updated>2023-01-27T00:00:00Z</updated><content type="html">&lt;p&gt;It&amp;rsquo;s a time of year when a lot of businesses are hiring, and layoffs mean there are more candidates in the marketplace. That also means the volume of online commentary concerning &amp;lsquo;bad practices&amp;rsquo; is growing; and the amount of bad practice is staggering. Otherwise rational engineering organisations seem very susceptible to copying their hiring process from whatever nonsense the FAANG interview handbook currently suggests &amp;ndash; without thinking about the people they are putting into the meat grinder.&lt;/p&gt;
&lt;p&gt;I wanted to lay out my experiences; in a small (ie. sub £2m/year) software company, a long way from Silicon Valley.&lt;/p&gt;
&lt;h3 id="the-job-advert"&gt;The Job Advert&lt;/h3&gt;
&lt;p&gt;A job advert isn&amp;rsquo;t a mystery novel. The point is not to intrigue. A job advertisement should be extremely explicit, and it should be exhaustive in its detail where possible. From an efficiency point of view, it&amp;rsquo;s much better for your candidate to spend time evaluating the role in written form, and making a choice to proceed or decline early in the process. This also saves time on the employer side.&lt;/p&gt;
&lt;p&gt;Your job advert should contain absolutely no misdirection. Don&amp;rsquo;t talk about &amp;lsquo;leading new projects&amp;rsquo; when your engineering team spends 65% of their time on maintenance work. Don&amp;rsquo;t be tempted to try to attract engineering talent with the zeitgeist; don&amp;rsquo;t advertise new frameworks and languages when you only use them for prototypes and not your day-to-day activities. If you allow your job advert to turn into a marketing piece, then you&amp;rsquo;re just saving up disappointment for your candidate&amp;rsquo;s first week on the job. Write about the company you &lt;em&gt;are&lt;/em&gt;, not the company you wish you were.&lt;/p&gt;
&lt;p&gt;This transparency must extend to the salary for the role. If you do one thing to improve your hiring processes, make it this one; you must be explicit about salary. If you have a range which depends on experience, that&amp;rsquo;s fine, but be explicit about how much that experience affects the compensation. The narrower the salary band you can offer, the more confidence you can build in your hiring funnel, because both you and the candidate have the same expectations.&lt;/p&gt;
&lt;p&gt;The salary you offer for a job is the clearest expression of the bargain you&amp;rsquo;re making between the organisation and the individual. If your organisational success depends on undervaluing your staff and underpaying them, then you need to fix that before you hire anyone else. This doesn&amp;rsquo;t mean you need to reach for the highest number you can; geography and prestige and debt all affect what an organisation can afford to spend on its talent, and your organisation might not be able to spend top dollar. If your number is lower than the competition, that is fine, but you can at least be honest about it. As the compensation discussion usually comes at the end of the hiring process, if either party enters into the process without clarity, then you run the risk of wasting everyone&amp;rsquo;s time. That will, quite correctly, damage your credibility.&lt;/p&gt;
&lt;h3 id="process-transparency"&gt;Process Transparency&lt;/h3&gt;
&lt;p&gt;Once a candidate has contacted you, you need to lay out the process for them in detail, from the very first communication. No matter what your process is, it probably has multiple stages, and those stages all have a window of time, and your candidate needs to know this so they can plan accordingly. So if your process is 3 stages, and it usually takes 2 weeks, then be clear about this. Highlight any potential risks (particularly around the availability of key people); nothing destroys trust and motivation faster than a candidate who is simply left waiting for you to &amp;lsquo;get back in touch&amp;rsquo;.&lt;/p&gt;
&lt;p&gt;I try to always open an interaction with a reminder of where the candidate is in the process, and any changes. It never hurts to let someone know if the process is running off-schedule, or if someone dealing with a future stage is unavailable. Transparency at all times.&lt;/p&gt;
&lt;p&gt;The steps in your process are commitments, both to the candidate and the business. You should take them seriously, as this may be your first opportunity to demonstrate your integrity to someone you may work with for a long time to come. Cancelling interviews, arranging meetings without proper notice, not having the interviewer available &amp;ndash; these are all examples of bad practice and it&amp;rsquo;s hard to trust an employer who breaks their early commitments.&lt;/p&gt;
&lt;h3 id="feedback"&gt;Feedback&lt;/h3&gt;
&lt;p&gt;Speaking of your commitments to a candidate, you should commit to feedback, regardless of whether it&amp;rsquo;s good, or bad, or awkward. Telling a candidate they did a good job in an early stage of your process can give them confidence for future stages. Be human; if a candidate impressed you, let them know!&lt;/p&gt;
&lt;p&gt;Equally, being clear and empathetic in your feedback for a candidate who isn&amp;rsquo;t right will take you minutes, but may save that person hours of wasted time in a job search which isn&amp;rsquo;t well-targeted. Bringing a new employee on board is a collaborative process. I have given clear negative feedback to candidates who I didn&amp;rsquo;t hire, and then employed those same people a few years later. It costs nothing and builds trust.&lt;/p&gt;
&lt;h3 id="no-technical-tests"&gt;No Technical Tests&lt;/h3&gt;
&lt;p&gt;This might not apply to all organisations: but I strongly advise you to remove the technical test. The technical test is usually a way of uncovering technical proficiency. However, a well-run interview process, asking the right questions, will uncover the candidate&amp;rsquo;s technical proficiency naturally. Asking about their experience, examples of work, examples of past mistakes &amp;ndash; all of these things develop towards understanding this.&lt;/p&gt;
&lt;p&gt;If you absolutely must do technical tests, then do so with an unfixed deadline, and pay the candidate for their time. I usually ask a candidate for an up-front day-rate early in the interview process, and work from there. Expect the test to take time to come back, and always assume your candidate has, and is entitled to, a private home life which your process should not interfere with. Expecting someone to work their day-job, then spend their evenings doing a technical test, just to meet your arbitrary deadline? Unacceptable.&lt;/p&gt;
&lt;h3 id="meet-candidates-in-context"&gt;Meet Candidates In Context&lt;/h3&gt;
&lt;p&gt;If you are a remote organisation, conduct the process remotely. If you&amp;rsquo;re an in-person organisation, meet your candidates at your offices. The interview process is all about building context, and your candidate is watching and noticing everything. A candidate would, unsurprisingly, be skeptical about an &amp;lsquo;100% remote&amp;rsquo; position when the interviewer is sat in an office, with poor videoconferencing equipment, and a busy office in their background; a clearly dissonant context.&lt;/p&gt;
&lt;h3 id="dont-use-a-recruiter-until-you-find-the-right-one"&gt;Don&amp;rsquo;t Use A Recruiter, Until You Find The Right One&lt;/h3&gt;
&lt;p&gt;There is perennial natural hostility towards recruiters among most software organisations. Low-quality recruiters are a constant annoyance, who treat the recruitment process as having a probability-based outcome. Ignore all recruiters who are playing the numbers by sending unsolicited CVs to cold contacts.&lt;/p&gt;
&lt;p&gt;However, there will come an inflection point when you find the right recruiter, and suddenly working with recruiters becomes incredibly valuable. A high-quality recruiter:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;has pedigree with a particular technology stack, the narrower the better. A &amp;lsquo;software engineering&amp;rsquo; recruiter is no help to you. A &amp;lsquo;Python software engineering recruiter&amp;rsquo; is better, but a &amp;lsquo;Django software engineering recruiter&amp;rsquo; is best.&lt;/li&gt;
&lt;li&gt;understands their candidate base, and networks with them all the time. Note that LinkedIn &lt;em&gt;isn&amp;rsquo;t&lt;/em&gt; networking, and sending emails &lt;em&gt;isn&amp;rsquo;t&lt;/em&gt; networking. I&amp;rsquo;m talking about real, in-person, taking-people-out-for-dinner networking. Boots on the ground.&lt;/li&gt;
&lt;li&gt;is willing to take the time, sometimes months, to understand your business and the type of candidate who will work for you.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you find a recruiter who works for you like this, then you should stick closely to them, because that is the relationship which will ensure you only encounter high-quality candidates at the top of your hiring funnel. Quality in, quality out.&lt;/p&gt;</content></entry><entry><title>Link Roundup #3</title><link href="https://www.jonatkinson.co.uk/blog/link-roundup-3/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/link-roundup-3/</id><published>2023-01-20T00:00:00Z</published><updated>2023-01-20T00:00:00Z</updated><content type="html">&lt;h3 id="amazon-is-discontinuing-its-amazonsmile-charity-program-next-month"&gt;&lt;a href="https://arstechnica.com/gadgets/2023/01/amazonsmile-ending-donation-program-had-limited-impact-amazon-says/"&gt;Amazon is discontinuing its AmazonSmile charity program next month&lt;/a&gt;&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Amazon emailed participants of the free program about the news on Wednesday. The email said that AmazonSmile, which launched in 2013, &amp;lsquo;has not grown to create the impact that we had originally hoped.&amp;rsquo;&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It&amp;rsquo;s despicable that the Smile programme only managed to have ~$45m/year impact, during a time when Amazon was one of the most valuable companies in the world. I&amp;rsquo;ll happily acknowledge that is a milquetoast take, but in truth Amazon did a poor job of promoting Smile, and it was difficult for charities to engage their constituents. The programme could have offered a link to automatically enroll, e.g. amzn.to/cancerresearchuk, an idea I raised a couple of times with our AWS account manager. A very disappointing whimper.&lt;/p&gt;
&lt;h3 id="programmer-salaries-in-the-age-of-llms"&gt;&lt;a href="https://milkyeggs.com/?p=303"&gt;Programmer salaries in the age of LLMs&lt;/a&gt;&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Just like how a partner at [law firm] Cravath likely sketches an outline of how they want to approach a particular case and swarms of largely replaceable lawyers fill in the details, we are perhaps converging to a future where a FAANG L7 can just sketch out architectural details and the programmer equivalent of paralegals will simply query the latest LLM and clean up the output.&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I don&amp;rsquo;t understand the assertion that this will only be available to senior FAANG engineers. Software engineers at all levels can benefit from understanding and leveraging LLM-assisted practice. Resistance is futile. That isn&amp;rsquo;t to say that we are removing the &amp;lsquo;craft&amp;rsquo; of software engineering (though you could argue that most languages have a standardised &amp;lsquo;style&amp;rsquo; and &amp;lsquo;way&amp;rsquo;, especially at higher levels), but more that the craft is moving to a higher level of abstraction. For better or worse.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s incredible that we still have microservice defenders in 2023. It&amp;rsquo;s a prime example of the immaturity of software engineering; the cargo-cult is truly rampant.&lt;/p&gt;
&lt;p&gt;The article makes the point that this is &amp;lsquo;over-engineering&amp;rsquo;, which I take issue with. Over-engineering usually implies a baroque, but working solution, when microservices rarely even work in practice.&lt;/p&gt;
&lt;p&gt;In a parallel universe, in which there exists an effective way to measure software engineering productivity, the thousands of years of effort wasted on microservices would be exposed.&lt;/p&gt;
&lt;h3 id="is-moving-to-mastodon-ethical"&gt;&lt;a href="https://www.tbray.org/ongoing/When/202x/2022/12/21/Mastodon-Ethics"&gt;Is Moving to Mastodon Ethical?&lt;/a&gt;&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;Everyone is wondering out loud whether Mastodon can take the strain and whether it can provide cool new features. What we haven’t been discussing are two ethical questions: First, is it OK to bail out of Twitter? And if bailing out, is Mastodon an acceptable place to land?&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Of course it is.&lt;/p&gt;
&lt;p&gt;While I won&amp;rsquo;t participate, it&amp;rsquo;s encouraging to see federation beginning to pierce the mainstream consciousness. Our federated software is our most successful software; email and the web. The model is proven and reliable and scalable, even if it&amp;rsquo;s only controlled by some independent greybeards rather than a corporation (again, this model worked just fine until about 2010).&lt;/p&gt;
&lt;p&gt;No-one makes the case that &amp;ldquo;racists and abusers have access to email, therefore no-one should use email&amp;rdquo;.&lt;/p&gt;</content></entry><entry><title>Link Roundup #1</title><link href="https://www.jonatkinson.co.uk/blog/link-roundup-1/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/link-roundup-1/</id><published>2023-01-05T00:00:00Z</published><updated>2023-01-05T00:00:00Z</updated><content type="html">&lt;p&gt;A few years ago I used to write these short articles with links to interesting things I found on the internet. I stopped doing that because I didn&amp;rsquo;t have the time to write them. Now I have the time, so I&amp;rsquo;m going to start doing it again.&lt;/p&gt;
&lt;h3 id="surviving-disillusionment"&gt;&lt;a href="https://www.spakhm.com/p/surviving-disillusionment"&gt;Surviving Disillusionment&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;The software engineering discipline is deeply immature. As we seem unable to rectify this, it&amp;rsquo;s unsurprising that a lot of software engineering has been relegated to software plumbing (and soon, further relegated to proofreading AI output). So disillusionment is never far from anyone&amp;rsquo;s mind:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;[&amp;hellip;] engineers are faced with two realities. One reality is the atmosphere of new technology, its incredible power to transform the human condition, the joy of the art of doing science and engineering, the trials of the creative process, the romance of the frontier. The other reality is the frustration and drudgery of operating in a world of corporate politics, bureaucracy, envy and greed [&amp;hellip;]&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3 id="oligopoly-everywhere"&gt;&lt;a href="https://experimentalhistory.substack.com/p/oligopoly-everywhere"&gt;Oligopoly Everywhere&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Consider that while the internet grapples with centralisation and federation, the same question has been asked for a long time in other media. What are we doing to make the media accessible to outsiders, or independents? Could we ever resist the safety and predictability which centralisation promises, in art, politics, software, and systems?&lt;/p&gt;
&lt;h3 id="where-did-software-go-wrong"&gt;&lt;a href="https://blog.jse.li/posts/software/"&gt;Where Did Software Go Wrong?&lt;/a&gt;&lt;/h3&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;ldquo;The Internet was a fantastic assemblage of all the world’s knowledge, and it was a bastion of freedom that would make time, space, and geopolitics irrelevant. Ignorance, authoritarianism, and scarcity would be relics of the meatspace past.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;blockquote&gt;
&lt;p&gt;Things didn’t quite turn out that way. The magic disappeared and our optimism has since faded. Our websites are slow and insecure; our startups are creepy and unprofitable; our president Tweets hate speech; we don’t trust our social media apps, webcams, or voting machines. [&amp;hellip;] Where did it all go wrong?&amp;rdquo;&lt;/p&gt;
&lt;/blockquote&gt;</content></entry><entry><title>A game a week #3</title><link href="https://www.jonatkinson.co.uk/blog/game-a-week-3/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/game-a-week-3/</id><published>2021-03-28T00:00:01Z</published><updated>2021-03-28T00:00:01Z</updated><content type="html">&lt;p&gt;This week didn&amp;rsquo;t have much consistency. I did a lot of work on this game last Sunday, and then I was busy most of the week, and I barely touched the game. I finished it yesterday. I think this shows in the final product; last week I wrote about being glad of having a refactoring pass on the code, something which I didn&amp;rsquo;t manage to have this time.&lt;/p&gt;
&lt;p&gt;I did make some progress though, I think. I had a conversation with a friend last week about &lt;code&gt;snake.py&lt;/code&gt;, and how I did at some points have interesting ideas which I ended up walking back because of the complexity it added to the code. I realised that was stupid, and that I know plenty of tricks to manage complexity in Python code; I just had this &amp;rsquo;everything in a single file&amp;rsquo; constraint in my head, which didn&amp;rsquo;t make any sense.&lt;/p&gt;
&lt;p&gt;So, with that constraint lifted, and with some ideas about common things which I needed to implement in Weeks 1, 2, and 3, I began extracting some items into an &lt;code&gt;engine&lt;/code&gt; package. Right now these aren&amp;rsquo;t anything special; but there is a useful &lt;code&gt;TwoDArray&lt;/code&gt; class which I think I&amp;rsquo;ll use a lot in the future.&lt;/p&gt;
&lt;h2 id="tetrispy"&gt;tetris.py&lt;/h2&gt;
&lt;p&gt;Tetris is a game which I have played a lot; I had a fairly good idea before I began of the data structures behind Tetris, and how it&amp;rsquo;s implemented on the NES and the Gameboy. Tetris is interesting because it has a sort of nested gameplay loop; the playfield advances continuously based on the level, but within that loop you can make multiple inputs and rotations.&lt;/p&gt;
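&lt;p&gt;That nested loop can be sketched roughly like this (the names and timings here are illustrative, not taken from &lt;code&gt;tetris.py&lt;/code&gt;):&lt;/p&gt;

```python
def frames_per_drop(level):
    # Hypothetical gravity curve: higher levels drop faster,
    # clamped at one drop per frame.
    return max(1, 10 - level)

def simulate(total_frames, level, inputs=()):
    """Count gravity drops while handling per-frame player input."""
    drops, moves = 0, []
    for frame in range(total_frames):
        # Inner loop: moves and rotations are processed every frame...
        if frame in inputs:
            moves.append(frame)
        # ...while the playfield only advances on the gravity schedule.
        if frame % frames_per_drop(level) == frames_per_drop(level) - 1:
            drops += 1
    return drops, moves

print(simulate(30, level=0, inputs={2, 3}))  # (3, [2, 3])
```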
&lt;p&gt;I began by putting together an implementation of the data structures for the pieces; there are some interesting dynamics in how the irregular shapes rotate (not always around the pivot you&amp;rsquo;d expect), and I wanted to create something which was classic-Tetris accurate:&lt;/p&gt;
&lt;p&gt;&lt;img src="/_media/blog/game-a-week-3a.gif" alt="tetris.py screen capture 1"&gt;&lt;/p&gt;
&lt;p&gt;After this, I began placing the pieces in the playfield, establishing constraints on their movement (you can see a bug below where certain pieces can&amp;rsquo;t move across the whole X dimension due to how the pieces were modelled). So far, this was a really naive implementation:&lt;/p&gt;
&lt;p&gt;&lt;img src="/_media/blog/game-a-week-3b.gif" alt="tetris.py screen capture 2"&gt;&lt;/p&gt;
&lt;p&gt;Next, I began actually modelling the game state. The game is a 2d array of integers. I use certain integers for different tasks: &lt;code&gt;0&lt;/code&gt; is empty space, &lt;code&gt;1&lt;/code&gt; is a solid wall (ie. the edge of the playfield), &lt;code&gt;2&lt;/code&gt; is a &amp;lsquo;settled&amp;rsquo; block, &lt;code&gt;3&lt;/code&gt; is an active block. The falling piece data is additively combined with the playfield for collision detection: so for example, if our active play piece &lt;code&gt;3&lt;/code&gt; combines with the wall of the playfield &lt;code&gt;1&lt;/code&gt;, then we get &lt;code&gt;4&lt;/code&gt;, which is a collision, and the move can be rejected.&lt;/p&gt;
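&lt;p&gt;As a rough sketch of that additive check (simplified from the real game code):&lt;/p&gt;

```python
EMPTY, WALL, SETTLED, ACTIVE = 0, 1, 2, 3

def try_move(field, piece_cells):
    # Additively merge the active piece into a copy of the playfield;
    # any cell greater than ACTIVE (3) means the piece overlapped a
    # WALL (1) or a SETTLED block (2), so the move is rejected.
    merged = [row[:] for row in field]
    for x, y in piece_cells:
        merged[y][x] += ACTIVE
    collision = any(cell > ACTIVE for row in merged for cell in row)
    return not collision, merged

field = [
    [WALL, EMPTY, EMPTY, WALL],
    [WALL, EMPTY, SETTLED, WALL],
]
ok, _ = try_move(field, [(1, 0), (2, 0)])   # fits in open space
print(ok)  # True
ok, _ = try_move(field, [(1, 1), (2, 1)])   # overlaps the settled block
print(ok)  # False
```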
&lt;p&gt;This additive model produces some interesting bugs, for example when you constantly add each game tick rather than correctly resetting state:&lt;/p&gt;
&lt;p&gt;&lt;img src="/_media/blog/game-a-week-3c.gif" alt="tetris.py screen capture 3"&gt;&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m pretty sure there are still bugs in this approach; occasionally due to how this collision is calculated, you can get a piece stuck in the wall with the right combination of input mashing and spins.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m reasonably happy with the end result; this is a minimum viable Tetris game, but it supports advanced movement (for example, you can properly T-spin if you&amp;rsquo;re fast enough), and the game-over state works similarly to the Gameboy, where you see the blocks overwrite each other for a few game ticks. This is pleasingly authentic.&lt;/p&gt;
&lt;p&gt;&lt;img src="/_media/blog/game-a-week-3d.gif" alt="tetris.py screen capture 4"&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/jonatkinson/games/blob/main/tetris/game.py"&gt;Here&amp;rsquo;s the code&lt;/a&gt;.&lt;/p&gt;</content></entry><entry><title>A game a week #2</title><link href="https://www.jonatkinson.co.uk/blog/game-a-week-2/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/game-a-week-2/</id><published>2021-03-19T00:00:01Z</published><updated>2021-03-19T00:00:01Z</updated><content type="html">&lt;p&gt;I captured the momentum of last week, and began this game on Sunday evening.&lt;/p&gt;
&lt;p&gt;There was certainly more time spent on this game, but I spread it out over several evenings. I&amp;rsquo;ve also shared the first week&amp;rsquo;s post with a few people, and I&amp;rsquo;ve had some interesting conversations. I&amp;rsquo;ve started to develop a rough project list in my head, building complexity. This list is very likely to change.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Tetris&lt;/li&gt;
&lt;li&gt;Sokoban&lt;/li&gt;
&lt;li&gt;Conway&amp;rsquo;s Game of Life&lt;/li&gt;
&lt;li&gt;A perspective/sprite scale based racing game (think Outrun)&lt;/li&gt;
&lt;li&gt;A raycaster&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="snakepy"&gt;snake.py&lt;/h2&gt;
&lt;p&gt;This was a level up in complexity from last week. There was a lot more game state to manage, and while I understood the game pretty well before I began, there were some missteps.&lt;/p&gt;
&lt;p&gt;I managed to build this quite quickly. I probably had the game fully &amp;lsquo;working&amp;rsquo; after two short sessions of work, so maybe 3 hours in total. I then had plenty of remaining time to refactor, which I&amp;rsquo;m glad of. The final version of the code (which could probably be tightened up further) is much more elegant due to the time I could spend analysing the code and making improvements.&lt;/p&gt;
&lt;p&gt;I also spent time on &amp;lsquo;game&amp;rsquo;-related things (on the web, I guess we would call this UX), like a score and a lives system, useful information for the player in intermission screens, and some really basic sound. All these add up to make it feel like a real game rather than &lt;code&gt;pong.py&lt;/code&gt; from last week.&lt;/p&gt;
&lt;p&gt;&lt;img src="/_media/blog/game-a-week-2b.gif" alt="snake.py screen capture"&gt;&lt;/p&gt;
&lt;p&gt;I think my primary problem was one of managing game state. I ended up with an awkward system where the &lt;code&gt;SnakeGame&lt;/code&gt; contains the &lt;code&gt;Snake&lt;/code&gt; instance, and sometimes when they need to reference each other (like in &lt;code&gt;Snake.collide()&lt;/code&gt;), I pass a complete reference to the game. This is pointless, and really both should just be in the global scope for a simple game like this.&lt;/p&gt;
&lt;p&gt;I also think there is some subtle duplication between the collision detection functions on the Snake, and the &lt;code&gt;SnakeGame.empty_location()&lt;/code&gt; function. &lt;code&gt;empty_location()&lt;/code&gt; is used to find a position in the game world which is not occupied by anything else (eg. a wall, or the snake&amp;rsquo;s body), in order to place down a new item of food. Really, this is just the inverse of detecting a collision, so I think I could refactor further to simplify.&lt;/p&gt;
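&lt;p&gt;Roughly, the relationship looks like this (a simplified sketch, not the actual &lt;code&gt;snake.py&lt;/code&gt; code):&lt;/p&gt;

```python
import random

def collide(occupied, pos):
    # The same membership test the snake uses against walls and its body.
    return pos in occupied

def empty_location(occupied, width, height, rng=random):
    # The inverse: keep sampling cells until collide() fails, then
    # use that cell for the next item of food.
    while True:
        pos = (rng.randrange(width), rng.randrange(height))
        if not collide(occupied, pos):
            return pos
```

&lt;p&gt;Sharing one predicate between both directions would remove the duplication.&lt;/p&gt;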
&lt;p&gt;While I was browsing the PyGame documentation, I came across the &lt;code&gt;Rect&lt;/code&gt; collision detection functions, too. So I wonder if there&amp;rsquo;s a &amp;lsquo;stateless&amp;rsquo; version of this game which just operates based on collisions between things on the screen. I expect I&amp;rsquo;ll explore that more soon, because I want to make better use of the &lt;code&gt;Sprite&lt;/code&gt; classes rather than just drawing everything manually as tiles.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/jonatkinson/games/blob/main/snake/snake.py"&gt;Here&amp;rsquo;s the code&lt;/a&gt;.&lt;/p&gt;</content></entry><entry><title>A game a week #1</title><link href="https://www.jonatkinson.co.uk/blog/game-a-week-1/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/game-a-week-1/</id><published>2021-03-14T00:00:00Z</published><updated>2021-03-14T00:00:00Z</updated><content type="html">&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Like a huge amount of software people, when I was young I learned how to write code because I wanted to make games.&lt;/p&gt;
&lt;p&gt;Now fast-forward 30 years: I&amp;rsquo;m a full-time software engineer, I lead software teams working on big, complex projects, but I&amp;rsquo;ve still never finished a single game.&lt;/p&gt;
&lt;p&gt;Looking back, the reason that I never finished a game was the scale of my vision. Back when I had the time to make games, I never set out to make &amp;lsquo;small&amp;rsquo; games. They were always too grand, the vision too large, assuming that any knowledge which I was lacking could be picked up along the way. So each time I set out to write a game, I failed.&lt;/p&gt;
&lt;p&gt;I think that I&amp;rsquo;ve learned more humility since then, and I&amp;rsquo;ve learned how to learn. I&amp;rsquo;ve learned about building understanding via incremental steps, immediate feedback loops, and the small victories which bring progress.&lt;/p&gt;
&lt;p&gt;I still want to make games.&lt;/p&gt;
&lt;p&gt;So I figured that I should start with &lt;em&gt;small&lt;/em&gt; games. And build one a week (or whenever I have the time). I just built my first one.&lt;/p&gt;
&lt;h2 id="pongpy"&gt;pong.py&lt;/h2&gt;
&lt;p&gt;&lt;img src="/_media/blog/game-a-week-1.gif" alt="pong.py game screen capture"&gt;&lt;/p&gt;
&lt;p&gt;This isn&amp;rsquo;t even Pong. It&amp;rsquo;s more like a half-finished Breakout clone, though it&amp;rsquo;s missing the bricks to break. But it does some things which I&amp;rsquo;d forgotten about: initialising a screen, creating a timed input loop, responding to input. These are foreign to me, given that I&amp;rsquo;ve spent most of my career dealing with the HTTP request/response cycle. Real-time code is a mystery to me (I had to seriously rethink my approach once I typed &lt;code&gt;import threading&lt;/code&gt;!)&lt;/p&gt;
&lt;p&gt;The stack is unimpressive: Windows, VSCode, Python, PyGame. I use Windows in my &amp;lsquo;real&amp;rsquo; job, though I run a remote VSCode server, so it&amp;rsquo;s effectively Linux. I forgot how many hoops you need to jump through to run simple code on Windows (and how unpleasant the Windows shell is). None of this was insurmountable, but I suppose that building web software all day on Unix-likes spoils you; it&amp;rsquo;s all so simple and well understood.&lt;/p&gt;
&lt;p&gt;The game has no standout features. The ball only follows a 45-degree path; my geometry is terrible, and I didn&amp;rsquo;t want to exercise it here. The game has a completely linear difficulty progression; I challenge anyone to score more than 20 points before it becomes too fast and unbalanced to keep up.&lt;/p&gt;
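&lt;p&gt;The upside of the 45-degree constraint is that the ball physics needs no geometry at all; a sketch of the idea (not the exact &lt;code&gt;pong.py&lt;/code&gt; code):&lt;/p&gt;

```python
def step(pos, vel, width, height):
    # With dx and dy limited to -1 or +1 the ball always moves at 45
    # degrees, so there is no trigonometry; bouncing is just flipping
    # the sign of whichever component hit a wall.
    (x, y), (dx, dy) = pos, vel
    x, y = x + dx, y + dy
    if x <= 0 or x >= width - 1:
        dx = -dx
    if y <= 0 or y >= height - 1:
        dy = -dy
    return (x, y), (dx, dy)

pos, vel = (1, 1), (1, 1)
for _ in range(3):
    pos, vel = step(pos, vel, 10, 10)
print(pos, vel)  # (4, 4) (1, 1)
```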
&lt;p&gt;Nevertheless, I&amp;rsquo;m pleased that some of the knowledge came back to me, that PyGame is still around (it&amp;rsquo;s the same library which 15-year-old me was using so long ago), and that some of my real-world experience has helped me see these things as annoying details to be understood, rather than insurmountable obstacles.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://github.com/jonatkinson/games/blob/main/pong.py"&gt;Here&amp;rsquo;s the code&lt;/a&gt;.&lt;/p&gt;</content></entry><entry><title>Listing all Django URLs in a project</title><link href="https://www.jonatkinson.co.uk/blog/list-all-django-urls/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/list-all-django-urls/</id><published>2021-02-16T07:00:00Z</published><updated>2021-02-16T07:00:00Z</updated><content type="html">&lt;p&gt;This will list all registered Django URLs in a project, with their arguments.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from django.conf import settings
from django.urls import URLPattern, URLResolver

urlconf = __import__(settings.ROOT_URLCONF, {}, {}, [''])

def urls(ls, acc=None):
    if acc is None:
        acc = []
    if not ls:
        return
    l = ls[0]
    if isinstance(l, URLPattern):
        yield acc + [str(l.pattern)]
    elif isinstance(l, URLResolver):
        yield from urls(l.url_patterns, acc + [str(l.pattern)])
    yield from urls(ls[1:], acc)

for u in urls(urlconf.urlpatterns):
    print(str(u))
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;ls /home/yourname | xargs -n1 -P8 -I% rsync -Pa % destination:/home/yourname/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You can adjust the &lt;code&gt;-P&lt;/code&gt; argument to change the number of parallel processes to run.&lt;/p&gt;</content></entry><entry><title>Dokku Cheatsheet</title><link href="https://www.jonatkinson.co.uk/blog/dokku-cheatsheet/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/dokku-cheatsheet/</id><published>2020-07-12T16:09:06Z</published><updated>2020-07-12T16:09:06Z</updated><content type="html">&lt;p&gt;I always forget how to setup new Dokku apps; I do it infrequently enough that it never sticks.&lt;/p&gt;
&lt;p&gt;Create the app:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ ssh host
$ dokku apps:create myapp
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create the database and link it to the app:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ dokku mysql:create myapp
$ dokku mysql:link myapp myapp
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Set some variables. &lt;code&gt;DEBUG&lt;/code&gt; and &lt;code&gt;SECRET_KEY&lt;/code&gt; are Django-specific, and the other is for SSL provisioning:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ dokku config:set --no-restart myapp SECRET_KEY=foo
$ dokku config:set --no-restart myapp DEBUG=True
$ dokku config:set --no-restart myapp DOKKU_LETSENCRYPT_EMAIL=me@example.com
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, on the localhost, setup the repository and push:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ git remote add dokku dokku@host:myapp
$ git push dokku master
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, back on the live server, setup port forwarding (I&amp;rsquo;m not 100% sure if this is necessary, and note that I run &lt;code&gt;gunicorn&lt;/code&gt; on port 8000 in my containers), and SSL:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ ssh host
$ dokku proxy:ports-add myapp http:80:8000
$ dokku letsencrypt myapp
$ dokku letsencrypt:cron-job --add
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Done.&lt;/p&gt;</content></entry><entry><title>Github Marketplace Endgame</title><link href="https://www.jonatkinson.co.uk/blog/github-marketplace/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/github-marketplace/</id><published>2020-07-02T17:46:06Z</published><updated>2020-07-02T17:46:06Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;ve been thinking recently about Github Marketplace; where it may lead.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m responsible for technical purchasing at &lt;a href="https://www.giantmade.com/"&gt;Giant&lt;/a&gt;. That means I&amp;rsquo;m ultimately accountable for the purchases we make, but I&amp;rsquo;m also required to justify them. We&amp;rsquo;re an SME, our expenses cannot easily flex beyond their budget, and there are other people in the business who will examine the expenses line-by-line most months.&lt;/p&gt;
&lt;p&gt;This level of scrutiny is challenging, and having to develop organisational buy-in for each purchase is exhausting. Sometimes I need to justify temporary SaaS purchasing for a month or two; sometimes I&amp;rsquo;m going to take a longer-term decision to buy for multiple years. The latter of these commitments gives me more flexibility, because there is no expectation of predicting the future or of cost changes. That is a roundabout way of saying that I get more questions about spending £100 on a random SaaS transaction for one month than I do spending £100,000 on AWS in a year. I speak to other CTOs, and we&amp;rsquo;re all in similar positions.&lt;/p&gt;
&lt;p&gt;Of course, water takes the path of least resistance; so when I have a purchasing decision to make, no matter how large or small, I&amp;rsquo;m going to ask two questions:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Can I purchase this through AWS?&lt;/li&gt;
&lt;li&gt;Can I purchase this through Github?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Either purchase option subsumes the costs into a larger, more flexible one, and leaves me with little justification to make. It&amp;rsquo;s an easier position to be in.&lt;/p&gt;
&lt;p&gt;Anyway; to the point. I&amp;rsquo;ve been thinking a lot about Github Marketplace recently, and its potential to change engineering purchasing. Currently, my Github seats are the minority part of my overall Github bill; I also buy CircleCI, Codecov, and several other services from the Github Marketplace. And as long as the above two questions remain, I&amp;rsquo;m going to continue to buy services on the Github Marketplace. And it&amp;rsquo;s interesting to think about what these could be:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Development instances (Coming soon via &lt;a href="https://github.com/features/codespaces/"&gt;Github Codespaces&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Further automations and actions.&lt;/li&gt;
&lt;li&gt;Code reviews on demand?&lt;/li&gt;
&lt;li&gt;Software licensing (it&amp;rsquo;s not available yet, but I&amp;rsquo;d love to add my JetBrains purchasing to Github)&lt;/li&gt;
&lt;li&gt;Azure services; this seems like a no-brainer.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A future where I buy the engineers on my team their &amp;lsquo;tool stack&amp;rsquo; in a single place is something I look forward to. A future where I use the Github Marketplace to access freelancers with verified contributions in a given language is unsettling (for reasons I can&amp;rsquo;t quite pinpoint) but possible.&lt;/p&gt;
&lt;p&gt;Of course, there are monopolistic concerns here; but I&amp;rsquo;m also resigned to convenience winning against all forces of market intervention (at least, over a long enough period). I was a BitBucket contrarian for a long time, but the pull of the single, coherent marketplace is pretty irresistible.&lt;/p&gt;
&lt;p&gt;tldr; Buy MSFT.&lt;/p&gt;</content></entry><entry><title>Digital Workshop</title><link href="https://www.jonatkinson.co.uk/blog/digital-workshop/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/digital-workshop/</id><published>2020-06-24T00:00:00Z</published><updated>2020-06-24T00:00:00Z</updated><content type="html">&lt;p&gt;In the neverending quest for the &amp;lsquo;right&amp;rsquo; development setup, I&amp;rsquo;ve been thinking about the concept of how to organise and make available my tools. This is partly motivated by some &amp;rsquo;new&amp;rsquo; (to me at least) tools and options which are becoming more popular; remote development (as &lt;a href="https://code.visualstudio.com/docs/remote/remote-overview"&gt;included in VSCode&lt;/a&gt;), both in the sense of a remote server, but also a remote environment, like a container.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m currently editing this on a remote server; I&amp;rsquo;ve used this setup for about 90 days or so now. The motivation for this is partly the churn I was experiencing earlier in the year around the technology setup I was using day to day (just look at the recent post history to get an idea: Arch, FreeBSD, Void, and now Windows 10). Having a volatile setup isn&amp;rsquo;t conducive to productive work, so I moved all my development tools to a server with Digital Ocean, and as long as my local machine had an SSH client and my SSH key, then I could work.&lt;/p&gt;
&lt;p&gt;So far, this setup has been very pleasant. It&amp;rsquo;s really nice to treat the computers which I use for development as thin clients, and being able to move from my desktop to any laptop I have to hand and keep on working is liberating. Having a reliable environment is comfortable; the metaphor is something like a craftsman being at home in his own workshop.&lt;/p&gt;
&lt;p&gt;There are annoyances, however. It takes time to connect to the server (only a few seconds here and there, but those add up), and very occasionally I&amp;rsquo;ll lose my connection for no apparent reason (I guess maybe wifi congestion in my house). I also lose state between machines; VSCode doesn&amp;rsquo;t seem to keep track of the buffers which I have open, which means I lose context sometimes. Some of these problems could be solved in other ways; SSH is used as the underlying transport, and I know there are ways to make SSH more robust to losing connection, and I imagine some of these techniques would work.&lt;/p&gt;
&lt;p&gt;I also have some underlying anxiety about the way my workshop was built; it&amp;rsquo;s completely organic. I created the environment in a hurry, and since then I&amp;rsquo;ve neglected any repeatability; my environment is a mess of &lt;code&gt;apt&lt;/code&gt; packages, some static binaries in &lt;code&gt;~/bin/&lt;/code&gt;, plus the usual &lt;code&gt;npm&lt;/code&gt; vomit, various tools installed in mysterious ways with &lt;code&gt;curl | bash&lt;/code&gt; (for shame!). If I want my homestead to really feel like home, I need to automate the setup.&lt;/p&gt;
&lt;p&gt;This led me to investigate VSCode remote development in a container; this should alleviate the repeatability AND the connectivity problems, assuming that on each machine I&amp;rsquo;m willing to add a dependency (Docker) to go alongside the others (VSCode, SSH). It&amp;rsquo;s something I intend to explore over the next few weeks. The inventory of tools I need is something like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;pyenv&lt;/code&gt;, &lt;code&gt;pipx&lt;/code&gt;, &lt;code&gt;libpython-dev&lt;/code&gt; and friends.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;build-essential&lt;/code&gt; and friends.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;git&lt;/code&gt;, my &lt;code&gt;.config&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;git flow&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;My &lt;code&gt;awscli&lt;/code&gt; setup, and associated credentials and configuration.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ripgrep&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hugo&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl&lt;/code&gt; and various other k8s related tools.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;node&lt;/code&gt;, &lt;code&gt;deno&lt;/code&gt;, &lt;code&gt;npm&lt;/code&gt; etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is already quite a diverse list; I wonder if it also lends itself to investigating &lt;code&gt;nix&lt;/code&gt; to manage it.&lt;/p&gt;
&lt;p&gt;More on this as I progress.&lt;/p&gt;</content></entry><entry><title>Serverless Thoughts #2</title><link href="https://www.jonatkinson.co.uk/blog/serverless-thoughts-2/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/serverless-thoughts-2/</id><published>2020-06-23T09:30:00Z</published><updated>2020-06-23T09:30:00Z</updated><content type="html">&lt;p&gt;After hacking yesterday through a few issues with my AWS account (transferred a domain name from one AWS account to another, but didn&amp;rsquo;t remove the &lt;code&gt;NS&lt;/code&gt; records from the original account led to some DNS confusion and AWS couldn&amp;rsquo;t then verify certificates in Certificate Manager), I&amp;rsquo;ve managed to come to a good place with &lt;code&gt;serverless&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;I now have a Flask application, with multiple routes, running behind an API Gateway with a custom domain.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s available as a cookiecutter repository at &lt;a href="https://github.com/jonatkinson/flask-serverless"&gt;flask-serverless&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;There was nothing particularly notable about getting to this point. The &lt;code&gt;serverless&lt;/code&gt; CLI seems to behave predictably, and recovers from errors (for example, interrupted deployments) well. I was slightly confused about the need to use &lt;code&gt;us-east-1&lt;/code&gt; as the region due to the availability of AWS API Gateway (which seems to be available in other regions, too), but I don&amp;rsquo;t want to look under that particular rock yet.&lt;/p&gt;
&lt;p&gt;My goals for today include configuring DynamoDB to add some models to the application. It would be nice to have a serverless database and application running; I may need to look into some plugins such as &lt;a href="https://github.com/99xt/serverless-dynamodb-local"&gt;serverless-dynamodb-local&lt;/a&gt; for development.&lt;/p&gt;</content></entry><entry><title>Serverless Thoughts #1</title><link href="https://www.jonatkinson.co.uk/blog/serverless-thoughts-1/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/serverless-thoughts-1/</id><published>2020-06-22T09:30:00Z</published><updated>2020-06-22T09:30:00Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;m on holiday this week, and I wanted to use the time to really get to grips with serverless models of deployment.&lt;/p&gt;
&lt;p&gt;My background is in Python development (which I want to stick with; this week isn&amp;rsquo;t to learn a new language), and mainly around doing full-stack deployments with Django, and all that entails: a persistent relational database, the MVC pattern, and server-side rendering.&lt;/p&gt;
&lt;p&gt;I have a background knowledge of AWS Lambda (ie. I know what it is, I know what it &lt;em&gt;can&lt;/em&gt; be used for, but I haven&amp;rsquo;t actually used it), and I&amp;rsquo;ve a vague set of notions around things like AWS API Gateway, AWS SAM, etc. In short, I am no expert.&lt;/p&gt;
&lt;p&gt;I have a very simple application which I want to deploy; a very limited set of views, and a very simple data model which would be suitable for a schemaless store like AWS DynamoDB, or AWS ElastiCache/Redis.&lt;/p&gt;
&lt;p&gt;So last night, I began doing some research. In all instances, I was looking to deploy a trivially simple Flask application:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return &amp;quot;Hello, I'm serverless.&amp;quot;

if __name__ == '__main__':
    app.run()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I realise this isn&amp;rsquo;t at all testing any of the factors of the serverless model which are actually interesting; it doesn&amp;rsquo;t react to events, it doesn&amp;rsquo;t mutate the state of another system, it&amp;rsquo;s dull. But I figured that if I can get Flask installed, that means I have a reasonable Python runtime into which I can install Python packages. And that&amp;rsquo;s good enough for me.&lt;/p&gt;
&lt;h3 id="zappa"&gt;Zappa&lt;/h3&gt;
&lt;p&gt;Zappa was my first instinct. I&amp;rsquo;ve heard of Zappa previously; at the time, it made some really interesting claims around easily adapting a WSGI application (such as a Django application) to a serverless environment. I also knew that Zappa was a Python package, so my research was rushed (or non-existent).&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pip install zappa
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;hellip; followed by a deployment with:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;zappa init
zappa deploy
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This was magical. The deployment went well, it appears that Zappa created the dependent resources which I needed in AWS, and I could &lt;code&gt;curl&lt;/code&gt; my Flask application and get the response I expected. Very encouraging. Time to learn more, time to visit the documentation at &lt;code&gt;zappa.io&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Oh.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s down. I checked the Github issues; it seems the project is in a state of mid-handover. The original developer has disappeared, or lost interest. There are maintainers who are looking to create releases, but they don&amp;rsquo;t have control over the project domains (and, sadly, are closing tickets which relate to this: &lt;a href="https://github.com/Miserlou/Zappa/issues/1976"&gt;#1976&lt;/a&gt;). There is a new PyPI release from March 2020, but the project is sending very mixed signals. There&amp;rsquo;s clearly work ongoing to stabilise the repository and the infrastructure, but I&amp;rsquo;m wary of depending on Zappa until these issues are resolved and the code comes into common ownership.&lt;/p&gt;
&lt;h3 id="architect"&gt;Architect&lt;/h3&gt;
&lt;p&gt;Following on from that disappointment, I began researching Architect. This is the open-source element of &lt;a href="https://begin.com/"&gt;begin.com&lt;/a&gt;. Again, setup was near trivial, and so simple to understand that a quick detour into deploying a &lt;code&gt;deno&lt;/code&gt; application took 5 minutes.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;arc init
arc deploy
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;I&amp;rsquo;m especially intrigued by the idea of &lt;a href="https://arc.codes/primitives/tables"&gt;embedding your data models&lt;/a&gt; in the &lt;code&gt;.arc&lt;/code&gt; DSL. This is a fascinating way to get started quickly with an application. It&amp;rsquo;s an approach which might have real merit. It feels clean and minimal and well designed.&lt;/p&gt;
&lt;p&gt;Unfortunately, Architect cannot currently clean up after itself. This is a dealbreaker. It appears there is &lt;a href="https://github.com/architect/destroy"&gt;a package to provide this&lt;/a&gt;, but it wasn&amp;rsquo;t something I wanted to battle with integrating.&lt;/p&gt;
&lt;p&gt;So I had to spend 15 minutes cleaning up the mess in my AWS account, which was annoying.&lt;/p&gt;
&lt;h3 id="serverless"&gt;Serverless&lt;/h3&gt;
&lt;p&gt;Next on my list was &lt;a href="https://github.com/serverless/serverless/"&gt;Serverless&lt;/a&gt;. This seems like a &amp;lsquo;mature&amp;rsquo; offering, supporting a few languages (though I only really care about Python), with a monetisation model based on selling dashboard access to teams. This is fine; the core of the tool is MIT-licensed.&lt;/p&gt;
&lt;p&gt;Again, the setup is simple, with the annoying detour of being asked to register an account. The dashboard seems useful. There&amp;rsquo;s a &lt;code&gt;serverless remove&lt;/code&gt; command to clean up.&lt;/p&gt;
&lt;p&gt;This seems promising.&lt;/p&gt;</content></entry><entry><title>Creating a new email user with `postfix` on Void</title><link href="https://www.jonatkinson.co.uk/blog/new-postfix-user/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/new-postfix-user/</id><published>2020-01-28T12:20:23Z</published><updated>2020-01-28T12:20:23Z</updated><content type="html">&lt;p&gt;This is possibly specific to my own super-minimal &lt;code&gt;postfix&lt;/code&gt; setup on Void, but to add a new email user:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo adduser someone
$ sudo passwd someone
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Amend &lt;code&gt;/etc/postfix/virtual&lt;/code&gt;, as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;someone@domain.com someone
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Regenerate the virtual alias table:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo /usr/sbin/postmap /etc/postfix/virtual
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And finally restart &lt;code&gt;postfix&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo sv restart postfix
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>User services with `runit` on Void Linux</title><link href="https://www.jonatkinson.co.uk/blog/user-service-runit-void/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/user-service-runit-void/</id><published>2019-11-23T16:51:00Z</published><updated>2019-11-23T16:51:00Z</updated><content type="html">&lt;p&gt;Void Linux uses the very minimalist service management tool &lt;code&gt;runit&lt;/code&gt;. The &lt;code&gt;runsvdir&lt;/code&gt; program monitors a folder for service definitions, and then supervises the processes described within. There is a system-wide instance of &lt;code&gt;runsvdir&lt;/code&gt; for system services by default on Void, which is responsible for your &lt;code&gt;tty&lt;/code&gt;s, &lt;code&gt;sshd&lt;/code&gt;, maybe a logger, depending on your configuration.&lt;/p&gt;
&lt;p&gt;There will be times when you want to run a service, or a set of services, as a user, rather than as &lt;code&gt;root&lt;/code&gt;, and to do this you can use nested &lt;code&gt;runsvdir&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Throughout these examples, replace &amp;lsquo;voiduser&amp;rsquo; with your own username.&lt;/p&gt;
&lt;p&gt;First, we will define a system-wide service for running an instance of &lt;code&gt;runsvdir&lt;/code&gt; for the user. This will find its set of services in the folder &lt;code&gt;$HOME/service&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo mkdir -p /etc/sv/voiduser
$ sudo touch /etc/sv/voiduser/run
$ sudo chmod +x /etc/sv/voiduser/run
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Add the following to &lt;code&gt;/etc/sv/voiduser/run&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/sh
UID=$(pwd -P)
UID=${UID##*/}
if [ -d &amp;quot;/home/${UID}/service&amp;quot; ]; then
  exec chpst -u &amp;quot;${UID}&amp;quot; runsvdir /home/${UID}/service
fi
&lt;/code&gt;&lt;/pre&gt;
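&lt;p&gt;As a sketch of what the script above is doing: &lt;code&gt;runit&lt;/code&gt; starts the &lt;code&gt;run&lt;/code&gt; script with the service directory as its working directory, so the POSIX &lt;code&gt;${var##*/}&lt;/code&gt; expansion strips the path down to the directory name, which doubles as the username. The variable names below are mine, for illustration only:&lt;/p&gt;

```shell
# Illustrative only: derive a username from a service directory path
# the same way the run script does.
dir=/etc/sv/voiduser   # what pwd -P returns inside the service dir
user=${dir##*/}        # remove everything up to the last slash
echo $user             # prints: voiduser
```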
&lt;p&gt;Now, start this service:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo ln -s /etc/sv/voiduser /var/service
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, we create a service file for the user. In this example, it&amp;rsquo;ll run &lt;code&gt;syncthing&lt;/code&gt;, but you can adapt this for any given service:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ mkdir -p $HOME/service/syncthing
$ touch $HOME/service/syncthing/run
$ chmod +x $HOME/service/syncthing/run
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then the contents of the &lt;code&gt;run&lt;/code&gt; file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/sh
export HOME=/home/voiduser/
exec 2&amp;gt;&amp;amp;1
exec /usr/bin/syncthing
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&amp;rsquo;s it. Now, your system-wide &lt;code&gt;runit&lt;/code&gt; will start your user-level &lt;code&gt;runit&lt;/code&gt;, and it&amp;rsquo;ll run the service. You can check your process tree and see &lt;code&gt;syncthing&lt;/code&gt; running as your own user.&lt;/p&gt;</content></entry><entry><title>Thinkpad X230/X220 keyboard swap</title><link href="https://www.jonatkinson.co.uk/blog/thinkpad-x230-x220-keyboard/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/thinkpad-x230-x220-keyboard/</id><published>2019-11-15T11:15:39Z</published><updated>2019-11-15T11:15:39Z</updated><content type="html">&lt;p&gt;I recently replaced the keyboard in my Thinkpad X230 with the keyboard from an X220. There are a few reasons for this; while the chiclet keyboard on the X230 is decent (I&amp;rsquo;ve used it for about a year without any real complaints), the keyboard from the X220 has a much more typist-friendly layout, with a huge Return key, a more sensible layout for Home/End/PgUp/PgDown, and a huge Escape key which is wonderful for Vim users. There&amp;rsquo;s also more key travel and very nice audible feedback with this keyboard, which appeals to my preferences. The Thinkpad keyboard part also includes the trackpoint and the mouse buttons (the touchpad is part of the &amp;lsquo;chin&amp;rsquo; section, but I have it disabled in favour of the trackpoint anyway). I think that the trackpoint sits slightly higher and is more pronounced, and that the mouse buttons have slightly more travel and click to them; this could also all be just because I&amp;rsquo;m using a new part without the cumulative wear my old one had.&lt;/p&gt;
&lt;h3 id="hardware"&gt;Hardware&lt;/h3&gt;
&lt;p&gt;The process for swapping the keyboard is fairly simple. First, you need to remove the existing keyboard:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Begin by removing the battery.&lt;/li&gt;
&lt;li&gt;On the underside of the laptop, there are two screws which hold the keyboard in place; they&amp;rsquo;re marked below:&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&amp;lt; image &amp;gt;&lt;/p&gt;
&lt;ol start="3"&gt;
&lt;li&gt;Remove these screws, then turn the laptop back over, and open the lid.&lt;/li&gt;
&lt;li&gt;Gently pry the keyboard up from its front edge. There are four retaining tabs on the X230&amp;rsquo;s keyboard: one on each side of the keyboard (near the left Control key and below the arrow keys), and one on each side of the space bar.&lt;/li&gt;
&lt;li&gt;Once the keyboard has popped up, slide it forward gently to expose the ribbon cable.&lt;/li&gt;
&lt;li&gt;Gently pry the ribbon connector upwards from its seat.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now you have the keyboard removed, you need to slightly modify the X220 keyboard to fit.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You&amp;rsquo;ll notice that while the old keyboard had four retaining tabs on its front edge, the X220 keyboard has five, larger tabs. From left to right, we&amp;rsquo;ll number them 1, 2, 3, 4 and 5.&lt;/li&gt;
&lt;li&gt;You need to entirely remove tab 3 (which is adjacent to the middle trackpad button). You can do a neat job of this with a file and a sharp knife. I didn&amp;rsquo;t have these tools handy, however, so I used a set of fingernail clippers (I know, I&amp;rsquo;m ashamed). The tabs are made of a sandwich of soft metal and plastic, so very little force was needed.&lt;/li&gt;
&lt;li&gt;You need to re-shape tabs 1, 2, 4 and 5 to flatten them. Again, I used the clippers which I had to hand, and snipped the edges of the tabs, then bent the metal to shape and trimmed it as necessary to remove the bezel and create flat metal tabs.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;At this point, you&amp;rsquo;re ready to reconnect the keyboard:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Connect the ribbon cable, slide the keyboard backwards into the tray, and then gently push down on the front of the keyboard. The modified retaining tabs will pop into place with a little push. The keyboard I had was an excellent fit, and even before tightening the screws it felt solid.&lt;/li&gt;
&lt;li&gt;Flip the laptop over, and replace the two screws. Be careful with these screws, they&amp;rsquo;re very soft metal and easy to strip.&lt;/li&gt;
&lt;li&gt;Replace the battery, and power on the laptop.&lt;/li&gt;
&lt;/ul&gt;</content></entry><entry><title>Bringing up KVM on Arch</title><link href="https://www.jonatkinson.co.uk/blog/kvm-on-arch/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/kvm-on-arch/</id><published>2019-11-08T19:19:42Z</published><updated>2019-11-08T19:19:42Z</updated><content type="html">&lt;p&gt;It&amp;rsquo;s reasonably simple to bring up a new KVM system on Arch, assuming your hardware supports VT-x or AMD-V (and almost everything does).&lt;/p&gt;
&lt;p&gt;First, check that you have the capabilities needed. The first command checks that the CPU supports virtualisation, and the second whether your kernel has the appropriate modules available:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ lscpu | grep &amp;quot;Virtualization&amp;quot;
$ zgrep CONFIG_KVM /proc/config.gz
&lt;/code&gt;&lt;/pre&gt;
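&lt;p&gt;If the kernel has KVM support, the second command should print lines broadly like the following (&lt;code&gt;=y&lt;/code&gt; or &lt;code&gt;=m&lt;/code&gt; are both fine; the exact set of symbols varies by kernel build):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;CONFIG_KVM=m
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
&lt;/code&gt;&lt;/pre&gt;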
&lt;p&gt;Assuming there are no surprises, now install &lt;code&gt;libvirt&lt;/code&gt; and a few helpers. We use &lt;code&gt;qemu&lt;/code&gt; as it provides a lot of useful utilities for dealing with disk images (among others), and &lt;code&gt;virt-install&lt;/code&gt; is a useful helper script for quickly setting up new virtual machines.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo pacman -S libvirt dnsmasq qemu virt-install
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Start the service:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo systemctl enable libvirtd.service
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now we use the &lt;code&gt;virsh&lt;/code&gt; client to connect to the KVM daemon:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ virsh -c qemu:///session
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, we need to create a storage pool, as a precursor to creating a storage volume:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;virsh # pool-list --all
Name State Autostart
---------------------------
virsh # pool-define-as main dir - - - - /home/jonathan/.local/libvirt/images
Pool main defined
virsh # pool-build main
Pool main built
virsh # pool-start main
Pool main started
virsh # pool-autostart main
Pool main marked as autostarted
virsh # pool-list --all
Name State Autostart
----------------------------
main active yes
&lt;/code&gt;&lt;/pre&gt;
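&lt;p&gt;The positional dashes in &lt;code&gt;pool-define-as&lt;/code&gt; are placeholders for source options which a simple directory pool doesn&amp;rsquo;t need. You can confirm what was defined with &lt;code&gt;pool-dumpxml main&lt;/code&gt;, which should produce something roughly like this (trimmed; exact fields vary by libvirt version):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;pool type='dir'&amp;gt;
  &amp;lt;name&amp;gt;main&amp;lt;/name&amp;gt;
  &amp;lt;target&amp;gt;
    &amp;lt;path&amp;gt;/home/jonathan/.local/libvirt/images&amp;lt;/path&amp;gt;
  &amp;lt;/target&amp;gt;
&amp;lt;/pool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;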
&lt;p&gt;Now, create a storage volume (in this example, I&amp;rsquo;m calling the volume &amp;lsquo;storvol&amp;rsquo;, but I&amp;rsquo;d adapt this to your VM&amp;rsquo;s role, so &amp;lsquo;mail&amp;rsquo; or &amp;lsquo;www&amp;rsquo; or similar):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;virsh # vol-create-as main storvol 20GiB --format qcow2
Vol storvol created
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, we can create a new VM (or &amp;lsquo;domain&amp;rsquo; in libvirt parlance). First, exit &lt;code&gt;virsh&lt;/code&gt; with &lt;code&gt;^D&lt;/code&gt;. Then:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ virt-install --name yourvmname --memory 2048 --vcpus=2 --cpu host --cdrom=/home/user/downloads/archlinux-2019.08.01-x86_64.iso --disk size=10,format=qcow2 --network user --virt-type kvm --console pty,target_type=serial
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>TabNine on FreeBSD</title><link href="https://www.jonatkinson.co.uk/blog/tabnine-on-freebsd/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/tabnine-on-freebsd/</id><published>2019-11-05T12:24:36Z</published><updated>2019-11-05T12:24:36Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;m a big fan of &lt;a href="https://tabnine.com"&gt;TabNine&lt;/a&gt;, a machine-learning powered omni-completer for pretty much any language. It&amp;rsquo;s a hassle to run with FreeBSD, though. These instructions cover running TabNine with Neovim, but you can ignore the Vim-specific parts if you just want to run the &lt;code&gt;TabNine&lt;/code&gt; binary on FreeBSD.&lt;/p&gt;
&lt;p&gt;First, install the plugin in your Neovim configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Plug 'zxqfl/tabnine-vim'
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Save your configuration, then &lt;code&gt;:source %&lt;/code&gt; and &lt;code&gt;:PlugInstall&lt;/code&gt;. Restart &lt;code&gt;nvim&lt;/code&gt;, and after a short compilation delay, you&amp;rsquo;ll see that &lt;code&gt;TabNine&lt;/code&gt; fails to load. Time to fix this. First, we need &lt;code&gt;CMake&lt;/code&gt;, which the TabNine install script requires. Then we need to enable the Linux binary compatibility layer for FreeBSD:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# For the installer
$ sudo pkg install cmake
# For Linux binary compatibility
$ sudo kldload linux64
$ sudo kldstat
$ sudo pkg install emulators/linux_base-c7
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, we can manually run the installer. I only have &lt;code&gt;python3.6&lt;/code&gt; installed on my FreeBSD system, yours may differ:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ cd ~/.config/nvim/plugins/tabnine-vim/
$ python3.6 install.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once this has completed, we need to manually &amp;lsquo;brand&amp;rsquo; the binary as a linux binary so that the operating system knows to use the Linux compatibility layer:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ cd ~/.config/nvim/plugins/tabnine-vim/binaries/2.1.11/x86_64-unknown-linux-musl/
$ brandelf -t Linux ./TabNine
$ ./TabNine --version
TabNine 2.1.11 (x86_64-unknown-linux-musl)
Jacob Jackson (jacob@tabnine.com)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now you can start your editor/IDE normally, and the &lt;code&gt;TabNine&lt;/code&gt; binary should just work.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve not upgraded my TabNine since I wrote these instructions, but I expect it will be necessary to re-brand the binary each time it is rebuilt.&lt;/p&gt;</content></entry><entry><title>The simplest pulseaudio installation on Arch</title><link href="https://www.jonatkinson.co.uk/blog/simple-pulseaudio/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/simple-pulseaudio/</id><published>2019-08-20T10:44:11Z</published><updated>2019-08-20T10:44:11Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;ve previously ended up very confused by &lt;code&gt;pulseaudio&lt;/code&gt;, with over-complicated setups using a global daemon, when really the standard Arch packages are very well thought out and don&amp;rsquo;t require much setup at all.&lt;/p&gt;
&lt;p&gt;For a basic &lt;code&gt;pulseaudio&lt;/code&gt; installation on a clean system, this is all you need:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ yay -S pulseaudio pulseaudio-alsa ncpamixer
$ systemctl enable --user pulseaudio.socket
$ systemctl start --user pulseaudio.socket
$ ncpamixer # Ensure your audio device isn't muted or anything silly
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&amp;rsquo;s it. This works well for a simple setup with a single USB soundcard, and the defaults are sensible and automatically detected.&lt;/p&gt;</content></entry><entry><title>Distributed compilation with distcc on Arch</title><link href="https://www.jonatkinson.co.uk/blog/distcc-arch/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/distcc-arch/</id><published>2019-06-13T00:00:00Z</published><updated>2019-06-13T00:00:00Z</updated><content type="html">&lt;p&gt;Sometimes, when I&amp;rsquo;m working, I&amp;rsquo;ll prefer to sit on the couch with my laptop, which is a not-very-powerful i5/8GB Thinkpad. Sometimes I want to install some packages from source, or compile some software on my laptop. It&amp;rsquo;s far from ideal for this job, mainly because it takes some time, and the thermal changes in the laptop make it kind of uncomfortable to actually perch it on my lap.&lt;/p&gt;
&lt;p&gt;I also have a very powerful desktop machine on my network (i9/64GB), and another slightly less powerful NAS (i5/8GB). Naturally, I&amp;rsquo;d like to delegate as much processing to these machines as possible when compiling. Fortunately, &lt;code&gt;distcc&lt;/code&gt; makes this nearly trivial.&lt;/p&gt;
&lt;p&gt;Before we begin, it&amp;rsquo;s worth noting that my network is &lt;code&gt;ipv4&lt;/code&gt; only and everything lives on &lt;code&gt;192.168.1.X&lt;/code&gt;. All the machines involved run Arch.&lt;/p&gt;
&lt;p&gt;First, on all your hosts, install &lt;code&gt;distcc&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo pacman -S distcc # on your local machine
$ ssh desktop.local sudo pacman -S distcc # repeat for each remote machine.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, on each machine in your cluster, amend the &lt;code&gt;distccd&lt;/code&gt; configuration file to allow connections from your network. You should end up with something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ cat /etc/conf.d/distccd
#
# Parameters to be passed to distccd
#
# You must explicitly add IPs (or subnets) that are allowed to connect,
# using the --allow switch. See the distccd manpage for more info.
#
DISTCC_ARGS=&amp;quot;--allow 192.168.1.0/24 --log-level error --log-file /tmp/distccd.log&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, on each machine, enable the service and start it:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo systemctl enable --now distccd # local
$ ssh desktop.local sudo systemctl enable --now distccd # repeat for remotes.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now your cluster is ready, but you need to modify your &lt;code&gt;/etc/makepkg.conf&lt;/code&gt; to tell &lt;code&gt;makepkg&lt;/code&gt; to use the cluster. First, unbang the &lt;code&gt;distcc&lt;/code&gt; in &lt;code&gt;BUILDENV&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;BUILDENV=(distcc color !ccache check !sign)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then, enumerate your hosts, with the number of cores you wish to make available on each:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;DISTCC_HOSTS=&amp;quot;192.168.1.100/10 192.168.1.101/4&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, change your &lt;code&gt;MAKEFLAGS&lt;/code&gt; to use your total number of cores. In this case, I have 10 cores on &lt;code&gt;.100&lt;/code&gt;, 4 cores on &lt;code&gt;.101&lt;/code&gt;, and 4 cores on the local laptop for a total of 18:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;MAKEFLAGS=&amp;quot;-j18&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
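&lt;p&gt;If you change the cluster later, the &lt;code&gt;-j&lt;/code&gt; value can be recomputed from the &lt;code&gt;DISTCC_HOSTS&lt;/code&gt; string rather than by hand; a small sketch, assuming the &lt;code&gt;host/N&lt;/code&gt; convention used above (the variable names are mine):&lt;/p&gt;

```shell
# Sum the /N core counts in a DISTCC_HOSTS-style string, plus the
# local core count, to produce a MAKEFLAGS -j value.
hosts='192.168.1.100/10 192.168.1.101/4'
local_cores=4
total=$local_cores
for h in $hosts; do
    total=$((total + ${h##*/}))   # strip the host/ prefix, keep N
done
echo -j$total   # prints: -j18
```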
&lt;p&gt;That&amp;rsquo;s it. When you compile anything with &lt;code&gt;makepkg&lt;/code&gt;, it&amp;rsquo;ll spread the compilation load around your hosts. If you want to check on the status of a compilation job, you can run the useful &lt;code&gt;distccmon-text&lt;/code&gt; to get streaming updates from the job distributor.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ distccmon-text 2 # change this number for faster/slower updates
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>Syncing Gmail with mbsync</title><link href="https://www.jonatkinson.co.uk/blog/gmail-mbsync/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/gmail-mbsync/</id><published>2019-05-14T12:18:34Z</published><updated>2019-05-14T12:18:34Z</updated><content type="html">&lt;p&gt;I&amp;rsquo;ve decided that I want to try to reduce my dependence on Google&amp;rsquo;s services. A large part of my Google footprint is Gmail, which I have used for about 15 years.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve not entirely planned through my migration yet, but I think I want to use email in a more transactional manner, as was intended; I want to retrieve emails from a server, store and back them up locally on hardware which I control, and then send via SMTP. I really don&amp;rsquo;t find having access to my email via a browser that convenient as it&amp;rsquo;s rare I don&amp;rsquo;t have a laptop or a phone from which I can SSH into my own computers. Push notifications for email are an annoyance; I&amp;rsquo;d rather interact with my email on my own schedule in batches.&lt;/p&gt;
&lt;p&gt;Whatever the overall plan, the first step is to begin syncing a considerable volume of email from Google&amp;rsquo;s servers to my own computer.&lt;/p&gt;
&lt;p&gt;For this, I&amp;rsquo;m using &lt;a href="http://isync.sourceforge.net/"&gt;isync&lt;/a&gt;, which can reliably sync email from GMail&amp;rsquo;s IMAP servers to a local Maildir, in one (or both) directions. Note that while the package is called &lt;code&gt;isync&lt;/code&gt;, the binary I&amp;rsquo;m using is called &lt;code&gt;mbsync&lt;/code&gt;, and the configuration file is &lt;code&gt;.mbsyncrc&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;That &lt;code&gt;isync&lt;/code&gt; can sync mail in both directions is important as I want to initially have my local setup mirror my current GMail setup, with access via either method, and to later transition to only downloading from GMail.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s also necessary to set up an &amp;lsquo;App Password&amp;rsquo; in GMail, which will allow IMAP access to your mailbox outside of the usual Google Account authentication flow. Once that password has been created, I suggest you encrypt it locally and extract it via gpg (as the configuration below demonstrates).&lt;/p&gt;
&lt;p&gt;My configuration file (&lt;code&gt;~/.mbsyncrc&lt;/code&gt;) is as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;IMAPAccount personal
Host imap.gmail.com
User jon@jonatkinson.co.uk
PassCmd &amp;quot;&amp;lt;run gpg here&amp;gt;&amp;quot;
SSLType IMAPS
CertificateFile /etc/ssl/certs/ca-certificates.crt
IMAPStore personal-remote
Account personal
MaildirStore personal-local
Subfolders Verbatim
Path ~/mail/personal/
Inbox ~/mail/personal/inbox
Channel personal-default
Master :personal-remote:
Slave :personal-local:
Patterns * ![Gmail]*
Create Both
SyncState *
Sync All
Channel personal-sent
Master :personal-remote:&amp;quot;[Gmail]/Sent Mail&amp;quot;
Slave :personal-local:sent
Create Slave
Sync Pull
Channel personal-trash
Master :personal-remote:&amp;quot;[Gmail]/Trash&amp;quot;
Slave :personal-local:trash
Create Slave
Sync Pull
# Get all the channels together into a group.
Group personal
Channel personal-default
Channel personal-sent
Channel personal-trash
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;isync&lt;/code&gt; configuration files are in an odd, stanza-based format. The concepts are based around groups and channels. A channel is a bidirectional mapping between two mail stores (in this case the remote IMAP server and the local Maildir, but this could just as easily be two remote IMAP services). From top to bottom, that configuration file does the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Define the &lt;code&gt;IMAPAccount&lt;/code&gt;, and its credentials.&lt;/li&gt;
&lt;li&gt;Setup SSL&lt;/li&gt;
&lt;li&gt;Setup the &lt;code&gt;IMAPStore&lt;/code&gt; on the account.&lt;/li&gt;
&lt;li&gt;Setup the local mailbox.&lt;/li&gt;
&lt;li&gt;Define a channel to map the remote mail to the local mailbox.&lt;/li&gt;
&lt;li&gt;Define a channel for the &amp;lsquo;Sent&amp;rsquo; folder.&lt;/li&gt;
&lt;li&gt;Define a channel for the &amp;lsquo;Trash&amp;rsquo; folder.&lt;/li&gt;
&lt;li&gt;Finally, setup a group called &amp;lsquo;personal&amp;rsquo; which is made up of those channels.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The first sync will take a long time (for my decade and a half of email, to my home server over a 75mb link it was around a day, but that also accounts for some gentle throttling on the remote IMAP service). I ran this in a &lt;code&gt;screen&lt;/code&gt; session:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ mbsync -Dmn personal
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Once the initial sync has completed, subsequent activity will be much quicker. I use the following &lt;code&gt;systemd&lt;/code&gt; timer and service to sync every five minutes:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ cat ~/.config/systemd/user/mbsync.timer
[Unit]
Description=Mailbox synchronization timer
[Timer]
OnBootSec=2m
OnUnitActiveSec=5m
Unit=mbsync.service
[Install]
WantedBy=timers.target
$ cat ~/.config/systemd/user/mbsync.service
[Unit]
Description=Mailbox synchronization service
[Service]
Type=oneshot
ExecStart=/usr/bin/mbsync -Va
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>Thinkpad x230 fingerprint reader on Arch Linux</title><link href="https://www.jonatkinson.co.uk/blog/thinkpad-x230-fingerprint-reader/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/thinkpad-x230-fingerprint-reader/</id><published>2019-05-11T11:15:39Z</published><updated>2019-05-11T11:15:39Z</updated><content type="html">&lt;p&gt;I have a Lenovo Thinkpad x230, with an integrated fingerprint reader. The system runs Arch. This describes how to identify and register your fingerprint, and then use it to authenticate your &lt;code&gt;sudo&lt;/code&gt; actions via PAM.&lt;/p&gt;
&lt;p&gt;First, identify the model of fingerprint reader which you have.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo lsusb
Bus 003 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 005: ID 5986:02d2 Acer, Inc
Bus 001 Device 004: ID 0a5c:21e6 Broadcom Corp. BCM20702 Bluetooth 4.0 [ThinkPad]
*Bus 001 Device 003: ID 147e:2020 Upek TouchChip Fingerprint Coprocessor (WBF advanced mode)*
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This device is supported by the &lt;code&gt;fprintd&lt;/code&gt; package, so we will install it with the package manager (you can probably substitute &lt;code&gt;pacman&lt;/code&gt; for &lt;code&gt;yay&lt;/code&gt; here if you&amp;rsquo;re not a &lt;code&gt;yay&lt;/code&gt; user):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ yay -S fprintd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now it&amp;rsquo;s time to &amp;lsquo;enroll&amp;rsquo; your fingerprint. Your fingerprint data will be stored in &lt;code&gt;/var/lib/fprintd/&lt;/code&gt;. This process will require you to swipe your right index finger five times. You can see from the output below that I mis-swiped once. Just keep swiping until the process is complete:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ fprintd-enroll
Using device /net/reactivated/Fprint/Device/0
Enrolling right-index-finger finger.
Enroll result: enroll-stage-passed
Enroll result: enroll-stage-passed
Enroll result: enroll-swipe-too-short
Enroll result: enroll-stage-passed
Enroll result: enroll-stage-passed
Enroll result: enroll-completed
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, we will update two files, &lt;code&gt;/etc/pam.d/sudo&lt;/code&gt; and &lt;code&gt;/etc/pam.d/su&lt;/code&gt; to enable the &lt;code&gt;fprintd&lt;/code&gt; backend. Add the following line to both files:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;auth sufficient pam_fprintd.so
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As a complete example, my &lt;code&gt;/etc/pam.d/su&lt;/code&gt; file looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#%PAM-1.0
auth sufficient pam_rootok.so
auth sufficient pam_fprintd.so
auth required pam_unix.so
account required pam_unix.so
session required pam_unix.so
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Save these files, and then you can authenticate with your fingerprint:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo echo &amp;quot;Hello, world!&amp;quot;
Swipe your finger across the fingerprint reader
Hello, world!
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>Installing Arch On a Vultr VPS</title><link href="https://www.jonatkinson.co.uk/blog/installing-arch-on-a-vultr-vps/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/installing-arch-on-a-vultr-vps/</id><published>2019-05-06T00:00:00Z</published><updated>2019-05-06T00:00:00Z</updated><content type="html">&lt;h3 id="abstract"&gt;Abstract&lt;/h3&gt;
&lt;p&gt;I have been happy using Vultr as my VPS provider for a few months now. Vultr offer a very flexible system of ISO upload before the first boot, which means essentially unlimited options when choosing the operating system for a new VPS (there is also a large library of ISO images to choose from; only occasionally have I actually uploaded my own).&lt;/p&gt;
&lt;p&gt;However, booting a fresh ISO on a VPS can be a challenge; the hardware and devices exposed to the VPS are usually unfamiliar. These are my steps for installing Arch after the VPS has been provisioned by Vultr, and you have a working VNC connection to the console.&lt;/p&gt;
&lt;p&gt;This assumes a familiarity with Arch. Don&amp;rsquo;t forget the &lt;a href="https://wiki.archlinux.org/index.php/installation_guide"&gt;installation guide&lt;/a&gt; is the most comprehensive resource if you get stuck.&lt;/p&gt;
&lt;h3 id="general-checks"&gt;General Checks&lt;/h3&gt;
&lt;p&gt;Check that networking has come up correctly in the live ISO environment:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# ping -c 1 google.com
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Update the system clock.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# timedatectl set-ntp true
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Update the Arch keyring.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# pacman -Sy archlinux-keyring
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="disk-partitioning"&gt;Disk Partitioning&lt;/h3&gt;
&lt;p&gt;Check the available block devices. Your disk will be available as &lt;code&gt;vda&lt;/code&gt; or similar, but adapt the following as required. Once you have identified the disk, use &lt;code&gt;fdisk&lt;/code&gt; to partition.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# lsblk
# fdisk /dev/vda
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Create a new, full-disk partition by pressing &lt;code&gt;n&lt;/code&gt;, and using the default values for each question. Your new partition will be available as &lt;code&gt;/dev/vda1&lt;/code&gt; once complete.&lt;/p&gt;
&lt;p&gt;Write the partition table with &lt;code&gt;w&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Finally, we need to create a new filesystem on the disk and mount it. I use &lt;code&gt;ext4&lt;/code&gt;, because I&amp;rsquo;m unfamiliar with &lt;code&gt;btrfs&lt;/code&gt;. Your research might indicate that &lt;code&gt;btrfs&lt;/code&gt; is more appropriate for your needs.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# mkfs.ext4 /dev/vda1
# mount /dev/vda1 /mnt
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="bootstrapping"&gt;Bootstrapping&lt;/h3&gt;
&lt;p&gt;Now it&amp;rsquo;s time to bootstrap the system with the &lt;code&gt;base&lt;/code&gt; metapackage. This may take a while. If you plan on compiling a lot of code, it might be worth also installing &lt;code&gt;base-devel&lt;/code&gt; at this point.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# pacstrap /mnt base
# pacstrap /mnt base-devel
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Soon, we are going to chroot into the new system. First, we need to create an &lt;code&gt;fstab&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# genfstab /mnt &amp;gt;&amp;gt; /mnt/etc/fstab
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, we will switch over to the new system:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# arch-chroot /mnt
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="basic-configuration"&gt;Basic Configuration&lt;/h3&gt;
&lt;p&gt;Now we are chrooted into the system, set the timezone and sync the clock:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# ls /usr/share/zoneinfo/Europe/
# ln -sf /usr/share/zoneinfo/Europe/London /etc/localtime
# hwclock --systohc
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Set the &lt;code&gt;root&lt;/code&gt; password:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# passwd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Set the system locale to UTF-8. Edit &lt;code&gt;/etc/locale.gen&lt;/code&gt; and uncomment &lt;code&gt;en_GB.UTF-8 UTF-8&lt;/code&gt;, then generate the locales and set the default:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# vi /etc/locale.gen
# locale-gen
# echo 'LANG=en_GB.UTF-8' &amp;gt; /etc/locale.conf
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="networking"&gt;Networking&lt;/h3&gt;
&lt;p&gt;Now, find the currently active network adaptor. This is usually called &lt;code&gt;ens3&lt;/code&gt; or similar.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# ip addr
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Write the configuration file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# vi /etc/systemd/network/ens3.network
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The content should be as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[Match]
Name=ens3
[Network]
DHCP=ipv4
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Enable DHCP, DNS and setup &lt;code&gt;resolv.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# systemctl enable systemd-networkd
# systemctl enable systemd-resolved
# ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Set the system hostname&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# echo 'server.domain.com' &amp;gt; /etc/hostname
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="install-the-bootloader"&gt;Install the bootloader&lt;/h3&gt;
&lt;p&gt;Install &lt;code&gt;grub&lt;/code&gt;, and write a configuration file.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# pacman -S grub
# grub-install --target=i386-pc /dev/vda
# grub-mkconfig -o /boot/grub/grub.cfg
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="reboot"&gt;Reboot&lt;/h3&gt;
&lt;p&gt;Now, exit the chroot, and power off the system.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# exit
# systemctl poweroff
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, in the Vultr control panel, unmount the ISO, and boot the system again. Reconnect the VNC console.&lt;/p&gt;
&lt;h3 id="post-boot-setup"&gt;Post-Boot Setup&lt;/h3&gt;
&lt;p&gt;Assuming the system has booted successfully (if it hasn&amp;rsquo;t, then re-insert the ISO, remount &lt;code&gt;/dev/vda1&lt;/code&gt;, and reactivate the chroot to investigate), then login as root.&lt;/p&gt;
&lt;p&gt;Next, create a new user, and setup &lt;code&gt;sudo&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# useradd --create-home &amp;lt;yourusername&amp;gt;
# passwd &amp;lt;yourusername&amp;gt;
# pacman -S sudo
# visudo
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Find the appropriate section of &lt;code&gt;/etc/sudoers&lt;/code&gt;, and add something like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;yourusername&amp;gt; ALL=(ALL) ALL
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, logout as root and login again as your newly created user.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# exit
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Next, install SSH and edit the configuration file to enable a port. Typically this will be port 22.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo pacman -S openssh
$ sudo vi /etc/ssh/sshd_config
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Finally, enable SSH:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ sudo systemctl enable --now sshd
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should now be able to login via SSH (&lt;code&gt;root&lt;/code&gt; is denied by default). You can continue your onward configuration from there.&lt;/p&gt;</content></entry><entry><title>Automatically --set-upstream when pushing a new branch</title><link href="https://www.jonatkinson.co.uk/blog/automatically-set-upstream-when-pushing/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/automatically-set-upstream-when-pushing/</id><published>2019-04-08T17:46:06Z</published><updated>2019-04-08T17:46:06Z</updated><content type="html">&lt;p&gt;I frequently open a new branch with &lt;code&gt;git flow&lt;/code&gt;, and the first time I push to Github, I see the following message:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;fatal: The current branch feature/whatever has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin feature/whatever
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It seems redundant that &lt;code&gt;git&lt;/code&gt; offers me a solution and then makes me type it myself (especially considering the solution, to use &lt;code&gt;--set-upstream&lt;/code&gt;, is actually deprecated). This finally annoyed me enough to figure out how to remedy this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ git config --global push.default current
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now to push to a new branch for the first time, just do this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ git push -u
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>Remembering Rob Edwards</title><link href="https://www.jonatkinson.co.uk/blog/remembering-rob-edwards/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/remembering-rob-edwards/</id><published>2019-02-07T15:56:03Z</published><updated>2019-02-07T15:56:03Z</updated><content type="html">&lt;p&gt;This has been a very difficult year.&lt;/p&gt;
&lt;p&gt;This year I lost my best friend and business partner. Rob and I had been friends for fifteen years, my entire adult life. We conceived and ran several failed businesses, and one successful one, together. Apart from my wife, Rob is the person who knew me best in the world. We did a great deal of growing up together. I miss him terribly, and even now months on from his death, I think about him many times every day.&lt;/p&gt;
&lt;p&gt;I don&amp;rsquo;t want to dwell on how I feel about this. I&amp;rsquo;m sure over time these feelings will lift. But I have a few stories about Rob which I want to tell.&lt;/p&gt;
&lt;p&gt;Very soon after Rob and I first met at university, I transferred to the same computer science course as him. We were living down the hall from one another, taking the same modules and following the same schedule. After a few weeks, when our first &amp;lsquo;real&amp;rsquo; assignment was due, we decided to share the burden. I don&amp;rsquo;t remember anything about the work we were asked to do; it was probably implementing some simple algorithm in Java. We did very little work that night. We sat at Rob&amp;rsquo;s desk, in the glow of our laptops, and talked; first feeling each other out, then sharing jokes, then testing each other&amp;rsquo;s boundaries, and finally talking about the things we were passionate about. I was 18 years old, and my passions were very nerdy; I probably talked about Linux and Free Software, and I remember Rob talking about the music which he loved. Most importantly I remember the feeling of &amp;lsquo;clicking&amp;rsquo; with someone else. Rob gave me space and time to explain what I loved, and joined me in my enthusiasm for technology and the potential of the connected world which was about to become Web 2.0. I remember talking about IRC, about startups, about hacking, robotics, drugs, and the politics which defined us at that age. We laid down some important foundations that night.&lt;/p&gt;
&lt;p&gt;Some years later, Rob and I were working on our second business. We had a nice idea, but very little idea of how to execute on the vision. To demonstrate our inexperience, we were focusing on completely the wrong thing, and we prioritised our website rather than any working technology. By this time we were both web developers; I had been part of a couple of digital agencies, and Rob had run a small agency for a few years at the time. We dragged each other to the very peak of Mt. Dunning-Kruger and built a hopelessly over-engineered website. We were using CSS 2.0, AJAX (back when we just called it XMLHttpRequest). We were operating in a hostile environment of IE4/5, and tensions were high. Optimistic, I&amp;rsquo;d pre-announced our website launch later that day, to our vanishingly small audience on the IRC channels we both frequented, and a nascent social network at the time called twttr. Rob, less optimistic, was worried that making a promise to launch without leaving enough time for testing the site might cause us embarrassment, and he picked up the phone and told me this in no uncertain terms. Somehow this argument about our deadline devolved into personal attacks and insults. We screamed at each other, and a lot of very unpleasant things were said. Fifteen minutes later, as we simultaneously called each other to make up, we both left long, heartfelt apologies on each other&amp;rsquo;s voicemail systems. I know I kept a copy of that voicemail for quite a few years, and I believe Rob did the same. We used to play them to each other when one of us was being unreasonable.&lt;/p&gt;
&lt;p&gt;Further years later, and we are on a roof-top in Manchester, having our photo taken. This is no longer just a story about Rob and me. This is a professional photo-shoot, with us both, but also our staff and business partners, the company we had all built together. There are about twenty of us, and we line up and smile. I remember how incredibly proud Rob and I both felt that day. We were proud that so many people believed in us, enough to join us and work with us as we tried to make a mark.&lt;/p&gt;
&lt;p&gt;I remember a few weeks before Rob died, he called me for the last time. A few months before, Rob had left the company, and a lot of anger had been spent, a lot of lines had been crossed. Rob called me because he heard that my grandfather was dying, and Rob knew how much this would be upsetting me. It was a nice gesture, and it started the conversation between us again. We spoke that night for three or four hours, on speakerphone. We spoke about a lot of things, not least our failings which had led us to this unhappy point. We remembered and shared old jokes. We talked about our old friends. As the conversation continued, we both opened up to each other, much like that first evening we spent at university solving stupid homework problems. As the conversation drew to a close, Rob became upset, incoherent. It was rare for Rob to become flustered like this, he was usually good at expressing himself. He told me I had always been kind. Kind to my family, and the people who we worked with. He finished with &amp;ldquo;you were always too kind to me&amp;rdquo;, which I still don&amp;rsquo;t entirely understand.&lt;/p&gt;
&lt;p&gt;I wish he was still around and I could ask him what he meant by that, but I suppose that&amp;rsquo;s just how it ends. Our last conversation was rather like Rob himself; fun, sensitive, chaotic, and a little bit enigmatic.&lt;/p&gt;
&lt;p&gt;Rob Edwards. 1984-2018.&lt;/p&gt;</content></entry><entry><title>QBasic Development</title><link href="https://www.jonatkinson.co.uk/blog/qbasic/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/qbasic/</id><published>2019-01-27T20:08:53Z</published><updated>2019-01-27T20:08:53Z</updated><content type="html">&lt;h3 id="introduction"&gt;Introduction&lt;/h3&gt;
&lt;p&gt;I wanted to put together a decent build system for QBasic programs.&lt;/p&gt;
&lt;p&gt;My goal was to be able to use modern tools such as VSCode and git to edit and manage my code from an OSX host, and to easily run on a representative system of the MS-DOS era, which in this case is an i386 system, with 8MB of RAM, running MS-DOS 6.22.&lt;/p&gt;
&lt;p&gt;Currently I&amp;rsquo;m doing this using virtualized hardware (using the incredibly versatile &lt;code&gt;qemu&lt;/code&gt;), but I plan on modifying this in future to send the application to real hardware.&lt;/p&gt;
&lt;p&gt;My strategy is similar to any other inject-and-run type system; it&amp;rsquo;s heavily inspired by how most CI pipelines work:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Build a predictable base image which can be quickly replicated.&lt;/li&gt;
&lt;li&gt;Build a system to boot that environment and inject my latest code.&lt;/li&gt;
&lt;li&gt;Have the code compiled (or interpreted) inside that environment and be able to quickly interact with the results.&lt;/li&gt;
&lt;/ul&gt;
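&lt;p&gt;Sketched as a script, the loop looks something like this (the paths and the &lt;code&gt;.BAS&lt;/code&gt; filenames are illustrative; the &lt;code&gt;hdiutil&lt;/code&gt; and &lt;code&gt;qemu&lt;/code&gt; invocations are the ones used in the steps below):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/bin/sh
# Inject-and-run: mount the image, copy in the sources, boot the VM.
hdiutil attach -imagekey diskimage-class=CRawDiskImage -mountpoint ./live/ dos.img
cp src/*.BAS live/
hdiutil detach live
qemu-system-i386 -hda dos.img -m 8
&lt;/code&gt;&lt;/pre&gt;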
&lt;h3 id="requirements-and-workspace-layout"&gt;Requirements and workspace layout&lt;/h3&gt;
&lt;p&gt;First, install the packages we need. This assumes you&amp;rsquo;re running a modern OSX (I&amp;rsquo;m using Mojave), with &lt;code&gt;brew&lt;/code&gt; installed:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ brew install qemu
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, I want to lay out our working folder as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ mkdir img
$ mkdir src
$ mkdir live
$ mkdir scripts
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="building-the-base-image"&gt;Building the base image&lt;/h3&gt;
&lt;p&gt;Make a working folder:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ mkdir qbasic
$ cd qbasic
$ mkdir msdos-disks/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Move the MS-DOS installer files into place:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ cp *.img ~/qbasic/msdos-disks/
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make a hard disk image:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ qemu-img create dos.img 200M
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Boot the system (with 8MB of RAM attached) with the first MS-DOS disk:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ qemu-system-i386 -hda dos.img -fda msdos-disks/disk1.img -m 8
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As the installer runs, you&amp;rsquo;ll eventually need to change disks. Switch to the QEMU console with &lt;code&gt;ctrl-alt-2&lt;/code&gt;, then issue the following command to switch images:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;(qemu) change floppy0 msdos-disks/disk2.img
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&amp;hellip; then switch back to the primary output with &lt;code&gt;ctrl-alt-1&lt;/code&gt;. My version of MS-DOS came on three floppy disks, so by the time the install was complete, and I&amp;rsquo;d ejected the final disk, the QEMU console session looked like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;(qemu) change floppy0 msdos-disks/disk2.img
(qemu) change floppy0 msdos-disks/disk2.img
(qemu) eject floppy0
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="mounting-the-disk-image"&gt;Mounting the disk image:&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;$ hdiutil attach -imagekey diskimage-class=CRawDiskImage -mountpoint ./live/ dos.img
$ hdiutil detach live
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>Setting up wee-slack</title><link href="https://www.jonatkinson.co.uk/blog/setting-up-wee-slack-weechat/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/setting-up-wee-slack-weechat/</id><published>2018-12-21T12:24:36Z</published><updated>2018-12-21T12:24:36Z</updated><content type="html">&lt;h3 id="installing-the-prerequisites"&gt;Installing the prerequisites&lt;/h3&gt;
&lt;p&gt;First, we need to install the base &lt;code&gt;weechat&lt;/code&gt; binaries. I like the &lt;code&gt;aspell&lt;/code&gt; plugin to be enabled in Weechat, so I&amp;rsquo;ll install that first:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ brew install aspell
$ aspell dicts # Check the available dictionaries.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, it&amp;rsquo;s time to install &lt;code&gt;weechat&lt;/code&gt; itself. Weechat has just gone through a transition between Python2 and Python3, and at the time of writing, installing from &lt;code&gt;brew&lt;/code&gt; is affected by &lt;a href="https://github.com/Homebrew/homebrew-core/issues/30509"&gt;this bug&lt;/a&gt;, so we&amp;rsquo;re going to explicitly install with Python2 support, and we will install the &lt;code&gt;websocket_client&lt;/code&gt; library in the Python2 tree.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ brew install weechat --with-aspell --with-python@2
$ sudo /usr/local/opt/python@2/bin/pip2 install websocket_client
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="installing-wee-slack"&gt;Installing &lt;code&gt;wee-slack&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;Now it&amp;rsquo;s time to install the &lt;code&gt;wee-slack&lt;/code&gt; plugin. There are more secure ways to do this, but this will work for the purposes of this guide:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ mkdir -p ~/.weechat/python/autoload
$ cd ~/.weechat/python/autoload
$ wget https://raw.githubusercontent.com/wee-slack/wee-slack/master/wee_slack.py
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, launch &lt;code&gt;weechat&lt;/code&gt;, and you&amp;rsquo;ll see the following message:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ERROR: Failed connecting to Slack with token starting with INSERT
VALID KE: invalid_auth
ERROR: Token does not look like a valid Slack token. Ensure it is a valid token and not just a OAuth code.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;To register, in the &lt;code&gt;weechat&lt;/code&gt; console, type:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/slack register
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Follow the prompts onscreen. The redirect after authentication will fail, and this is expected; however, you need to copy the &lt;code&gt;code&lt;/code&gt; parameter from the redirect URL.&lt;/p&gt;
&lt;p&gt;Return to &lt;code&gt;weechat&lt;/code&gt;, and enter the following:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/slack register [your code parameter here]
/python reload slack
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You should now be connected to your Slack instance. You can navigate using common IRC commands such as &lt;code&gt;/join&lt;/code&gt; and &lt;code&gt;/query&lt;/code&gt;. If you&amp;rsquo;re setting up &lt;code&gt;weechat&lt;/code&gt; for the first time, it will feel more comfortable to turn on mouse mode, which you can do like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/set weechat.look.mouse on
/mouse enable
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In general, &lt;code&gt;wee-slack&lt;/code&gt; works well. Certain integrations which post rich snippets to Slack (for example, the Bitbucket plugin) are quite visually noisy in Weechat, so I want to figure out how to filter or rewrite these to be more concise. But that&amp;rsquo;s for another day.&lt;/p&gt;</content></entry><entry><title>Remote Working Roundtable</title><link href="https://www.jonatkinson.co.uk/blog/remote-working-roundtable/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/remote-working-roundtable/</id><published>2018-12-21T12:23:46Z</published><updated>2018-12-21T12:23:46Z</updated><content type="html">&lt;p&gt;I was recently invited to participate in a DaaS/Remote Working roundtable discussion at UKFast. I&amp;rsquo;m not sure I could add anything much on DaaS; it&amp;rsquo;s not a technology which we frequently use, but I did try to offer what I could on remote-working culture and benefits.&lt;/p&gt;
&lt;p&gt;The recording is available (after the registration-wall) &lt;a href="https://event.on24.com/eventRegistration/EventLobbyServlet?target=reg20.jsp&amp;amp;referrer=&amp;amp;eventid=1870918&amp;amp;sessionid=1&amp;amp;key=38D49EED399751F8A3F1B5F7CF025F9D&amp;amp;regTag=&amp;amp;sourcepage=register"&gt;here&lt;/a&gt;.&lt;/p&gt;</content></entry><entry><title>Personal VPN Setup</title><link href="https://www.jonatkinson.co.uk/blog/personal-vpn-setup/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/personal-vpn-setup/</id><published>2018-12-21T12:21:27Z</published><updated>2018-12-21T12:21:27Z</updated><content type="html">&lt;p&gt;I feel the need to start this post with something impactful about industrial-scale data capture and the weaponisation of software exploits, but that would be covering a well-trodden path, and it&amp;rsquo;s unnecessary. You need a personal VPN to help secure your privacy and your personal data.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve run my own VPN on Digital Ocean (DO) for a few years now. This isn&amp;rsquo;t ideal; I&amp;rsquo;d rather run this on hardware which I racked myself, but it&amp;rsquo;s cheap, and DO have datacentres in privacy-friendly European jurisdictions.&lt;/p&gt;
&lt;p&gt;Previously, I have been using the excellent &lt;a href="https://github.com/StreisandEffect/streisand"&gt;&lt;code&gt;streisand&lt;/code&gt;&lt;/a&gt; scripts to maintain this service. &lt;code&gt;streisand&lt;/code&gt; is a very comprehensive, mature package, and I recommend it for people who live in really hostile environments as it provides a lot of connectivity options to bypass VPN blocking and suchlike. However, there is quite a bit of churn in &lt;code&gt;streisand&lt;/code&gt; (probably for good reason; that project faces evolving threats), and each of the last three times I&amp;rsquo;ve run the install scripts, it&amp;rsquo;s recommended different defaults, and different VPN clients. This isn&amp;rsquo;t optimal; I want to install my VPN, and forget about it for a few years at a time, and most importantly I want effortless connectivity from my devices which run OSX and iOS.&lt;/p&gt;
&lt;p&gt;This has led me to a much slimmer, more focused solution: &lt;a href="https://github.com/jawj/IKEv2-setup"&gt;&lt;code&gt;IKEv2-setup&lt;/code&gt;&lt;/a&gt;. The project self-describes as:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A Bash script that takes Ubuntu Server 18.04 LTS &amp;hellip; from clean install to production-ready IKEv2 VPN with strongSwan.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Most importantly, &lt;code&gt;IKEv2-setup&lt;/code&gt; supplies a &lt;code&gt;.mobileconfig&lt;/code&gt; profile for OSX and iOS, with on-demand connectivity; this means that I can be reasonably sure that &lt;em&gt;all&lt;/em&gt; my internet traffic will be routed via the VPN.&lt;/p&gt;
&lt;p&gt;First, set up a new Ubuntu 18.04 server using the hosting provider of your choice. You also need to set up a DNS record to point to this server (for example &lt;code&gt;vpn.example.com&lt;/code&gt;). I&amp;rsquo;ll assume you can login and get a &lt;code&gt;root&lt;/code&gt; shell. Make sure that everything is up-to-date:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ apt-get update &amp;amp;&amp;amp; apt-get upgrade
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Download the &lt;code&gt;IKEv2-setup&lt;/code&gt; script from Github:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ cd /root/
$ curl https://raw.githubusercontent.com/jawj/IKEv2-setup/master/setup.sh &amp;gt; setup.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, read carefully the contents of &lt;code&gt;setup.sh&lt;/code&gt; and understand what each step does. If there are any steps which you don&amp;rsquo;t understand, ask someone with a little more Linux knowledge to review. Installing untrusted software from the internet is dangerous; you need to understand what is about to happen to your server.&lt;/p&gt;
&lt;p&gt;Once you are ready to continue, run the script:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ /bin/bash setup.sh
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You&amp;rsquo;ll be asked for your DNS name, and a VPN username and password, and an SSH username and password (for the passwords, I recommend generating two long, &lt;em&gt;different&lt;/em&gt; passwords with &lt;code&gt;pwgen --numerals 32&lt;/code&gt;). After these questions, the script will proceed to run; save the output if you&amp;rsquo;re particularly interested in the changes which were made.&lt;/p&gt;
&lt;p&gt;Once the process is complete, further configuration instructions will be located in &lt;code&gt;/home/&amp;lt;username&amp;gt;/vpn-instructions.txt&lt;/code&gt;. For OSX and iOS configuration, download the &lt;code&gt;/home/&amp;lt;username&amp;gt;/vpn-ios-or-mac.mobileconfig&lt;/code&gt; file. From OSX, you can just double-click the file in Finder, and the profile will be installed. Optionally you may want to visit &lt;code&gt;Network.prefpane&lt;/code&gt; and configure the VPN icon in your statusbar. You can then AirDrop this same file to your iOS device and configure the same username and password there.&lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s probably sensible to leave logging enabled on your server for a few days to debug any connectivity problems, however once the service is stable, you can disable any logging as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;rm /var/log/syslog &amp;amp;&amp;amp; ln -s /dev/null /var/log/syslog
rm /var/log/auth.log &amp;amp;&amp;amp; ln -s /dev/null /var/log/auth.log
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>Recursively updating S3 bucket permissions</title><link href="https://www.jonatkinson.co.uk/blog/recursively-updating-s3-bucket-permission/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/recursively-updating-s3-bucket-permission/</id><published>2018-12-21T12:20:59Z</published><updated>2018-12-21T12:20:59Z</updated><content type="html">&lt;p&gt;If you want to recursively apply a permission to an S3 bucket (for example, to add the &lt;code&gt;public-read&lt;/code&gt; permission), then you can use the &lt;code&gt;aws&lt;/code&gt; CLI tool to copy from a bucket to itself, and update the metadata as it does so. It&amp;rsquo;s quicker than using the AWS console, anyway.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ aws s3 cp s3://bucketname/optional/path/ s3://bucketname/optional/path/ \
--recursive \
--metadata-directive REPLACE \
--acl public-read \
--cache-control max-age=31536000
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>Listing ElasticBeanstalk applications with the aws CLI tool.</title><link href="https://www.jonatkinson.co.uk/blog/listing-elasticbeanstalk-applications-with-aws-cli/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/listing-elasticbeanstalk-applications-with-aws-cli/</id><published>2018-12-21T12:20:23Z</published><updated>2018-12-21T12:20:23Z</updated><content type="html">&lt;p&gt;You can easily list EB applications in your default AWS account with the following query.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ aws elasticbeanstalk describe-applications --query 'Applications[*].{Name:ApplicationName}' --output=text
&lt;/code&gt;&lt;/pre&gt;</content></entry><entry><title>How To Fire Someone</title><link href="https://www.jonatkinson.co.uk/blog/how-to-fire-someone/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/how-to-fire-someone/</id><published>2018-12-21T12:19:42Z</published><updated>2018-12-21T12:19:42Z</updated><content type="html">&lt;p&gt;Don’t hide the message. Be direct and begin with the reason: &lt;em&gt;&amp;ldquo;I have bad news, you are being let go due to X&amp;rdquo;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Make eye contact as you do this, and wait for the reaction. In this situation there is still an opportunity to build trust in each other, and communicate that each party will act properly. Most people react calmly; more often than not professional instinct takes over.&lt;/p&gt;
&lt;p&gt;Once that is said, acknowledge the difficulty ahead, and put them at ease. Offer genuine help if you are in the position to give it. &lt;em&gt;“I know this isn’t what you want to hear, but I am going to do my absolute best to make this as smooth as possible”&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;All the salary, compensation and notice period calculations should be performed ahead of time, and you need to immediately summarise these. On receiving this news most people feel immediately threatened and need the reassurance they are going to be treated fairly. “You will be paid in full for your notice period of £X, and your holiday allowance which is £Y”. Stick to the dates outlined in the calculations. If there are any disputes at this point over money owed or compensation due, defuse them and accept there may have been a mistake (very rarely does this actually happen). Just say “we will look into any other money owed and make it 100% right with you before you leave”. This conversation is not about money, but sometimes it’s easier to vent anger or shock by pretending that it is.&lt;/p&gt;
&lt;p&gt;In general, most people very quickly go into planning mode. I usually offer unlimited time away to attend interviews, and offer use of any company facilities and equipment to help them search. Equally, some people don’t want to attend the office at all.&lt;/p&gt;
&lt;p&gt;I usually finish by asking about messaging. I try to give the employee control over the story to their colleagues. Some may be embarrassed, especially if this action is being taken due to performance or conduct reasons. Give the person time to think about this. There is usually zero impact to the company for someone to say “I decided to look for something new” rather than “I was fired”, so unless you have very good reason, you don&amp;rsquo;t need to control the narrative.&lt;/p&gt;
&lt;p&gt;Finally, offer a further meeting very quickly, such as lunch the next day. This offers them a chance to speak again about their feelings once they have processed them.&lt;/p&gt;
&lt;p&gt;Remember at all times that we need to be human, decent and honest to each other and sometimes the best thing you can do is quietly listen.&lt;/p&gt;</content></entry><entry><title>Django coverage reports without unit tests</title><link href="https://www.jonatkinson.co.uk/blog/django-coverage-reports-without-unit-tests/" rel="alternate" type="text/html"/><id>https://www.jonatkinson.co.uk/blog/django-coverage-reports-without-unit-tests/</id><published>2018-12-21T00:00:00Z</published><updated>2018-12-21T00:00:00Z</updated><content type="html">&lt;p&gt;I was recently working on a Django project which had a lot of development effort spent over a wide range of features which never made it to launch. We wanted to analyse the codebase, in part to identify where we had over-delivered on features which didn&amp;rsquo;t make the cut, but also to help the development team find and remove the now &amp;lsquo;dead&amp;rsquo; code.&lt;/p&gt;
&lt;p&gt;The first tool which sprang to mind was &lt;code&gt;coverage.py&lt;/code&gt;, which can be used to evaluate unit test coverage. After a little digging, I found you can use the same coverage engine to detect which code is exercised while Django&amp;rsquo;s &lt;code&gt;runserver&lt;/code&gt; is active. The following examples assume you have a working Django and Python environment, and you&amp;rsquo;re installing into a virtualenv:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ pip install coverage
$ coverage run manage.py runserver --noreload
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will spawn a single-threaded server, which coverage can monitor. You can now exercise your code at will, either by manually testing the site or running some browser automation with TestCafe or Selenium. The quality of your coverage results will be influenced by how thorough you are at this stage.&lt;/p&gt;
&lt;p&gt;Once you&amp;rsquo;re finished, &lt;code&gt;^C&lt;/code&gt; the coverage process, and it will create a &lt;code&gt;.coverage&lt;/code&gt; file in the working directory. You can now generate and view a more useful HTML report with &lt;code&gt;coverage html &amp;amp;&amp;amp; open htmlcov/index.html&lt;/code&gt;. You&amp;rsquo;ll notice this is very noisy; it includes all your dependencies in &lt;code&gt;env/&lt;/code&gt;, and also various files which Django evaluates in full on startup. These skew your results, and make the document more difficult to read, so it&amp;rsquo;s useful to exclude some results using &lt;code&gt;--omit&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Anything in the virtualenv folder (in my case &lt;code&gt;env/&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Any &lt;code&gt;__init__.py&lt;/code&gt; files.&lt;/li&gt;
&lt;li&gt;All Django migrations.&lt;/li&gt;
&lt;li&gt;All Django &lt;code&gt;admin.py&lt;/code&gt; files.&lt;/li&gt;
&lt;li&gt;All instances of &lt;code&gt;urls.py&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
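&lt;p&gt;Rather than passing &lt;code&gt;--omit&lt;/code&gt; on every run, the same patterns can live in a &lt;code&gt;.coveragerc&lt;/code&gt; file; the paths here mirror my project&amp;rsquo;s &lt;code&gt;apps/&lt;/code&gt; layout and will need adjusting for yours:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;[report]
omit =
    *env*
    apps/*/admin.py
    apps/*/__init__.py
    apps/*/migrations/*.py
    apps/*/urls.py
&lt;/code&gt;&lt;/pre&gt;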
&lt;p&gt;The final command was as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ coverage html --omit=*env*,apps/*/admin.py,apps/*/__init__.py,apps/*/migrations/*.py,apps/*/urls.py
&lt;/code&gt;&lt;/pre&gt;</content></entry></feed>