The anti-AI position has an information problem
Some scenarios:
A game ships. Someone digs through the credits, finds an AI tool listed, and the review-bombing begins. A few weeks later a different game ships, the same tool sat in the pipeline, but nobody notices. That game does fine. The difference between the two outcomes is disclosure, not use.
The same pattern shows up in film. A studio quietly uses a generative tool to clean up a background plate and nobody objects, because nobody finds out. A different studio admits to using one for concept art and gets dragged for a week.
In publishing, a novelist gets accused of “AI-assisted” writing because their prose has a particular rhythm, while every traditionally published author this year has been edited in Word, which has had AI-assisted suggestions on by default for two years.
In professional services, half the legal briefs filed this quarter went through a tool with a generative summarisation feature, and nobody seems to be boycotting their solicitor.
This is the shape of the problem with maximalist anti-AI consumer positions. The stated rule is “I won’t consume AI-touched work.” The operational rule that actually governs behaviour is “I won’t consume work where AI use was disclosed or discovered.” Those are very different rules, and the gap between them is where the position falls apart.
The tools have been here for a long time.
Generative AI in the headline sense (Sora, Veo, Claude, Gemini) is one small slice of what “AI in production” is in 2026. The rest is woven into the tools every studio, publisher, and firm uses every day:
- Upscalers, denoisers, and frame interpolation in film and TV post-production
- Voice cleanup and dialogue restoration in audio
- Tweening, motion matching, and lip-sync in animation
- Concept exploration and reference generation in art departments
- Code completion in every major game engine
- Localisation and subtitle passes
- Content-aware fill, sky replacement, and rotoscoping in editing suites
- Grammar and style suggestions in Word, Google Docs, Grammarly
- Drafting and summarisation tools in legal, finance, and consultancy
- Marketing copy and product descriptions across every retailer
Some of this is disclosed. Most of it isn’t. None of it is going to be unwoven, because the features shipped inside Adobe, Maya, Pro Tools, Word, the engines, and the major DCC suites years ago. Opting out of “any pipeline that touched AI” means opting out of contemporary consumption entirely, including from artists who would describe themselves as anti-AI.
Here is the structural problem. A boycott needs a legible signal. When the signal is “did this product use AI,” and the answer is almost always yes in some form, enforcement collapses onto whatever happens to be visible. Visibility depends on disclosure. And disclosure is voluntary.
Studios and publishers that are honest about their tooling get review-bombed; pearls are clutched about the end of creativity. Quiet ones sail through. The market has now taught everyone the lesson: do not volunteer information about your production pipeline. The boycott has selected against the very behaviour it claims to want. It rewards concealment and punishes transparency, which is the opposite of how a functioning consumer signal should work.
You can watch this happen in real time when a beloved product gets caught. The response shifts. “That kind of AI is fine.” “It’s only the bad kind that counts.” “DLSS isn’t really AI.” “Spell-check has been around for years.” “Photoshop’s content-aware fill is just a feature.” The category of “AI-tainted” gets quietly redrawn to protect prior consumption choices. The line moves to wherever it needs to be.
This is a No True Scotsman fallacy in plain sight. “No real anti-AI consumer would object to motion matching.” “That’s not the kind of AI we mean.” Each redrawing of the category protects something the speaker wanted to keep. The move is structural, not personal, and the position requires it. If you commit to an absolute rule about a category that cannot be reliably observed, the only way to maintain consistency is to keep redefining the category.
People sometimes respond by arguing for stricter disclosure norms or better detection. Neither gets you out of the problem:
First, detection asymmetry. You cannot reliably tell from the artefact whether AI was involved. Output that looks generated often is not. Output that looks fully human-made often is not either. The artefact does not carry the signal, so enforcement has to depend on disclosure or leaks, which fail in opposite directions.
Second, tooling diffusion. The features are inside the software. They are not optional plugins you can audit. When Photoshop’s “Generative Fill” sits next to “Content-Aware Fill” sits next to the healing brush, the line between “AI” and “not AI” is a UI decision made by Adobe’s product team. Word’s editor pane, Grammarly, and the suggestion engine in Final Draft all sit in the same category. There is no clean place to draw the line from the outside.
Third, definitional drift. Yesterday’s AI is today’s just-a-feature. Spell-check was AI. Autocomplete was AI. Auto-tune was AI. Motion matching was AI. Each was controversial when it arrived and invisible now. The category is a moving target, and the people most invested in policing it are the least likely to notice when something they already accept gets reclassified.
There is a coherent anti-AI stance available. It just has to be narrower than “no AI.” Something like:
> I object to generative AI replacing creditable human creative labour, and I will boycott when I can identify it.
That is defensible. It names the specific harm (displacement of paid creative work), it acknowledges the limits of detection (“when I can identify it”), and it does not require you to pretend that the upscaler in your favourite movie’s HD remaster is somehow morally distinct from the 2026 image model that did not make the cut.
The maximalist version (no AI anywhere in the pipeline) collapses into one of two things: purity theatre, where the rule is performed but never applied, because applying it consistently would mean consuming almost nothing; or selective enforcement, where the rule is applied to whichever studios got caught and ignored for whichever ones you wanted to keep enjoying. Selective enforcement is where the goalpost-shifting kicks in, and it is why the same person can rage about a Sora-generated cutscene and praise a Pixar film without noticing they are holding two incompatible positions.
The interesting question is not whether the maximalist position is inconsistent. It is, and so is most of how anyone consumes anything. The interesting question is what a coherent disclosure regime would actually look like.
If you want to know what tools touched a piece of work, you need a labelling standard. Studios, publishers, and firms would have to declare specific uses (training data, generation, editing, in-betweening, upscaling, cleanup, drafting, summarisation) at a granularity that means something. Consumers would have to read those labels and apply consistent rules to them. Detection would not go away as a problem, but the question “what am I actually objecting to” would have an answer.
I suspect the people loudest about wanting this regime would not, in practice, accept its outputs. A label that says “AI-assisted upscaling” on a beloved remaster, or “AI-assisted copy-edit” on a novel, would force a choice that the current vibes-based system lets people avoid. The current system is not a failure of disclosure. It is a preference for ambiguity, dressed up as a principled stance.
I am not arguing that AI in media is “fine”, or that concerns about creative displacement are misplaced. They are not. The labour question is meaningful, the credit question is meaningful, and I write this as someone who has worked all my life in a labour category that is seriously under threat: software engineering.
I am arguing that the absolutist consumer position does not address any of those questions. It rewards creators for hiding their tooling, punishes the ones being honest, and requires a steady supply of redefinition to stay internally consistent. It is a vibes-based loyalty test wearing the costume of a principled stance.
The narrower version of this stance is harder to perform but easier to defend.
It is the only one that survives contact with how media actually gets made now.