A Publisher Pulled a Book This Week. The Reason Should Worry You.

By Morgan Paige · Published March 21, 2026

A horror novel called Shy Girl by Mia Ballard got picked up by Hachette after blowing up on BookTok. Then people started saying it read like ChatGPT. A Reddit thread. A two-and-a-half-hour YouTube video (two and a half hours!). A New York Times investigation. And yesterday, Hachette pulled the book from the UK market and canceled the US release.

Ballard says she didn’t use AI. She says an editor she hired for the original self-published version did, without her knowledge. She says her mental health is destroyed and her name is ruined. She’s pursuing legal action.

I don’t know if Mia Ballard used AI to write her book. Neither do you. That’s not even the part of this story I can’t stop thinking about.

The part that matters

What I keep coming back to is the mechanism. A major publisher pulled a book, with real sales (nearly 2,000 copies in the UK), based on what amounts to vibes and public pressure.

There is no AI detection tool that can reliably determine whether a finished, edited novel was written with AI assistance. The “evidence” here was Goodreads reviews calling the prose repetitive, a Reddit post from someone claiming to be an editor, and a very long YouTube video. The New York Times said AI appeared to have been used in “significant parts” of the work, but the methodology for that determination isn’t a settled science. It’s pattern recognition by humans who think they know what AI sounds like.

One Goodreads reviewer put it this way: “If it isn’t AI, she’s a terrible writer. Her writing is truly indistinguishable from an LLM.”

Read that sentence again. The reviewer is acknowledging, in the same breath, that bad writing and AI writing are indistinguishable. And then concluding it must be AI. Wild.

What gets called “AI writing”

So what actually gets flagged as AI writing? Repetitive phrasing, flat emotional register, overuse of certain sentence structures, generic descriptions, awkward formatting. You know what else produces all of those things? A first-time novelist who self-published without a strong developmental edit.

I’ve read self-published books that predated ChatGPT by a decade that would absolutely get flagged by today’s AI-suspicion crowd. Stiff dialogue, recycled descriptions, prose that reads like it was written at 3 AM (because it was), and a general vibe of “I finished NaNoWriMo and hit publish the next day.” That’s not AI. That’s someone learning to write in public, which is what self-publishing has always been.

This doesn’t mean Ballard didn’t use AI. Maybe she did. But the standard of evidence here was “this writing isn’t very good, and AI writing also isn’t very good, therefore AI.” That’s a witch trial with better production values.

The precedent problem

Hachette’s statement was carefully vague, saying the company “remains committed to protecting original creative expression and storytelling.” Which is a nice thing to say and means absolutely nothing in terms of policy. What’s the threshold? What counts as AI use? If an author runs their manuscript through Claude for line-edit suggestions and accepts some of them, is that AI-written? If they use Sudowrite to brainstorm plot alternatives and then write the scenes themselves, does that count? If they dictate into Whisper and clean up the transcription with GPT, are they disqualified?

Nobody at Hachette has answered these questions because answering them would require drawing lines, and drawing lines would reveal how arbitrary those lines are.

What they’ve established instead is something worse. A system where public accusation is sufficient. Where a loud enough Reddit thread and a viral YouTube video can end a book deal. Where “it sounds like AI” is treated as evidence even though we can’t agree on what AI sounds like.

If you’re an indie author, you should care about this even if you’ve never touched an AI tool in your life. Because the accusation doesn’t require proof. It just requires enough people to believe it.

Meanwhile, in the other timeline

While one corner of the publishing world was busy pulling books over AI suspicions, Future Fiction Academy published a guide to something called “kitbashing.” It’s a technique for deliberately generating multiple AI drafts of a scene, each with a different focus (structure, emotion, voice, risk), and then manually assembling the best elements into a final version.
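For readers who think in code, the workflow is simple to sketch. Here `generate_draft` is a hypothetical stand-in for whatever model call a writer actually uses; the point is that the tool produces variations and the human does the assembling:

```python
# Sketch of the kitbashing workflow: generate several drafts of the same
# scene, each steered toward a different focus, then leave selection and
# assembly to the writer. `generate_draft` is a hypothetical placeholder,
# not a real API.

FOCUSES = ["structure", "emotion", "voice", "risk"]

def generate_draft(scene_brief: str, focus: str) -> str:
    # Placeholder: a real version would prompt a model to draft the
    # scene while emphasizing the given focus.
    return f"[{focus}-focused draft of: {scene_brief}]"

def kitbash(scene_brief: str) -> dict:
    # One draft per focus. The output is raw material, not a final scene;
    # the writer picks the best elements from each and combines them.
    return {focus: generate_draft(scene_brief, focus) for focus in FOCUSES}

drafts = kitbash("Mara finds the locked door")
for focus, text in drafts.items():
    print(focus, "->", text)
```

The structure mirrors the model-kit metaphor: four boxes of parts, one builder.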

The term comes from physical model-making. Prop designers in the ’70s and ’80s would buy multiple commercial model kits and combine parts from each to build something new. The Millennium Falcon was kitbashed. (How cool is that?)

The technique is interesting on its own merits. But what struck me is how openly and specifically it’s being taught. There’s no hand-wringing about whether this counts as “real” writing. The framework assumes that the human is the editor, the decision-maker, the one with taste. The AI generates variations. The writer evaluates and selects.

This is what thoughtful AI-assisted writing actually looks like, and it’s happening right now, in plain sight. Authors are developing craft around these tools the same way photographers developed craft around Photoshop and musicians developed craft around digital audio workstations. Did you use a tool? Who cares. Is the final work any good, and is it yours? That’s the only question that ever mattered.

The uncomfortable middle

I think the Shy Girl situation is genuinely complicated, and I’m not interested in pretending otherwise. If Ballard’s account is true, that an editor she hired introduced AI-generated text without her knowledge, that’s a real problem. But it’s a contract and liability problem between her and that editor. Not evidence that she’s a fraud.

If she did use AI extensively and lied about it, that’s a different kind of problem. But even then, the question of what Hachette’s obligation was, and what standard of evidence should be required before a publisher torpedoes an author’s career, still matters.

Nobody wants to say this out loud, but within five years most commercially published books will have been touched by AI at some point in their creation. Brainstorming, outlining, drafting, editing, proofreading, cover design, marketing copy. The line between “AI-assisted” and “not AI-assisted” is already blurring, and it’s going to keep blurring until it’s invisible.

Publishers who want to ban AI use entirely are going to find themselves in the same position as record labels who tried to ban sampling, or studios that tried to ban digital effects. You can hold the line for a while. But the line is moving under your feet.

What I’d actually like to see

I’d like to see publishers do the hard work of defining what they mean. Not “we’re committed to original creative expression” (everyone’s committed to that; it’s like being committed to good weather). Actual policies. What tools are prohibited? What level of AI involvement triggers disclosure? What’s the process when someone is accused? Is there a right of response before your book gets pulled?

I’d like to see the conversation shift from “was AI used” to “was the reader deceived.” Because a reader who buys a book expecting a human-written novel and gets a ChatGPT dump has a legitimate complaint. But a reader who buys a book, enjoys it, and then learns the author used AI for brainstorming? That reader hasn’t been harmed. They’ve been told to feel harmed, which is a different thing.

And I’d like to see authors, especially indie authors, stop treating AI use as something to be ashamed of. Not because every use of AI is good (some of it is lazy, some of it produces garbage). But the secrecy and shame around it is creating exactly the environment where accusations can destroy careers without evidence.

The authors who are openly developing techniques like kitbashing, who are treating AI as a tool to be mastered rather than a secret to be kept… they’re building something more durable than the authors who are either hiding their AI use or pretending the tools don’t exist.

The gap

The Vergecast ran a segment this week about why people hate AI. There’s a massive gap between how companies talk about AI and how regular people feel about it. Study after study shows most people are worried about AI’s effects and don’t think the benefits outweigh the downsides.

The Shy Girl situation is a perfect microcosm of this gap. The tools are getting better. Authors are finding genuinely useful ways to work with them. And simultaneously, the public conversation is stuck on gotcha accusations and purity tests.

The tools are useful and the backlash is real. I’m not going to spend my energy trying to convince the loudest voices that AI is fine. I’d rather build work good enough that the question of how it was made becomes less interesting than what it is.
