AI Disclosure

In August 2023, author and publishing consultant Jane Friedman discovered something unsettling: five books were for sale on Amazon under her name, and she hadn’t written any of them. The titles were AI-generated, cobbled together in a style vaguely resembling her voice, and they were listed right alongside her real work. When she reported them, Amazon’s response was to ask for her trademark registration number. She didn’t have one (most authors don’t), so they closed the case.

It took a social media uproar and intervention from the Authors Guild to get the books removed. One month later, Amazon introduced its AI content disclosure policy.

That chain of events captures why AI disclosure matters. Without it, there’s no mechanism to distinguish between what a person wrote and what a machine generated. And for authors, that distinction is everything.

What AI Disclosure Means

AI disclosure is the practice of transparently communicating when artificial intelligence has been used to create, assist with, or substantially modify a piece of content. In publishing, it usually means telling a platform, publisher, or reader about the role AI played in producing a book.

The concept is straightforward, but the details matter. Most policies draw a line between two categories: AI-generated content, where the AI created the actual text, images, or translations that end up in front of readers, and AI-assisted content, where a human author used AI tools for brainstorming, editing, grammar checks, or research but remained the creative force behind the work. That distinction determines what you need to disclose and where.

If you use ChatGPT to brainstorm character names for your novel, that’s AI-assisted work. If you paste a prompt into Claude and publish the resulting paragraphs with light editing, that’s AI-generated content. The line between the two can feel blurry, and different platforms draw it in slightly different places, but the core question is the same: did a human or a machine do the creative work that the reader ultimately sees?

How We Got Here

For most of publishing history, nobody needed a word for this. The assumption was simple: a book had an author, and the author wrote it. Ghostwriting complicated that picture somewhat, but even ghostwritten books were human-produced.

The first signs of change came from academic publishing. Journals like Nature, Science, and The Lancet had long required authors to disclose when tools or outside contributions influenced their research. When AI writing tools started appearing in academic settings, these journals adapted their existing disclosure frameworks to cover them. Science took the hardest line, banning AI-generated text entirely and treating violations as scientific misconduct. Most others settled on requiring disclosure in a paper’s methods or acknowledgments section.

But it was the explosion of consumer AI tools in late 2022 and early 2023 that forced the publishing industry’s hand. After ChatGPT launched in November 2022, the barriers to producing a full-length book dropped from months of effort to hours. Amazon’s Kindle store began filling with AI-generated titles, many of them low-quality, some of them (as Jane Friedman discovered) impersonating real authors. Draft2Digital reported publishing volume trending roughly 50 percent higher than normal, and distributors scrambled to distinguish legitimate authors from content mills.

Amazon KDP introduced its AI disclosure policy in September 2023. IngramSpark updated its terms of service around the same time, taking a harder stance by restricting books created entirely through AI or automated processes. The Authors Guild called Amazon’s move “a welcome first step,” while acknowledging it didn’t solve everything.

Then, in October 2024, the Authors Guild launched something that would have been unimaginable a few years earlier: the “Human Authored” certification mark. Authors could apply for a verified badge to place on their book covers, spines, or promotional materials, certifying that a human wrote the text. Each certification is numbered and recorded in a public database for reader verification. For the first time in the history of publishing, authors needed to prove their work was human-made rather than machine-made. That such a mark could be necessary, or commercially valuable, says something about where the industry has arrived.

On the regulatory side, the European Union’s AI Act (Article 50) introduced legally binding transparency obligations requiring AI system providers to mark outputs in machine-readable formats so they can be detected as AI-generated. Those provisions are expected to take effect by 2026 or 2027. In the United States, Representative Adam Schiff introduced the Generative AI Copyright Disclosure Act in April 2024, which would require AI companies to report what copyrighted works were used in training their models. The Authors Guild, the Directors Guild, and the Writers Guild all backed it.

What Disclosure Looks Like in Practice

If you publish through Amazon KDP, you’ve likely already encountered the disclosure process. During title setup, KDP asks a straightforward yes-or-no question: does this book contain AI-generated content? If you select “Yes,” follow-up questions ask whether the AI-generated material includes text, images, or translations.

What requires disclosure:

  • Chapters drafted by an AI model, even if you edited them afterward
  • Cover art or interior illustrations created by AI image generators
  • Machine-translated editions
  • Activity books, puzzle books, or prompt collections generated by AI

What doesn’t require disclosure:

  • Using AI for grammar and clarity checks
  • Brainstorming ideas or outlines
  • Proofreading or line editing assistance
  • Research and terminology lookups

Amazon makes clear that disclosure doesn’t shift responsibility. You’re still accountable for everything in your book, including content an AI helped produce. Getting the disclosure wrong can slow your release, trigger a review, or in serious cases, result in book removal or account suspension.

IngramSpark’s approach is stricter. Their updated terms of service restrict books created entirely through AI or automated processes from distribution, though enforcement specifics remain somewhat unclear.

Why This Matters for Your Writing Life

If you’re an indie author publishing through KDP or any other platform, disclosure isn’t optional. Failing to disclose when required can put your entire catalog at risk, and losing your publishing account means losing your income stream in one stroke. That alone makes it worth understanding where the line falls in your own workflow.

But the practical implications run deeper than checkbox compliance.

Know where your workflow lands. The line between “assisted” and “generated” runs through how you actually use AI tools day-to-day. Running your manuscript through ProWritingAid or Grammarly (which use AI under the hood) is assistance that doesn’t require disclosure. Asking NovelCrafter or Sudowrite to generate prose that ends up in your final draft moves into territory that likely does. If you’re using AI to write scenes, dialogue, or descriptions that readers will encounter more or less as the AI produced them, that’s the kind of thing platforms want to know about.

Disclosure can be a competitive advantage. As AI-generated content floods publishing platforms, transparency becomes a way to stand out. Readers who care about authorship (and many do, as the Friedman incident proved) will gravitate toward authors who are upfront about their process. The Authors Guild’s “Human Authored” certification is one formal option, but even a brief note in your book’s front matter about how you did or didn’t use AI tools can build trust with your audience.

The legal landscape is still shifting. AI copyright questions remain unresolved, and the rules around disclosure will almost certainly get more specific in the coming years. The EU AI Act will introduce enforceable requirements, and U.S. legislation is working its way through Congress. Authors who build good disclosure habits now won’t have to scramble when the rules tighten.

The simplest way to think about it: if you’d feel uncomfortable telling a reader exactly how AI was involved in creating your book, that’s a sign you need to either adjust your process or be more transparent about it. AI disclosure isn’t about shaming anyone for using technology. It’s about making sure readers know what they’re getting, and making sure the author’s name on the cover actually means something.