The Classroom Didn't Break Because of AI, It Was Already Cracking

By Morgan Paige | Published April 13, 2026

There’s a piece making the rounds from Ars Technica where a part-time college Earth science instructor basically pours his heart out about what it’s like teaching asynchronous online courses in the ChatGPT era. It’s brutal. He talks about being forced to moonlight as a “detective and prosecutor,” trying to figure out which students are doing the work and which ones are turning in what he calls a “work-shaped simulacrum.” He mentions a College Board survey where 84% of high school students said they’ve used generative AI for schoolwork.

Eighty-four percent. That number is wild, but also… is anyone surprised?

The Cheating Isn’t New, The Scale Is

Students have always cheated. I'm not being cynical; that's just reality. Wikipedia. SparkNotes. Paying someone on Craigslist to write your essay. (I've heard stories…) The difference now is that the effort required to fake it has dropped to basically zero. You don't even need to find a shady essay mill anymore. You just paste the prompt and hit enter.

So yeah, I get why teachers are losing their minds. The tools that used to catch cheaters, the plagiarism detectors, the “this doesn’t sound like your other work” gut feelings, they’re way less reliable now. And the institutional response is always six steps behind, because institutions move at the speed of committee meetings.

But Can We Talk About the Actual Problem?

Okay so I’m about to be the annoying person in the room. The article describes asynchronous online courses, AKA recorded videos. No live interaction. Students who, in his own words, just “fall off.” And I’m reading this thinking… the AI didn’t create the motivation problem. It gave unmotivated students a more efficient shortcut.

Which sucks! I’m not saying it doesn’t suck. But the framing of “AI ruined teaching” kind of skips over the part where asynchronous online education was already struggling with engagement and completion rates long before ChatGPT showed up.

So Why Should Authors Care?

Okay I’m not going to tell you why you should care. But I will say this connects to something we talk about a lot around here.

The fear that AI output is indistinguishable from human output? That’s the thing keeping this teacher up at night, and it’s the same fear that drives a lot of the anti-AI panic in publishing. If you can’t tell the difference, what’s the point of doing the work?

And I think the answer is sort of the same in both cases. The point was never the output. A student essay about igneous rocks isn’t valuable because of the words on the page. It’s valuable (theoretically) because the process of writing it forces you to think about igneous rocks. The essay is evidence of learning, not the learning itself.

Same with fiction. The manuscript isn’t the magic part. The thinking is. The choices you make. The weird connections your brain draws between things that shouldn’t go together but somehow do. AI can help you get the words down faster, but it can’t replace the part where you decide what the story is about and why it matters.

The “256 Shades of Gray” Thing

The teacher mentions being forced to “adjudicate 256 shades of gray” instead of the old binary of cheating-or-not. And I think that’s actually the most interesting part of the whole piece.

Because yeah. The line between "used AI to cheat" and "used AI as a tool" is genuinely blurry. If a student has ChatGPT explain a concept and then writes about it in their own words, is that cheating? What about one who generates a draft and then rewrites it? What about one who just submits the raw output?

We’re sorting through the exact same questions in publishing. Where’s the line between using AI as a brainstorming partner and having AI write your book? I use AI constantly, and I think most authors who are being honest with themselves do too, even if it’s just for research or outlining or getting unstuck on a scene. The line exists somewhere, but pretending it’s clean and obvious is kind of silly.

What Nobody Wants to Hear

Education is going to have to change. Not “maybe someday” change. Like, now change. The model where you assign essays and grade them as proof of understanding was already shaky, and it’s not going to hold up against tools that can produce passable (sometimes good) writing on demand.

That probably means more in-person assessment, more project-based learning, more oral exams, more showing-your-work in real time. It means the asynchronous online model, which was already struggling, needs a serious rethink.

And for authors? It’s simpler than all that. The people panicking about AI making human writing “worthless” are worried about the wrong thing. Nobody was ever buying your books for the arrangement of words. They were buying your perspective, your taste, your weird brain. AI can mimic the words. It can’t mimic you.

(Unless you’re super boring, in which case… maybe start worrying? I’m kidding.) (Sort of.)
