Art Schools Are Eating AI (Literally) and I Have Some Thoughts

By Morgan Paige · Published April 1, 2026

A student at the University of Alaska Fairbanks physically ate another student’s AI-generated art piece as a form of protest. Ate it. With their mouth. And honestly? That’s the most committed anti-AI take I’ve ever seen. Points for creativity.

The Art School Wars

The Verge published a deep dive into how art schools are handling AI, and the picture isn’t pretty. CalArts students are plastering anti-AI flyers around campus. A 2023 study from Ringling College found 70 percent of students felt negative toward AI. Meanwhile, the institutions themselves are pushing full speed ahead, partnering with Adobe and Google, launching AI-focused centers, and will.i.am is teaching a class at Arizona State about building your own “agentic AI self.” (I have questions about that last one, but I’ll let it go.)

The tension makes sense. You’re paying art school tuition to learn a craft, and someone keeps telling you the craft is about to be automated. That’s terrifying. I get it.

But the schools aren’t wrong either.

The ones doing it well aren’t replacing painting with prompting. They’re teaching students to understand the tools, their limitations, their ethical problems, their copyright messes. Ry Fryar at York College puts it well when he talks about using AI for ideation and planning, not final output. The creativity is still the point. The AI is just another thing in the toolkit.

What bugs me is the framing from both extremes. The AI evangelists screaming “designers are dead!” every time a new model drops are exhausting and wrong. But the students who think refusing to learn about the technology will somehow protect them… that’s not a strategy. That’s just fear wearing a protest sign.

Anthropic’s Scary Graph (Built on Old Data)

Speaking of fear, you might have seen that Anthropic chart floating around showing AI could theoretically handle 80+ percent of job tasks across basically every industry. Arts & Media. Legal. Management. The whole thing looks like a doomsday forecast for human employment.

Ars Technica did the digging everyone should have done before sharing it.

That “theoretical capability” number isn’t based on Anthropic testing their current models. It’s pulled from an August 2023 paper co-authored by OpenAI researchers. 2023. That’s before most people had even figured out how to get ChatGPT to stop hallucinating their grocery lists. The study was speculative then, and using it as a foundation for bold claims in 2026 is… a choice.

“Theoretical capability” is doing a LOT of heavy lifting in that phrase. My theoretical capability includes running a marathon. My actual capability includes running to the fridge during commercial breaks. These are different things.

For authors specifically, here’s what matters. Yes, AI can generate text. Yes, it can draft outlines and brainstorm plot points and write serviceable marketing copy. But “AI can perform 80% of writing tasks” doesn’t mean “AI can write 80% of your book.” It can’t do the “you” part, the part that makes readers fall in love with your characters at 2 AM.

The Interface Is the Bottleneck

Ethan Mollick wrote a really sharp piece about something I’ve been feeling but couldn’t articulate. AI’s biggest problem right now isn’t intelligence. It’s the interface.

A chatbot is a terrible way to get complex work done.

He cites research showing that financial professionals using GPT-4o actually experienced increased cognitive load, not decreased. The AI dumps five paragraphs when you need one sentence. It offers three new tangents when you’re trying to stay focused. The interface itself eats the productivity gains.

This is why tools built for specific workflows matter so much more than raw model capability. Mollick points to Claude’s new Dispatch feature (message your AI from your phone while it works on your desktop), Google’s NotebookLM for research, Stitch for design. Each one takes the same underlying AI and makes it actually usable by removing the chatbot friction.

For authors, think about the difference between asking ChatGPT to “help me with my book” (chaos) versus using a purpose-built tool like Sudowrite that understands story structure, voice, pacing. Same AI underneath. Wildly different experience.

Mollick’s prediction is that the next wave of “AI breakthroughs” won’t be smarter models. They’ll be better interfaces wrapping the intelligence we already have. I think he’s right. And I think authors who figure out the right interface for their workflow are going to have a massive advantage over authors still wrestling with raw chatbots.

OpenAI’s $122 Billion Flex

OpenAI raised $122 billion at an $852 billion valuation. They’re generating $2 billion in revenue per month. They want to build an “AI superapp.”

That’s… a lot of money.

The press release reads like it was written by a finance bro who discovered adjectives. “Reinforcing flywheel.” “Compounding effect.” “Operating leverage.” They announced GPT-5.4, which I’m sure is impressive, though I’ve lost track of whether we’re supposed to be excited or terrified by version numbers at this point.

What actually matters for us is buried in the middle. They’re processing 15 billion tokens per minute through their APIs. Codex has 2 million weekly users. These aren’t research toys anymore. This is infrastructure.

The money means one practical thing. The tools aren’t going away. They’re going to get cheaper, faster, more accessible. Whether that’s good or bad depends entirely on what you do with them.
