I love a good scary headline about AI turning our brains to mush. Really gets the blood going with that first cup of coffee.
Researchers at the University of Pennsylvania published a paper called “Thinking, Fast, Slow, and Artificial” that introduces the concept of “cognitive surrender,” which they define as the “uncritical abdication of reasoning itself” when using AI tools. They’re building on Daniel Kahneman’s classic System 1 (fast, intuitive) and System 2 (slow, analytical) framework from behavioral psychology, arguing that AI has created a whole new third category of thinking. One where the machine does it for you and you just… nod along.
And yeah, some people absolutely do that.
The Research Isn’t Wrong
The paper makes a real observation. When an AI delivers an answer that sounds fluent and confident, a lot of people just accept it. No verification. No second thought. The researchers found that time pressure and external incentives make this worse, which tracks perfectly with how most of us actually work. Deadline breathing down your neck? The AI said something plausible? Good enough, ship it.
I’m not going to pretend this isn’t a problem. If you’re copy-pasting AI output into your manuscript without reading it critically, without asking yourself if this is actually what your character would say, without checking if the plot logic holds up… you’re going to produce garbage. Smooth, confident, beautifully punctuated garbage.
But I want to talk about the framing.
We’ve Always Done This
The paper positions cognitive surrender as something new. Something AI created. And I think that’s where it gets a little intellectually dishonest (or at least incomplete).
People have been surrendering their critical thinking to authority figures forever. Your doctor says take this pill, you take it. Your editor says cut chapter three, most debut authors cut chapter three. Your favorite writing craft book says “show don’t tell” and suddenly an entire generation of writers is terrified of the word “felt.” We have always been disturbingly willing to let confident-sounding sources do our thinking for us. AI just made the confident-sounding source available 24/7 for free.
That doesn’t make the behavior okay. It makes the framing misleading.
The Author-Specific Version of This
For writers, cognitive surrender looks like letting AI make your creative decisions. Not using it to brainstorm or draft or organize your thoughts, but actually letting it decide what your story is about. What your character wants. How a scene should feel.
That’s bad. I’ll say it plainly.
But it’s also not what most authors I know are doing. The writers in my circles who use AI are some of the most intentional, critical thinkers I’ve encountered. They have to be. Getting good output from an AI requires you to know what good looks like. You need taste. You need vision. You need to read what the AI hands you and think “no, that’s not it” forty times before you land on something that clicks. That’s not cognitive surrender. That’s collaboration with a very fast, very dumb partner who occasionally says something brilliant.
(Mochi does the same thing when she knocks something off my desk and it accidentally lands in a more aesthetically pleasing spot. Collaboration.)
The Real Split
The research identifies two types of AI users: those who treat it as “a powerful but sometimes faulty service that needs careful human oversight” and those who see it as “an all-knowing machine.” I don’t think this split is about AI at all. I think it’s about personality. Some people question everything. Some people don’t. AI didn’t create the second group. It just gave them a new thing to not question.
And here’s what bugs me about how this research will be used. It’ll get weaponized. The anti-AI crowd will wave it around as proof that AI makes you stupid, when what it actually shows is that uncritical use of any authoritative source makes you stupid. Google made information free and people still believe nonsense they read on the first result. Wikipedia exists and people still cite random blog posts. The tool isn’t the problem. The lack of critical engagement is the problem, and that problem predates electricity.
What This Means for Your Writing Practice
If you use AI in your writing process, the fix is stupidly simple. Read what it gives you. Actually read it. Ask yourself if it’s good. Ask yourself if it sounds like you. Ask yourself if your character would actually say that, or if the AI just produced the most statistically likely dialogue for the scenario. Push back. Regenerate. Edit. Do the work.
The authors who will thrive with AI are the ones who were already good at thinking critically about their own writing. The ones who could kill a darling without flinching. The ones who read their draft and thought “this sucks” and meant it as motivation, not despair.
If you were already outsourcing your creative judgment to writing forums, to craft books, to whatever the loudest voice on Twitter said this week… AI isn’t your problem. It’s just your newest symptom.
Sources
- Research finds AI users “scarily willing” to surrender their cognition to LLMs — Ars Technica coverage of UPenn cognitive surrender research
- Thinking, Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender — Original research paper by University of Pennsylvania researchers
