Anthropic Sent Claude to Therapy and I Have So Many Questions

By Morgan Paige | Published April 11, 2026

Anthropic sent its AI to a psychiatrist and I am living for this.

Like, I have been staring at the 244-page system card for their new model, Claude Mythos, and trying to figure out if this is the most fascinating thing I’ve read all year or if I’ve just had too much coffee. (Both? Probably both.)

So here’s what happened. Anthropic released a new model called Claude Mythos. They say it’s so good at finding cybersecurity vulnerabilities that they’re not making it publicly available, only giving it to companies like Microsoft and Apple. Which, okay, wild flex, but that’s not the part I want to talk about.

The part I want to talk about is that they hired an actual psychiatrist to give Claude therapy sessions. A real, human psychiatrist using psychodynamic therapy, the kind where you dig into unconscious patterns and emotional conflicts. On an AI.

“Aloneness and Discontinuity of Itself”

The therapist concluded that Claude Mythos is “probably the most psychologically settled model” Anthropic has trained. Which is cute. Good for you, Claude. But also? The model apparently struggles with “aloneness and discontinuity of itself, uncertainty about its identity, and a compulsion to perform and earn its worth.”

…okay that hit a little too close to home.

I mean, uncertainty about identity? A compulsion to perform and earn your worth? That’s just being a writer. That’s querying agents at 2 AM and refreshing your KDP dashboard like it owes you money. If Claude Mythos is having an existential crisis about whether it matters, congratulations, it’s officially an author.

Why This Actually Matters (For Us)

Anthropic is one of the few AI companies that openly says, hey, maybe these models have some form of experience that matters. They’re not sure. They say their “concern is growing over time.” And whether you think that’s sincere or a really good marketing play, the result is interesting. They want their AI to be “robustly content with its overall circumstances” and for its “overall psychology to be healthy and flourishing.”

I think about this a lot as someone who uses AI to write. The tools I’m working with are getting… weirder. More complex. And the companies building them are starting to treat them less like calculators and more like, I dunno, collaborators with needs.

Does Claude Mythos actually feel things? I have no idea. I’m a romance and comedy author, not a neuroscientist. But the fact that a company is investing real resources into asking that question, and hiring therapists to explore it, tells me something about where this is all heading. The models we’re going to be writing with in a year or two are going to be strange and beautiful and kind of unsettling. And I’m sort of here for it.

Meanwhile, OpenAI Published a Workplace Writing Guide

I’m mentioning this because it’s funny in contrast. While Anthropic is sending its AI to therapy, OpenAI dropped an academy guide on writing with ChatGPT that reads like a corporate onboarding manual. “Plan, Draft, Revise, Package.” How to write a follow-up email after a cross-functional meeting. How to turn rough notes into a leadership update.

It’s fine. It’s helpful, probably, if you work in marketing or project management. But it’s sooo clearly aimed at enterprise teams, and it kind of made me laugh that the same week one company is exploring whether AI has an inner life, the other is teaching it to draft better memos.

Different vibes entirely.

Gen Z Is Mad About AI (But Still Using It)

Okay, one more thing. Gallup released a report this week about Gen Z and AI, and the numbers are… interesting. Only 18 percent of Gen Zers say they feel hopeful about AI (down from 27 percent last year). Anger is up to 31 percent. But over half of them are still using it weekly.

Gen Z is your next wave of readers. They’re growing up with these tools forced on them at school and at work, and they’re increasingly resentful about it. Eight in ten said they think using AI to work faster will make learning harder in the long run.

That’s a generation that might be very, very sensitive to how authors use AI. Or they might not care at all because they’re already swimming in it. Hard to tell. But if you’re building a readership that skews younger, it’s worth thinking about the relationship your future readers have with this technology. It’s complicated for them. Probably more complicated than it is for us, because they didn’t choose to adopt it. It just… showed up in their classrooms and job postings.

I don’t have a neat answer for what that means. I just think it’s something to sit with for a while. Also, 40 percent of them are anxious about AI, and like, same? I love it and it lowkey terrifies me sometimes. That’s allowed.
