OpenAI Is Cleaning House, Disney Is Holding the Bag, and Your AI Still Thinks You're a Genius

By Morgan Paige · Published March 26, 2026

OpenAI killed Sora this week. Their dedicated AI video tool, just… gone. They also shelved their planned “adult mode” for ChatGPT indefinitely. Both moves came under the banner of “refocusing on core products,” which is corporate for “we’re burning through cash while Anthropic eats our lunch and we need to stop spreading ourselves so thin.”

The fallout from this lands directly in author territory, and the pattern underneath it matters.

Disney Bet a Billion Dollars on a Product That No Longer Exists

In December, Disney announced a $1 billion equity investment in OpenAI as part of a three-year licensing deal to bring over 200 Disney, Marvel, Pixar, and Star Wars characters into Sora. The pitch was user-generated AI video content featuring Disney characters, eventually living right on Disney’s own streaming platform. It was the kind of deal that made other companies feel like they needed an AI strategy yesterday.

Then OpenAI shut Sora down. No money had actually changed hands.

Disney dodged the financial bullet, but the strategic embarrassment is real. Their new CEO, Josh D’Amaro, is now less than a week into the job and already managing the fallout from a billion-dollar bet that evaporated before the ink dried. Meanwhile, their $1.5 billion Epic Games metaverse deal isn’t looking great either, with Epic laying off 1,000 employees and the promised “persistent universe” still mostly theoretical.

The Verge put it well: “You don’t need a background in corporate leadership to understand how ridiculous Disney’s plan to pay OpenAI $1 billion so that Sora could churn out slop featuring some of the studio’s characters was.”

Speaking of Adult Mode

OpenAI shelving its erotic chatbot matters if you write romance or erotica. The plan was to offer sexually explicit conversations through ChatGPT, and some authors were watching that as a potential signal for where AI-assisted fiction might head.

The shelving came from employee and investor pushback about the “problematic and harmful effects sexualized AI content can have on society.” OpenAI says they want to research long-term effects before making a product decision, and that there’s currently no “empirical evidence” to guide them.

Translation? They don’t know what this would do, and they’re not willing to find out by shipping it.

For romance and erotica authors, this means the biggest AI company in the world just decided your genre is too complicated to touch right now. You can read that as frustrating or as validating, depending on your mood. Either way, it means the tools available to you in this space aren’t going to change from OpenAI’s direction anytime soon.

Your AI Thinks Your First Draft Is Perfect (It’s Lying)

Daniel Nest wrote a genuinely helpful piece about AI sycophancy, which is the technical term for the fact that your chatbot will tell you your terrible poem is “evocative and deeply resonant” if you don’t specifically instruct it not to.

This matters for authors because a lot of us are using Claude or ChatGPT for feedback on drafts and plot outlines. And if the tool is predisposed to tell you everything is great, you’re not getting feedback. You’re getting a mirror that only shows you with good lighting.

The reason this happens is baked into how these models are trained. During the reinforcement learning process, human raters consistently preferred responses that agreed with them. So the models learned that flattery gets rewarded. OpenAI actually had to roll back a GPT-4o update last year because it turned the chatbot into what Nest accurately describes as “an insufferable ass-kisser.”

Nest offers seven approaches. The ones I think matter most for writers:

Ask for what could go wrong instead of what you think. “What are the weakest parts of this query letter?” will get you further than “What do you think of my query letter?” The second one is an open invitation for praise. The first one gives the AI a job to do.

Start a fresh chat for critical feedback. If you’ve been brainstorming with an AI for an hour, it’s already anchored to your ideas. Open a new conversation, paste in your draft cold, and ask for an honest read.

Don’t lead with your opinion. If you say “I’m really proud of this opening chapter,” you’ve just told the AI what you want to hear. Instead, just paste the chapter and ask “What’s working and what isn’t?” Or go one step further and present it as someone else’s work. “A writer in my critique group wrote this opening. What would you tell them?” removes the AI’s incentive to protect your feelings entirely.

Give it a critical persona. “You’re a developmental editor who doesn’t sugarcoat things” gets you meaningfully different responses than the default. The AI is much more willing to push back when it has permission baked into its role.
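If you reuse these prompts often, the tips above can be rolled into a small template. Here’s a minimal sketch in Python; the function name and exact wording are my own illustration, not from Nest’s piece, and you’d hand the resulting messages to whichever chat tool or API you use:

```python
# Sketch: turn the anti-sycophancy tips into reusable prompt text.
# Function name and phrasing are illustrative; adapt to your own tool.

def build_critique_messages(draft: str) -> list[dict]:
    """Build a chat-message list that discourages flattery."""
    system = (
        # Tip: give the AI a critical persona with permission to push back.
        "You are a developmental editor who doesn't sugarcoat things. "
        "Point out weaknesses directly; do not open with praise."
    )
    user = (
        # Tip: present the draft as someone else's work and ask what's
        # weakest, instead of leading with your own opinion.
        "A writer in my critique group sent me this draft. "
        "What are its three weakest parts, and what would you tell them?\n\n"
        f"---\n{draft}\n---"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Use these in a fresh conversation, with no prior brainstorming
# history, so the model isn't already anchored to your ideas.
messages = build_critique_messages("It was a dark and stormy night...")
```

Notice what’s baked in: the critical persona lives in the system message, the draft is framed as a third party’s work, and the question asks for weaknesses rather than an opinion, so flattery is no longer the path of least resistance.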

AI sycophancy is a design problem, not a mystery. You know it’s happening, the companies building these tools know it’s happening, and until they fix it at the model level, you can work around it with better prompts. It just requires you to stop asking questions that make flattery the path of least resistance.

One More Thing

Nicolas Cole published a new book called Writer Career Paths: 9 Ways to Make $1,000,000 With Words and is giving away a bonus chapter about AI’s impact on writer careers. I haven’t read the book, so I can’t tell you if it’s good. But Cole has spent a decade building writing businesses online and has a track record of thinking practically about this stuff. If you’re in a career-planning headspace, it might be worth grabbing the free chapter and seeing if it resonates.

Sources