AI companies are hiring improv actors to teach their models how human emotion works, and honestly? This makes more sense than anything I’ve heard in AI training news all year.
Handshake, a training data company that works with OpenAI and other major labs, posted a job listing inviting performers to participate in paid improv sessions over video. The goal is to teach AI systems how to recognize and shift between emotions in ways that feel genuinely human. If you’re an author who uses AI for dialogue or character work, this one matters.
What’s Actually Happening
Handshake pairs performers together, gives them a light prompt or scenario, and lets them improvise scenes. The sessions are unscripted and open-ended, with performers encouraged to explore characters and respond naturally.
The listing specifically calls for “emotional awareness,” defined as the “ability to recognize, express, and shift between emotions in a way that feels authentic and human.” It also asks for “interactions that feel grounded, human, and fun to play.”
Pay averages $74 per hour, though The Verge has previously reported that rates for these training gigs often drop after the initial sign-up period. Handshake declined to comment on specifically what the training data will be used for.
Why AI Labs Need Performers
This hiring push reflects a broader trend in AI. Models are often described as “jagged,” meaning they can handle surprisingly complex tasks while stumbling on seemingly simple ones.
Emotional nuance is one of those persistent gaps.
From a training data perspective, AI models have ingested enormous amounts of written text, but text is a lossy format for emotion. A novel might describe a character’s anger beautifully, but the model is learning from the description, not from the experience of shifting into anger, holding it, and letting it evolve naturally through a scene. That’s what improv actors do instinctively.
By recording performers as they react to and challenge each other in real time, AI labs get something their text datasets can't provide: dynamic emotional data with natural transitions. Not "character feels sad, then character feels angry," but the messy, overlapping, contradictory way real humans actually move through feelings.
Handshake’s demand for training data tripled last summer, and the company surpassed a $150 million run rate by November 2025. They’ve been hiring professionals across dozens of fields to fill knowledge gaps in AI models. Actors and improvisers are the latest addition.
What This Means for Your AI Writing Tools
If you’ve ever asked ChatGPT, Claude, or Sudowrite to write dialogue and gotten something that felt technically correct but emotionally flat… yeah. This is the kind of training designed to fix that.
AI writing assistants pull from what their underlying models understand about human interaction. When those models have a shallow grasp of emotional dynamics (how tension builds in a conversation, how people deflect when they’re hurt), the dialogue they generate reads like a summary of feelings rather than an actual exchange between real people. You know the vibe. “She felt a pang of sadness.” Cool, thanks, very helpful.
Training on live improv could help models internalize the rhythm and unpredictability of genuine emotional exchange. That doesn’t mean your AI assistant will suddenly write like Elmore Leonard, but it could mean the gap between “usable first draft” and “needs a complete rewrite” gets a little narrower.
Some practical context on where things stand:
- This is early-stage work. We don’t know which models will use this data or when improvements might show up in consumer tools. Don’t expect your next ChatGPT session to feel dramatically different tomorrow.
- Multimodal is the direction. AI labs have been investing heavily in voice interactions with realistic inflections, and training on performed emotion fits that trajectory. OpenAI’s Advanced Voice Mode, Anthropic’s Claude voice beta, and xAI’s Grok voice chat all reflect this push. Better emotional understanding in the base model improves everything built on top of it, including text generation.
- Improv teaches reaction and spontaneity. It doesn’t teach the slow internal shift of a character across 300 pages. Keep that gap in mind when setting expectations for AI dialogue tools.
The Performer Side
Not everyone in the improv community is thrilled. Reddit discussions have ranged from cautious curiosity to calling the arrangement “dystopian.” Some performers worry they’re training systems that will eventually replace human creative work. One commenter’s plan was simply “to sabotage the inputs.” (Points for honesty, I guess.)
Those concerns aren’t unreasonable. Many white-collar professionals have expressed similar worries about AI training work, feeling like they’re accelerating their own obsolescence.
For authors, the dynamic is different. You're not being asked to train the model. You're on the other side of the equation: someone who benefits from better emotional intelligence in AI tools. But the improvements you'll eventually see in your writing assistant exist because a performer somewhere did real creative work to make them possible. That's worth sitting with for a minute.
When you’re evaluating AI writing tools over the coming months, pay attention to how they handle emotional transitions in dialogue. That’s likely where this kind of training shows up first.
Sources
- AI companies want to harvest improv actors’ skills to train AI on human emotion — The Verge’s reporting on Handshake’s improv actor recruitment for AI training
- Handshake AI Improv Actor Opportunity — Original job listing referenced in the reporting
- White-collar workers worry about training their AI replacements — The Verge’s coverage of professional concerns about AI training work