Anthropic accidentally showed us the future this week because somebody forgot to exclude a file from their software release. One file. That’s all it took.
The Claude Code source leak dropped 512,000 lines of readable source code into the wild, complete with 44 hidden features nobody was supposed to see yet. A debug file that should’ve been stripped before publishing went out the door at 59.8 MB, and within hours the entire codebase was copied across GitHub. Developers rewrote the whole thing from scratch so fast it became one of the fastest-growing repositories in GitHub history. Anthropic’s response was basically “oops, human error, not a security breach,” followed by legal takedown notices that accidentally nuked thousands of unrelated GitHub repos in the crossfire. You know, as damage control goes… not great.
But the code itself is way more interesting than the drama.
Kairos, the AI That Never Closes
The biggest reveal is something called Kairos (Greek for “at the right time”), referenced over 150 times in the code. It’s a mode where Claude keeps running in the background after you walk away. It checks in periodically to see if there’s something it should be doing. It monitors your projects for changes. It refreshes every five minutes, like a very diligent intern who never sleeps.
And it has a setting called “proactive.”
That means it can bring you things you didn’t ask for but probably need to see. If you’re an author using Claude to help manage a series bible or track continuity across books, imagine your AI noticing a contradiction you introduced two chapters ago and flagging it before you even open the app. You didn’t ask. It was just paying attention.
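The leak describes behavior, not implementation, but the check-in loop above is easy to picture. Here's a minimal sketch of what a Kairos-style background tick might look like; `Project`, `check_in`, and `pending_changes` are names I've invented for illustration, and only the five-minute cadence and the "proactive" setting come from the leak:

```python
import time
from dataclasses import dataclass, field

CHECK_INTERVAL_SECONDS = 5 * 60  # the five-minute refresh cadence from the leak


@dataclass
class Project:
    """Hypothetical stand-in for whatever Kairos actually watches."""
    name: str
    pending_changes: list = field(default_factory=list)


def check_in(projects, proactive=False):
    """One background tick: scan projects, return anything worth surfacing.

    With proactive=False, changes just accumulate until you ask; with
    proactive=True, they get surfaced without being asked.
    """
    surfaced = []
    for project in projects:
        if project.pending_changes and proactive:
            surfaced.extend((project.name, c) for c in project.pending_changes)
            project.pending_changes.clear()
    return surfaced


def run_forever(projects, proactive=False):
    """The always-on part: check in, sleep five minutes, repeat."""
    while True:
        check_in(projects, proactive=proactive)
        time.sleep(CHECK_INTERVAL_SECONDS)
```

The interesting design choice is that `proactive` is a flag, not the default: the diligent-intern behavior is opt-in.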
The Memory System
AI memory exists already, sure. Claude has it, ChatGPT has it. But what we have right now is basically sticky notes. Short fragments saved between sessions, useful but shallow. What the Kairos code reveals is a memory system designed to build a comprehensive picture of who you are over time. What you’re working on and how you prefer to collaborate. What frustrates you and what you want more of.
For authors, that’s the difference between re-explaining your magic system every Monday morning and picking up exactly where you left off on Friday. Your AI remembering that you were wrestling with a subplot last week, that you brainstormed alternatives and seemed most excited about one of them in particular.
AI Dreams (Seriously)
AutoDream is a background process that kicks in while you’re away from the keyboard. When you go quiet or tell Claude to sleep, it performs what Anthropic literally calls a “reflective pass” over its memory files. The process has four phases. First it orients, scanning what it already knows about you. Then it gathers, checking recent conversations for outdated memories and things it missed. Then it consolidates, merging new stuff in, correcting things that are no longer true, cleaning up vague notes into specific ones. Finally it prunes, trimming the fat so the whole system stays lean.
It won’t run constantly, either. It waits until at least twenty-four hours and five sessions have passed since the last time, and it makes sure no other consolidation process is already running. Thoughtful little thing.
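The gating rules and the four phases are concrete enough to sketch. This is my reconstruction, not the leaked code; the `memory` API is hypothetical, while the thresholds (twenty-four hours, five sessions, no concurrent consolidation) are straight from the description above:

```python
import time

MIN_IDLE_SECONDS = 24 * 60 * 60  # at least a day since the last dream
MIN_SESSIONS = 5                 # and at least five sessions in between


def should_dream(last_dream_ts, sessions_since, consolidation_running, now=None):
    """Gate check before an AutoDream pass runs."""
    now = time.time() if now is None else now
    return (now - last_dream_ts >= MIN_IDLE_SECONDS
            and sessions_since >= MIN_SESSIONS
            and not consolidation_running)


def auto_dream(memory):
    """The four phases, against a hypothetical memory-store API."""
    context = memory.orient()         # scan what it already knows about you
    updates = memory.gather(context)  # check recent sessions for gaps and stale notes
    memory.consolidate(updates)       # merge, correct, sharpen vague entries
    memory.prune()                    # trim the fat so the store stays lean
```

Nothing exotic here, and that's the point: the cleverness is in deciding *when* to run, not in the consolidation itself.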
Your AI is organizing its notes about you while you sleep.
I find this genuinely fascinating. Anyone who’s used AI for a long project knows the frustration of re-establishing context every single session. AutoDream is Anthropic looking at that problem and actually engineering a fix instead of slapping a “memory” label on a glorified notepad.
Meanwhile, Google Wants to Make You a Movie
Google upgraded its Vids tool with Veo 3.1 and directable AI avatars, plus Lyria 3 for AI-generated music. You get 10 free video generations per month. The videos are eight seconds long at 720p.
Eight seconds at 720p…
For book trailers and social media marketing, this is kind of getting there?
The avatar feature is interesting for authors who want a video presence without being on camera. You can customize how the avatars look and how they talk, with support for eight languages. But right now it’s more “animated birthday card” than “compelling book trailer.” Workspace AI Ultra subscribers get up to 1,000 clips per month and access to Lyria’s music generation, which lets you create 30-second to 3-minute compositions from a vibe prompt. That music feature might honestly be useful for trailer audio before the video part catches up.
Keep an eye on it. Video AI is moving fast. The day this produces a decent 30-second book trailer from a text prompt, a lot of authors’ marketing workflows change overnight.
The Weird Stuff They Didn’t Want You to See
The memory and background-agent features are the headline, but the leak exposed some genuinely wild internal stuff too.
The leak spoiled Anthropic’s April Fools’ surprise by exactly one day. BUDDY is a Tamagotchi-style virtual pet that lives in your terminal, and it went live on April 1st as planned. Type /buddy in Claude Code and you get randomly assigned one of eighteen species, from a duck to a dragon to an axolotl to a capybara. Rarity tiers from Common to Legendary with a 1% drop rate. Shiny variants. Five RPG stats, one of which is literally called Snark. It’s supposed to be in preview through April 7th with a full launch in May, but it works right now and it’s incredibly cute. I got a Common Urchin and I’m unreasonably attached already.
There’s a hidden defense system that quietly slips fake information into responses when it suspects a competitor might be copying Claude’s outputs to train their own AI. Basically, booby-trapped answers. That is spicy.
“Undercover Mode” strips all internal codenames and Anthropic references when Claude is working on outside projects, with the ominous note “there is NO force-OFF.” A built-in mood detector scans for keywords like “wtf” and “this sucks” so it knows when you’re frustrated without having to think too hard about it. Oh, and a single code file clocked in at 5,594 lines, with one function spanning 3,167 of them. (Imagine a novel where one chapter is 300 pages and contains 12 layers of nested flashbacks. That’s what Anthropic’s engineers shipped.)
Even Anthropic’s engineers are human. Comforting, honestly. ;)
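That mood detector is plausibly nothing more than a keyword scan. A minimal sketch of the idea; only “wtf” and “this sucks” are from the leak reporting, the rest of the pattern list is my own guess:

```python
import re

# Only "wtf" and "this sucks" are quoted from the leak; "ugh" is my assumption.
FRUSTRATION_PATTERNS = [
    r"\bwtf\b",
    r"\bthis sucks\b",
    r"\bugh+\b",
]
FRUSTRATION_RE = re.compile("|".join(FRUSTRATION_PATTERNS), re.IGNORECASE)


def user_seems_frustrated(message: str) -> bool:
    """Cheap keyword scan: no sentiment model, no thinking too hard."""
    return bool(FRUSTRATION_RE.search(message))
```

A compiled regex over a short phrase list costs essentially nothing per message, which is presumably why you'd do this instead of running a sentiment classifier.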
Sources
- The Claude Code Source Leak: 512,000 Lines, a Missing .npmignore — Layer5 deep dive into the leak’s technical details and community response
- Claude Code Source Leak: fake tools, frustration regexes, undercover mode — Alex Kim’s analysis of hidden features in the leaked codebase
- Claude Code Leaked Source: BUDDY, KAIROS & Every Hidden Feature — WaveSpeedAI comprehensive feature breakdown
- Anthropic took down thousands of GitHub repos trying to yank its leaked source code — TechCrunch on Anthropic’s DMCA takedown mishap
- Google Vids updates include high-quality video generation — Google’s announcement of Vids AI upgrades with Veo 3.1 and Lyria 3
