Three AI announcements dropped this morning and by the time I finished my coffee, I was genuinely giddy. Not because any single one was earth-shattering, but because they all pointed the same direction, and that direction is “everything you’re doing with AI is about to get way cheaper and way more useful.”
Also, Google wants to read your email now. We’ll get to that.
The Models Get Small
OpenAI released GPT-5.4 mini and nano this morning, and I know your eyes just glazed over because model names are the license plates of the AI world. Nobody cares. But you should care about these two.
GPT-5.4 mini costs $0.75 per million input tokens. Nano costs $0.20. When GPT-4 launched in 2023, you were paying $30 per million input tokens for something significantly dumber than what you can now get for less than a dollar.
That’s not an incremental price drop. That’s a cliff.
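To make the cliff concrete, here's some back-of-the-envelope math. The word-to-token ratio is my assumption (roughly 1.33 tokens per English word is a common rule of thumb), and this only counts input tokens, not output:

```python
# Rough cost to have a model read one full-length manuscript.
# Assumption (mine, not OpenAI's): ~1.33 tokens per English word.

WORDS = 100_000                      # a typical full-length novel
TOKENS = int(WORDS * 1.33)           # ~133,000 input tokens

PRICE_PER_MILLION = {
    "GPT-4 (2023)": 30.00,           # dollars per million input tokens
    "GPT-5.4 mini": 0.75,
    "GPT-5.4 nano": 0.20,
}

for model, price in PRICE_PER_MILLION.items():
    cost = TOKENS / 1_000_000 * price
    print(f"{model}: ${cost:.2f} to read the whole manuscript")
```

That's about $4 to read your novel in 2023 versus a dime on mini and pocket lint on nano today. At those prices, a tool can afford to re-read your entire manuscript on every request.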
In practical terms, the AI tools you’re already using (Sudowrite, Novelcrafter, ChatGPT, whatever you’ve built into your workflow) are about to get faster and cheaper to run. Not because the tool makers are feeling generous, but because their costs just cratered. Some will pass the savings along. Some will use the headroom to add features that weren’t economically viable before. Either way, you benefit.
The more interesting thing is what OpenAI is calling “subagent” architecture. A big, expensive model handles the hard thinking, the planning and judgment calls, while smaller, cheaper models sprint through the grunt work in parallel. Think of it like a head editor who delegates research and fact-checking to a team of fast interns. The editor is still expensive. The interns are practically free. (And occasionally hallucinate. But that’s a whole other article.)
Your AI writing assistant six months from now won’t be one model doing everything. It’ll be a system: a capable model thinking through story structure while cheap models handle continuity checks and summaries in the background. More capability for less money.
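If the editor-and-interns framing feels abstract, here's a toy sketch of the shape. The function names and stand-in "models" are mine; in a real subagent system each function would be an API call to an expensive or cheap model, but the division of labor looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of a subagent pipeline. The "models" are stand-in
# functions; real systems replace each with an actual model call.

def expensive_planner(task: str) -> list[str]:
    # The big model does the hard thinking once: break the job
    # into small, independent chunks of grunt work.
    return [f"summarize chapter {i}" for i in (1, 2, 3)] + [
        "check continuity of character names"
    ]

def cheap_worker(subtask: str) -> str:
    # A small, fast model grinds through one chunk.
    return f"done: {subtask}"

def run(task: str) -> list[str]:
    subtasks = expensive_planner(task)     # one expensive call
    with ThreadPoolExecutor() as pool:     # many cheap calls, in parallel
        return list(pool.map(cheap_worker, subtasks))

print(run("revise draft of book three"))
```

One pricey planning call, four nearly free worker calls running at once. That ratio is the whole economic argument.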
The Personal AI Gets Free
Google announced that its Personal Intelligence feature, previously locked behind paid tiers, is now free for all US users. Personal Intelligence connects Gemini to your Gmail, Google Photos, YouTube history, and other Google apps so it can tailor responses to you without you having to explain your entire life story in every prompt.
If you’re an indie author who lives in Google’s ecosystem (and statistically, a lot of you do), pay attention.
An AI assistant that actually knows your context has been theoretically possible for a while. Now it’s practically available. Instead of telling an AI “I write cozy mysteries set in the Pacific Northwest and I’m working on the third book in a series where…” every single time, a personalized AI already has that context. It’s read your emails to your editor and seen what you’ve been researching. It knows what stage of the publishing process you’re in because it can see your calendar.
That’s genuinely useful and also a little creepy. I say that as someone who likes AI.
Google says it’s opt-in, that they don’t train on your Gmail inbox or photo library, and that you can disconnect apps at any time. I’d encourage you to actually read those privacy settings before you flip the switch, though. “We don’t train on your inbox” and “we don’t use your inbox data at all” are two very different sentences, and the distinction matters.
AI personalization is moving from premium feature to baseline expectation. Within a year, an AI tool that doesn’t know anything about you will feel as clunky as a website that makes you re-enter your shipping address every time you order.
Everyone’s a Coder Now (Sort Of)
The Vergecast ran a segment today with Paul Ford, one of the sharpest tech writers around, talking about his experience with “vibe coding”: using AI coding tools like Claude Code to build software projects despite not being a professional developer. He’s building more than ever. He’s also feeling conflicted about it.
This one hits close to home.
I’ve talked to indie authors over the past year who have used AI coding tools to build their own book launch dashboards, automate their newsletter workflows, create custom formatting scripts, and even prototype apps for their readers. Things that would have required hiring a developer two years ago. These aren’t people who went to coding bootcamp. They’re people who described what they wanted to an AI and iterated until it worked.
That’s genuinely powerful. But Ford’s emotional conflict is worth sitting with. When a tool makes something radically easier, it changes your relationship with the craft involved. Authors know this tension intimately, because it’s the exact same one we’ve been wrestling with around AI writing tools. The capability is real. The unease is also real. Both things get to be true.
The part I find most relevant for us is the “managing agents” framing. Ford and the Vergecast hosts talk about how professional developers are spending less time writing code and more time directing AI agents and reviewing their output. That sounds a lot less like traditional coding and a lot more like… editing. Which is basically what authors already do when they work with AI writing tools.
You’re ahead of the curve and you didn’t even know it ;)
And Then There’s DLSS 5
This one isn’t directly about writing, but it’s too good to skip.
Nvidia announced DLSS 5, which takes their existing frame-upscaling technology for video games and cranks it into full generative AI territory. Instead of just sharpening what’s already there, it now reimagines lighting, textures, and materials using AI. The result, according to basically every gamer who’s seen the demos, looks like someone ran a AAA game through an Instagram beauty filter. Flat shadows. Waxy skin. Blown-out highlights. Every environment looking like it was lit by a real estate photographer.
The gaming community’s reaction was immediate and brutal. (As a PC gamer, I felt this one in my soul.)
Nvidia built something technically impressive and deployed it in a way that ignores what the people who actually use it care about. Gamers don’t want “more realistic.” They want art direction, the specific aesthetic that the game’s artists intended. The AI steamrolled right over that, and the backlash was instant.
Same energy as when someone uses AI to generate a whole book without any editorial judgment. The AI produces text that reads fine on a technical level but has zero personality. Capability without taste just produces slop, and people can tell. They will always be able to tell.
Every author is going to have access to the same cheap models and personalized tools soon. None of that will be the differentiator. Your weird brain and your specific creative vision will be.
Sources
- Now everyone in the US is getting Google’s personalized Gemini AI — The Verge’s coverage of Google’s Personal Intelligence expansion to free-tier users
- Introducing GPT-5.4 mini and nano — OpenAI’s announcement of smaller, faster GPT-5.4 model variants with pricing and benchmarks
- The future of code is exciting and terrifying — The Vergecast episode featuring Paul Ford on vibe coding and the changing nature of software development
- Gamers react with overwhelming disgust to DLSS 5’s generative AI glow-ups — Ars Technica’s report on the gaming community’s backlash against Nvidia’s generative AI rendering
