On March 24, OpenAI pulled the plug on Sora. The video generation tool that was supposed to change everything had been burning through an estimated $15 million per day in compute costs, had accumulated just $2.1 million in lifetime revenue, and had watched its user base collapse from a million to under 500,000. Disney, which had pledged a billion-dollar investment in OpenAI partly on the strength of Sora’s potential, found out about the shutdown less than an hour before the public announcement. They walked away from the deal.
Nine days later, on April 2, Google shipped the biggest update Google Vids has ever received. Veo 3.1 for video generation. Lyria 3 for music. Natural-language-controlled AI avatars. Direct YouTube publishing. And here’s the part that matters: the base tier is completely free. Every Google account on the planet — all three billion of them — now has access to AI video generation at zero cost.
The timing is almost surgical. TechRadar captured the moment in a headline: “Google is pushing AI video into ordinary life — just as OpenAI pulls Sora back.” By April 4, Google Vids 2.0 had climbed to the number one spot on Product Hunt with 241 upvotes, and tech outlets from TechRepublic to 9to5Google were running coverage. The AI video war didn’t end when Sora died. It just shifted to a company that can actually afford to give this technology away.
What Veo 3.1 Actually Does Inside Google Vids
The core video engine is Veo 3.1, Google’s latest video generation model, which has been steadily climbing the AI video benchmarks. Feed it a text prompt or upload a photo, and it generates an eight-second clip at 720p. That sounds short, but eight seconds is the standard unit for AI video right now. Runway Gen-4.5 and the late Sora operated in similar ranges. The interesting part is what happens within those eight seconds.
Veo 3.1 generates synchronized audio natively. Dialogue, ambient sounds, effects — all produced in the same forward pass as the video, without requiring a separate audio model or post-production step. When Google first showed off this capability in earlier Veo versions, it was locked behind premium tiers and standalone tools. Now it’s baked directly into a video editor that millions of Workspace users already touch every day.
Then there’s Lyria 3, Google’s music generation model. AI Pro and Ultra subscribers can generate custom soundtracks from 30 seconds up to three minutes long. Describe the mood, tempo, and genre in plain English, and Lyria 3 produces original music you can drop straight into your timeline. This used to require a separate subscription to something like Suno — which just crossed $300 million ARR doing exactly this. Google bundled the same capability into an existing product.
The third pillar is what Google calls AI avatars, and this is where the update gets genuinely ambitious. You can create virtual presenters and direct them using natural language prompts. Not just “stand here and talk” — you can tell the avatar to pick up a product, switch outfits, change scenes, interact with props. The system maintains visual and voice consistency across the entire video. Character consistency across shots was exactly what Runway Gen-4.5 built its reputation on, and now Google is shipping a version of it inside a free productivity tool.
One more detail worth flagging: there’s a new Chrome extension for screen recording and a direct export pipeline to YouTube. That second feature sounds minor until you think about what it means for the millions of small businesses, educators, and solo creators who already live inside Google’s ecosystem. They can go from text prompt to published YouTube video without opening a single additional app.
The Pricing Play That Should Worry Runway and Everyone Else
Here’s where Google Vids 2.0 turns from a product update into a market strategy.
The free tier gives every personal Google account 10 video generations per month. Ten Veo 3.1 clips with synchronized audio, at no cost whatsoever. For a teacher building lesson explainers, a startup founder making a product walkthrough, or a freelancer experimenting with AI content — that’s not a demo. It’s a usable production tool.
Google AI Pro at $19.99 per month unlocks Lyria 3 music generation, enhanced avatar controls, and a higher monthly generation cap. Google AI Ultra — currently $124.99 per month during a promotional window, normally $249.99 — turns everything up: Lyria 3 Pro for longer and higher-quality music, advanced scene direction for avatars, and significantly more generations. Enterprise customers on Workspace AI Ultra get up to 1,000 Veo video generations per month. That’s not experimentation. That’s a content production pipeline.
Now stack this against what everyone else charges. Runway’s standard plan runs $12 per month for 625 credits, which translates to roughly 25 seconds of Gen-4.5 video. Their Pro plan at $28 per month gives you more headroom, but you’re still paying per second of generated output. Kling 3.0 from Kuaishou has a free tier but caps quality and length hard on unpaid accounts. Pika, Luma, and the smaller players all operate on similar credit-based models where “free” means barely enough to evaluate the product.
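To make the gap concrete, here is a back-of-the-envelope cost-per-second comparison using the numbers cited above. The Runway conversion (625 credits for roughly 25 seconds of Gen-4.5 output) comes straight from the plans; treating every Google generation as a full eight-second clip is an assumption for illustration, not something Google publishes.

```python
# Rough dollars-per-second of generated video, given each plan's
# monthly price and an approximate monthly video quota in seconds.

def cost_per_second(monthly_price, seconds_per_month):
    """Dollars per second of generated video for a monthly plan."""
    if seconds_per_month == 0:
        raise ValueError("quota must be positive")
    return monthly_price / seconds_per_month

plans = {
    # name: (price in $/month, approx. seconds of video per month)
    "Runway Standard":     (12.00, 25),        # 625 credits ~= 25 s
    "Google free tier":    (0.00, 10 * 8),     # 10 Veo clips x 8 s each (assumed)
    "Workspace AI Ultra":  (249.99, 1000 * 8), # up to 1,000 clips x 8 s (assumed)
}

for name, (price, seconds) in plans.items():
    print(f"{name}: ${cost_per_second(price, seconds):.3f}/s")
```

Even at the non-promotional Ultra price, the assumed per-second cost lands well under Runway's roughly $0.48/s, and the free tier is by definition $0.00/s — which is the whole point of the bundling strategy.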
Google’s approach is structurally different. They’re not selling AI video as a standalone product. They’re folding it into an ecosystem that already holds your email, your documents, your spreadsheets, and your slide decks. The video generation is a feature of the platform, not a business unto itself. And that distinction is exactly what makes it threatening for pure-play video generation startups. Google doesn’t need Vids to generate revenue. They need it to make Workspace stickier — and they can subsidize the compute cost with ad revenue from the other side of the company.
Directing Avatars in Plain English
The avatar system deserves a closer look because it signals where AI video production is heading.
Most AI video tools today are prompt-to-clip generators. You type a description, cross your fingers, and get a video back. Google Vids 2.0 is introducing something closer to AI video direction. The avatar system lets you set up a scene, place a character in it, and then give iterative instructions about what happens next. “Have the presenter hold up the product and rotate it slowly.” “Change the background to a modern office.” “Switch to business casual and look directly at the camera.”
This is fundamentally different from typing “a person in a blue shirt holding a phone in an office” and hoping the model gets it right on the first try. You’re iterating on a persistent character in a persistent environment — making adjustments the way a director talks to an actor on set. Google says the system maintains visual consistency even as you change outfits, swap backgrounds, and move between shots.
The practical use cases write themselves. Corporate training without booking a studio. Product demonstrations without hiring talent. Localized marketing videos without reshooting in five languages. Sales pitches customized for individual prospects. These are all workflows that currently cost thousands of dollars per video in traditional production, and Google is offering a basic version of them for free.
Real limitations exist. The avatars are AI-generated faces, not yours — though that’s almost certainly coming. The eight-second clip limit means anything longer requires stitching multiple segments together. And the output quality, while impressive for a free tool, won’t fool anyone into thinking they’re watching footage shot by a human crew. But for the vast majority of business video use cases — where the real benchmark is “more engaging than a slide deck with bullet points read aloud” — this is more than good enough.
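For anyone hitting that eight-second ceiling today, the stitching workaround is mundane: download the segments and concatenate them. A minimal sketch using ffmpeg's concat demuxer follows — the file names are placeholders, ffmpeg must be installed, and stream-copying (`-c copy`) assumes all segments share the same codec, resolution, and frame rate, which clips from a single generator normally do.

```python
from pathlib import Path
import subprocess

def concat_list(clips):
    """Build the text manifest ffmpeg's concat demuxer expects:
    one "file '<path>'" line per segment, in playback order."""
    return "".join(f"file '{c}'\n" for c in clips)

def stitch_clips(clips, output="stitched.mp4"):
    """Join MP4 segments into one file without re-encoding."""
    Path("clips.txt").write_text(concat_list(clips))
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", "clips.txt", "-c", "copy", output],
        check=True,
    )

# Example (placeholder file names):
# stitch_clips(["intro.mp4", "demo.mp4", "outro.mp4"])
```

Hardly elegant, but it turns three eight-second generations into a 24-second video in one command — and it is exactly the kind of friction a platform with a built-in timeline editor, like Vids, removes.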
The Competitive Map After Sora’s Collapse
Sora’s death reshuffled the AI video market in ways that are still unfolding. The biggest players left standing are Runway, Kling, the open-source movement, and now Google Vids — each running a fundamentally different strategy.
Runway Gen-4.5 remains the quality benchmark for creative professionals. Its Elo score of 1,247 on the Artificial Analysis Text-to-Video leaderboard still sits above Veo 3.1, and its multi-shot sequencing with character consistency is genuinely best-in-class for narrative work. If you’re making something that needs to look cinematic, Runway is still the tool to beat.
Kuaishou’s Kling 3.0 carved out the speed-and-volume niche with 15-second clips and native audio that’s good enough for social content, particularly strong across Asian markets.
The open-source side is heating up fast. Lightricks shipped LTX-2.3 with 22 billion parameters right after Sora went down, generating video and audio in a single forward pass. Netflix just released Void, which scored 3.5x higher than Runway on specific blind-test benchmarks. When media companies start open-sourcing their own video models, the dynamics of this market change fast.
And then there’s Google, which isn’t even trying to compete on quality benchmarks. Google’s play is distribution. Three billion accounts. Zero friction. Free entry point. Integrated into the same workspace where people already spend their working hours. When the next hundred million people generate their first AI video, the overwhelming majority won’t do it on Runway or Kling or some open-source model running on a rented GPU. They’ll do it in the same browser tab where they were editing a Google Doc five minutes earlier.
That’s the real significance of Google Vids 2.0. It’s probably not the best AI video tool on raw output quality. It might not crack the top three on benchmark leaderboards. But it’s the first major platform to treat AI video generation as a default capability rather than a premium product. And if the last two decades of tech have made anything clear, it’s that “free, everywhere, and good enough” beats “best in class but behind a paywall” almost every single time.
OpenAI learned that at $15 million a day.