I’ve been following the AI video space pretty closely, and honestly, [Runway Gen-4.5](https://runwayml.com/research/introducing-runway-gen-4.5) caught me off guard. Not because another model dropped — that happens every other week — but because this one actually delivers on the hype. With an Elo score of 1,247 on the [Artificial Analysis Text-to-Video leaderboard](https://artificialanalysis.ai/leaderboards/video-generation), it sits above Google’s Veo 3 (1,226) and OpenAI’s Sora 2 Pro (1,206). Those aren’t vanity metrics either — the rankings come from blind A/B tests where real humans compare outputs across motion quality, visual fidelity, and prompt adherence.
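To put those numbers in perspective, an Elo gap translates directly into an expected head-to-head win rate. Here’s a quick sketch using the standard Elo expected-score formula; I’m assuming the leaderboard follows conventional Elo math (Artificial Analysis doesn’t publish its exact formula), so treat the percentages as illustrative.

```python
# Convert Elo rating gaps into expected head-to-head win rates using
# the standard Elo expected-score formula. Assumes the Artificial
# Analysis leaderboard uses conventional Elo math; the ratings below
# are the published scores, the win rates are illustrative.

def elo_win_probability(rating_a: float, rating_b: float) -> float:
    """Expected probability that A beats B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

gen_45, veo_3, sora_2_pro = 1247, 1226, 1206

print(f"Gen-4.5 vs Veo 3:      {elo_win_probability(gen_45, veo_3):.1%}")       # ~53.0%
print(f"Gen-4.5 vs Sora 2 Pro: {elo_win_probability(gen_45, sora_2_pro):.1%}")  # ~55.9%
```

So the 21-point lead over Veo 3 works out to winning roughly 53% of blind comparisons, and about 56% against Sora 2 Pro. A real edge, but a narrow one, which is exactly what you’d expect at the top of a competitive leaderboard.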
So what makes Gen-4.5 worth paying attention to? For starters, native audio generation. You’re no longer stitching together video and sound in post — the model generates synchronized dialogue, ambient audio, and soundtracks right alongside the visuals. Then there’s multi-shot sequencing with character consistency, meaning you can produce longer, coherent scenes (up to a minute) where the same character actually looks like the same person across different angles and cuts. That’s been a pain point in AI video for a while, and Runway handles it surprisingly well.
The timing of this release is worth noting too. Runway just closed a [$315 million Series E](https://techcrunch.com/2026/02/10/ai-video-startup-runway-raises-315m-at-5-3b-valuation-eyes-more-capable-world-models/) led by General Atlantic, pushing their valuation to $5.3 billion, nearly double where the Series D left it. Nvidia, Fidelity, Adobe Ventures, and AMD Ventures all participated. The money is earmarked for what Runway calls “world models”: AI systems that build internal representations of environments to predict and plan for future events. It’s an ambitious bet, but with Gen-4.5 as proof of concept, investors clearly bought in.
The model has been getting serious attention across [AI Business](https://aibusiness.com/generative-ai/runway-releases-gen-4-5-video-model), [TechCrunch](https://techcrunch.com/2026/02/10/ai-video-startup-runway-raises-315m-at-5-3b-valuation-eyes-more-capable-world-models/), and the broader creator community. There’s a solid [discussion thread on Hacker News](https://news.ycombinator.com/item?id=46111113) with plenty of prompt examples and real user showcases if you want to see what people are actually making with it.
Is it perfect? No. The model still fumbles object permanence occasionally: things vanish between frames, and sometimes the physics feels slightly off in ways that break the illusion. But compared to where AI video was even six months ago, the gap is shrinking fast. Gen-4.5 is available on all paid [Runway plans](https://runwayml.com/) at 25 credits per second of video, which isn’t cheap, but for anyone doing serious creative work, it’s hard to argue with the output quality right now.
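Since pricing comes up in every discussion of these tools, here’s the back-of-envelope math as a small sketch, using the 25-credits-per-second rate mentioned above. The dollar figure depends on your plan’s effective price per credit, so I’ve left it as a parameter; the $0.01 in the example is a hypothetical placeholder, not an official rate.

```python
# Rough cost estimator for Gen-4.5 generations at 25 credits/second.
# price_per_credit is an assumption: fill it in from your own plan,
# since Runway's effective credit cost varies by tier.

CREDITS_PER_SECOND = 25

def estimate_cost(clip_seconds: float, price_per_credit: float) -> tuple[int, float]:
    """Return (credits needed, dollar cost) for a clip of the given length."""
    credits = round(clip_seconds * CREDITS_PER_SECOND)
    return credits, credits * price_per_credit

# Example: a 10-second clip at a hypothetical $0.01 per credit
credits, dollars = estimate_cost(10, price_per_credit=0.01)
print(f"{credits} credits (~${dollars:.2f})")  # 250 credits (~$2.50)
```

At that rate, a full one-minute multi-shot sequence costs 1,500 credits per attempt, which is why the “isn’t cheap” caveat matters if your workflow involves a lot of iteration.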
