There’s a new name making waves across TechCrunch and PR Newswire this week, and it’s not another chatbot trying to write your emails faster. Meet Poetiq, a Mountain View-based startup that just pulled off something genuinely remarkable in the AI world — and they did it with a team small enough to fit around a single dinner table.
Founded in June 2025 by former Google DeepMind scientists Shumeet Baluja and Ian Fischer, Poetiq represents a fundamentally different approach to artificial intelligence. Instead of building yet another large language model from scratch, they’ve created what they call a “meta-system” — essentially a layer of intelligence that sits on top of existing LLMs like GPT, Claude, or Gemini and makes them significantly smarter at solving complex problems.
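Poetiq hasn’t published the internals of that meta-system, but the general idea of a layer that sits above off-the-shelf models is easy to sketch. The toy Python below shows one common pattern: ask several backends the same question, then let a judge model pick the best answer. Every name in it, from meta_answer to the stand-in models, is a hypothetical placeholder for illustration, not Poetiq’s API.

```python
# Hypothetical sketch of a "meta-layer" over existing LLMs (not Poetiq's code).
# Each backend is just a callable that maps a prompt string to a response string;
# the meta-layer queries every backend and asks a judge model to pick the winner.
from typing import Callable, Dict

LLM = Callable[[str], str]   # placeholder type: prompt in, completion out

def meta_answer(question: str, backends: Dict[str, LLM], judge: LLM) -> str:
    """Collect a candidate answer from every backend, then let a judge choose."""
    candidates = {name: model(question) for name, model in backends.items()}
    ballot = "\n".join(f"[{name}] {answer}" for name, answer in candidates.items())
    return judge(
        f"Question: {question}\n"
        f"Candidate answers:\n{ballot}\n"
        "Reply with the single best answer, verbatim."
    )

if __name__ == "__main__":
    # Stand-in "models" so the sketch runs without any API keys.
    fake_gpt = lambda prompt: "Paris"
    fake_claude = lambda prompt: "Paris, France."
    print(meta_answer("What is the capital of France?",
                      {"gpt": fake_gpt, "claude": fake_claude},
                      judge=fake_gpt))
```

The intelligence in a design like this lives in the orchestration rather than in any single model, which is why such a layer can, at least in principle, swap in a newer frontier model the day it ships.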
The timing couldn’t be more interesting. Just this January, Poetiq announced a jaw-dropping $45.8 million seed funding round led by FYRFLY Venture Partners and Surface Ventures, with Y Combinator and 468 Capital jumping in as well. For a six-person team that started mere months ago, that’s not just validation — it’s a statement that the investment world sees something special here.
So what exactly does Poetiq do that has investors this excited? The magic lies in something called recursive self-improvement. Here’s how it works: instead of feeding millions of training examples into a model and waiting weeks for traditional reinforcement learning to converge, Poetiq’s system takes just a few hundred examples and generates a specialized AI agent tailored to your specific problem. That agent then iteratively improves itself, getting more accurate and cost-efficient with each cycle.
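Poetiq hasn’t disclosed how its loop actually works, so treat the following as a rough sketch of the generate-evaluate-revise pattern rather than their method: the “agent” is reduced to a prompt template, it is scored on a small labelled set, and a reviser model proposes a better template each round, with the best version kept. The helper names (score, self_improve, reviser) are invented for illustration.

```python
# Hypothetical sketch of the generate-evaluate-revise idea (not Poetiq's code).
# An "agent" here is just a prompt template; each round a reviser model proposes
# a new template, and we keep whichever version scores best on the examples.
from typing import Callable, List, Tuple

LLM = Callable[[str], str]   # placeholder type: prompt in, completion out
Example = Tuple[str, str]    # (input, expected output)

def score(template: str, examples: List[Example], model: LLM) -> float:
    """Fraction of examples the agent (template + model) answers exactly right."""
    hits = sum(model(template.format(x=x)).strip() == y for x, y in examples)
    return hits / len(examples)

def self_improve(seed_template: str, examples: List[Example],
                 model: LLM, reviser: LLM, rounds: int = 5) -> str:
    """Iteratively ask a reviser model for a better template, keeping the best so far.
    The seed template must contain an {x} placeholder for the task input."""
    best, best_score = seed_template, score(seed_template, examples, model)
    for _ in range(rounds):
        candidate = reviser(
            f"Current prompt template (accuracy {best_score:.0%}):\n{best}\n"
            "Rewrite it to be more accurate. Keep the {x} placeholder."
        )
        if "{x}" not in candidate:           # guard against a malformed revision
            continue
        candidate_score = score(candidate, examples, model)
        if candidate_score > best_score:     # only ever keep improvements
            best, best_score = candidate, candidate_score
    return best
```

The appeal of this pattern is that each round amounts to a handful of model calls over a few hundred examples, which is dramatically cheaper than retraining or fine-tuning the underlying model.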
The results speak for themselves. In December, Poetiq set a new state-of-the-art record on ARC-AGI-2, one of the toughest benchmarks for measuring machine reasoning and progress toward artificial general intelligence. They didn’t just beat the previous leader — they did it at roughly half the cost per task. When OpenAI released GPT-5.2 shortly after, Poetiq immediately integrated it and pushed accuracy to 75%, a full 16 percentage points above the previous best. OpenAI’s own Greg Brockman took notice, tweeting that Poetiq was “exceeding the human baseline on ARC-AGI-2.”
What’s particularly fascinating about Poetiq is their philosophy about LLMs. As co-CEO Shumeet Baluja puts it, “LLMs are impressive databases that encode a vast amount of humanity’s collective knowledge. They are simply not the best tools for deep reasoning.” Rather than competing with frontier models, Poetiq enhances them — working alongside any combination of existing models to extract better reasoning capabilities without the astronomical costs typically associated with AI training.
For enterprises, this is potentially game-changing. An MIT study from August 2025 found that despite $30-40 billion invested in generative AI, 95% of organizations were seeing zero return. Poetiq addresses exactly this problem — making it feasible to tackle complex business workflows that were previously too expensive or technically challenging for AI to handle.
The story of Poetiq is ultimately about elegant problem-solving. Six people, four thousand dollars, and a fresh perspective on how to make AI actually reason rather than just recall. In an industry obsessed with bigger models and bigger budgets, sometimes the most interesting breakthroughs come from those who choose to think differently.
If you’re curious about where AI might be heading next, Poetiq is definitely worth keeping an eye on. Visit poetiq.ai to learn more about their approach to building intelligence that improves itself.
