Top AI Product

Every day, hundreds of new AI tools launch across Product Hunt, Hacker News, and GitHub. We dig through the noise so you don't have to — surfacing only the ones worth your attention with honest, no-fluff reviews. Explore our latest picks, deep dives, and curated collections to find your next favorite AI tool.


Subquadratic SubQ claims 1,000x less compute at 12M tokens — researchers want receipts

Subquadratic, a Miami startup nobody had heard of two weeks ago, shipped what it calls the first fully sub-quadratic commercial LLM. SubQ runs a native 12-million-token context window. The numbers the company put on the page: 50x faster and 50x cheaper than frontier models at 1M tokens, and roughly 1,000x less compute at the full 12M window. Subquadratic also claims SubQ beats GPT-5.5 on long-context retrieval. VentureBeat reports independent researchers want third-party benchmarks before they buy any of it.

What’s actually new

Standard transformers scale as O(n²) with context length: double the tokens, quadruple the compute. SubQ’s architecture, Subquadratic Selective Attention (SSA), is claimed to scale linearly in compute and memory with context length. That’s the entire pitch. If it holds up under independent testing, the attention bottleneck that’s been the hard ceiling since 2017 just got rewritten.
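The scaling gap is easy to see with a toy cost model. This is a minimal sketch, not Subquadratic's method: SSA's actual mechanism is unpublished, the function names are made up for illustration, and the raw n²/n ratio here ignores the constant factors that shrink real-world gains.

```python
# Toy cost model: why quadratic vs. linear attention matters at long context.
# Purely illustrative; real systems have large constant factors.

def quadratic_attention_cost(n_tokens: int) -> int:
    # Standard attention: every token attends to every other token.
    return n_tokens * n_tokens

def linear_attention_cost(n_tokens: int) -> int:
    # A sub-quadratic scheme: cost grows proportionally with token count.
    return n_tokens

for n in (1_000_000, 12_000_000):
    ratio = quadratic_attention_cost(n) / linear_attention_cost(n)
    print(f"{n:>12,} tokens -> quadratic/linear cost ratio: {ratio:,.0f}x")
```

Doubling the context quadruples the quadratic cost but only doubles the linear one, which is exactly the "double the tokens, quadruple the compute" problem the pitch targets.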

Two products at launch

SubQ API exposes the 12M context window to developers and enterprise teams. SubQ Code is a CLI coding agent built around the obvious use case — load an entire codebase into a single context window and skip the RAG plumbing. A 50M-token version is on the Q4 roadmap.

The team: CEO Justin Dangel and CTO Alex Whedon (ex-Head of Generative AI at Meta). The company raised a $29M seed round, with Justin Mateen and Javier Villamizar on the cap table.
