Top AI Product

Every day, hundreds of new AI tools launch across Product Hunt, Hacker News, and GitHub. We dig through the noise so you don't have to — surfacing only the ones worth your attention with honest, no-fluff reviews. Explore our latest picks, deep dives, and curated collections to find your next favorite AI tool.


OpenUI (by Thesys) Wants to Replace JSON as the Language Between AI and Your Interface

AI chatbots are everywhere, but most of them still talk in walls of text. You ask for a sales dashboard, you get a paragraph. You request a comparison table, you get bullet points. The models are smart enough to reason about complex data — they just have no good way to show it.

OpenUI, the open-source framework from Thesys, is betting that the fix isn’t better prompts or fancier markdown. It’s a new rendering language built specifically for how LLMs generate output: streaming, token by token, with a syntax compact enough not to blow your API budget.

The project hit Product Hunt on March 11 (352 upvotes, #4 of the day), then resurfaced on the Hacker News front page on March 21 with a technical blog post about rewriting their Rust WASM parser in TypeScript — and getting 3x faster. That dual-platform traction puts it squarely in the “developers are paying attention” category.

Why Text-Based AI Responses Are a Dead End

The problem OpenUI addresses is straightforward: most AI applications respond with plain text, even when the answer would be far more useful as a chart, a form, or an interactive table. Existing approaches to generative UI — where models output structured UI components instead of text — typically rely on JSON schemas. And JSON has three problems in this context.

First, it’s verbose. Describing a simple data table in JSON burns through tokens quickly. OpenUI’s benchmarks show their custom language uses 4,800 tokens across seven UI scenarios where equivalent JSON requires 10,180 — roughly a 53% aggregate reduction, with individual scenarios ranging from 45% to 67%. At scale, that’s a meaningful cost difference.

Second, JSON doesn’t stream well. LLMs generate output token by token, but JSON requires complete, valid structures before you can render anything. Users stare at a blank screen until the entire response is finished. OpenUI Lang is designed for progressive rendering — UI elements appear as the model streams, chunk by chunk.
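
The contrast is easy to see in a few lines. This sketch uses a hypothetical statement-per-line format, not actual OpenUI Lang syntax: a truncated JSON payload parses to nothing, while every completed line of the line-oriented format is renderable the moment it arrives.

```typescript
// A truncated JSON payload is all-or-nothing: nothing renders until it closes.
const partialJson = '{"type":"table","rows":[{"name":"Alice"},{"na';

function canParseJson(s: string): boolean {
  try { JSON.parse(s); return true; } catch { return false; }
}

// Hypothetical statement-per-line format: each complete line can render now.
const partialDsl = 'table cols(name,score)\nrow "Alice" 91\nrow "Bo';

function renderableStatements(buffer: string): string[] {
  return buffer.split("\n").slice(0, -1); // hold back the incomplete tail
}

canParseJson(partialJson);        // false: user sees a blank screen
renderableStatements(partialDsl); // two complete statements, on screen already
```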

Third, LLMs produce invalid JSON roughly 3% of the time, even with validation layers. OpenUI’s syntax, which resembles code patterns LLMs already encountered during training, drops that error rate below 0.3%.

How OpenUI Actually Works

OpenUI is not a single tool — it’s a stack with three layers.

OpenUI Lang is the core: a compact, code-like specification that replaces JSON for describing UI components. Instead of deeply nested brackets and quotes, it uses a syntax closer to what developers would write in a component framework. The key insight is that LLMs are better at generating code-like patterns than strict data formats, because that’s what they trained on.
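
The article doesn’t show OpenUI Lang’s actual grammar, so here is a purely hypothetical illustration of the shape of the savings: the same button described in JSON and in a compact, code-like DSL, with character count as a crude proxy for token count.

```typescript
// JSON spec: nested braces, quoted keys, quoted values.
const jsonSpec = JSON.stringify({
  type: "button",
  props: { label: "Submit", variant: "primary", onClick: "submit_form" },
});

// Hypothetical code-like DSL (not real OpenUI Lang syntax).
const dslSpec = 'button "Submit" variant=primary on:click=submit_form';

// Characters as a rough stand-in for tokens: the DSL is well under the
// length of the JSON equivalent.
const savings = 1 - dslSpec.length / jsonSpec.length;
```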

The React runtime takes that streamed output and renders it progressively. As each token arrives from the model, the renderer updates the UI in real time. The framework ships with built-in component libraries — charts, forms, tables, layouts — that developers can use directly or extend with their own design system.

The CLI and integration layer (@openuidev/cli) scaffolds projects and generates prompt instructions automatically from your component library. You define what components are available, and OpenUI produces the system prompt that tells the model how to use them. It plugs into Vercel AI SDK, LangChain, CrewAI, and other major LLM frameworks.
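
In spirit, that prompt-generation step looks something like the sketch below. The interface and function names here are assumptions for illustration, not the actual @openuidev/cli API.

```typescript
// Hypothetical shape of a component registry entry.
interface ComponentDef {
  name: string;
  props: string[];
  description: string;
}

// Derive a system prompt from the registry: one line per available component.
function buildSystemPrompt(components: ComponentDef[]): string {
  const lines = components.map(
    (c) => `- ${c.name}(${c.props.join(", ")}): ${c.description}`,
  );
  return [
    "Respond with UI components, not prose.",
    "Available components:",
    ...lines,
  ].join("\n");
}

const prompt = buildSystemPrompt([
  { name: "chart", props: ["type", "data"], description: "renders a chart" },
  { name: "table", props: ["cols", "rows"], description: "renders a table" },
]);
```

The point of generating the prompt rather than hand-writing it is that the model’s instructions can never drift out of sync with the components the renderer actually supports.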

The whole thing is MIT-licensed, TypeScript-first (84.9% of the codebase), and currently sitting at 2.2k GitHub stars.

The Rust-to-TypeScript Rewrite That Sparked a Hacker News Debate

The blog post that landed on Hacker News — “We rewrote our Rust WASM Parser in TypeScript and it got 3x Faster” — is worth unpacking because it reveals a non-obvious engineering lesson.

Thesys originally built OpenUI Lang’s parser in Rust, compiled to WebAssembly. The assumption was obvious: Rust is fast, WASM runs in the browser, performance solved. But the benchmarks told a different story.

The TypeScript rewrite made single parse calls 2.2-4.6x faster and cut total streaming cost by 2.6-3.3x. The contact-form benchmark went from 61.4 microseconds per parse (Rust/WASM) to 13.4 microseconds (TypeScript). The dashboard streaming test dropped from 840 microseconds to 255 microseconds.

The bottleneck was never the computation itself — it was the boundary between WASM and JavaScript. Every parse call required copying strings from the JS heap to WASM linear memory, serializing Rust structs to JSON via serde_json, transferring that JSON back to the JS heap, then deserializing with JSON.parse(). For a parser called on every streaming chunk where latency is critical, that overhead dominated.
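
A toy model of that round trip, assuming nothing about OpenUI’s internals: if the parser’s result must cross the boundary as a JSON string, every chunk pays a stringify and a parse on top of the real work, while a same-heap TypeScript parser hands the object over directly.

```typescript
type Ast = { statements: string[] };

// Stand-in for the actual parsing work, shared by both paths.
function doParse(src: string): Ast {
  return { statements: src.split(";").filter((s) => s.trim().length > 0) };
}

// WASM-style path: the result is serialized at the boundary (serde_json in
// the original), copied to the JS heap, then deserialized with JSON.parse.
function parseAcrossBoundary(src: string): Ast {
  const wire = JSON.stringify(doParse(src)); // copy out of "WASM"
  return JSON.parse(wire);                   // re-parse on the JS side
}

// Same-heap path: identical result, no serialization round trip.
function parseInTs(src: string): Ast {
  return doParse(src);
}
```

Both paths produce the same AST; the boundary path just does strictly more work per call, which is exactly the overhead that dominates when the parser runs on every streaming chunk.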

They also found an algorithmic fix hiding in plain sight: the parser had been re-parsing the entire accumulated string on every streaming chunk — an O(N^2) pattern. The rewrite introduced statement-level incremental caching, where completed statements get cached and only the trailing incomplete statement gets re-parsed. That alone dropped complexity from O(N^2) to O(N).
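
That caching scheme can be sketched in a few lines. The `;` delimiter and the shape of the parser state are assumptions for illustration, not OpenUI internals.

```typescript
interface StreamParser {
  cached: string[]; // completed statements, parsed once and never revisited
  pending: string;  // trailing, possibly incomplete statement
}

// Each chunk re-examines only the pending tail, so total work over the
// stream is O(N), versus re-parsing the whole buffer every chunk (O(N^2)).
function feed(p: StreamParser, chunk: string): void {
  p.pending += chunk;
  const parts = p.pending.split(";");
  p.pending = parts.pop() ?? ""; // keep the incomplete tail for next time
  for (const stmt of parts) {
    if (stmt.trim()) p.cached.push(stmt.trim());
  }
}

const parser: StreamParser = { cached: [], pending: "" };
feed(parser, "table cols(a,b);row 1 2;ro");
feed(parser, "w 3 4;");
// parser.cached: ["table cols(a,b)", "row 1 2", "row 3 4"]
```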

The Hacker News discussion (135 points) was predictably nuanced. The top comment (137 upvotes) argued the real win was the algorithmic improvement, not the language switch. Others pointed out that any rewrite creates opportunities to fix accumulated technical debt. One commenter shared a 1997 story where porting C++ to Python yielded 10x speedup — because the rewrite accidentally fixed a hidden cache bug.

Where OpenUI Fits in the Generative UI Landscape

The generative UI space is getting crowded, and positioning matters. Here’s how the key players compare:

Vercel AI SDK 3.0 open-sourced v0’s generative UI technology and remains the most widely adopted option with 20+ million monthly downloads. It leans heavily on React Server Components and is tightly integrated with the Next.js ecosystem. If you’re already on Vercel’s stack, it’s the path of least resistance — but it’s also more opinionated about your architecture.

Google A2UI (Agent-to-User Interface) takes a cross-platform approach, aiming for portability across frameworks and devices. It’s the strongest option if you need to render AI-generated UI beyond React.

CopilotKit focuses on embedding AI copilots into existing applications with runtime hooks, while Tambo targets the agent-builder space with a UI-first approach.

OpenUI’s differentiator is the language layer. While competitors focus on the rendering side, Thesys is arguing that the bottleneck is between the model and the renderer — how the AI describes the UI in the first place. Their claim of up to 67% token reduction is the sharpest competitive edge: if your app makes thousands of AI calls per day, cutting token usage by half or more is a direct cost saving.

The trade-off is ecosystem maturity. Vercel AI SDK has years of community adoption. OpenUI has 2.2k stars and is still early. The framework is also React-only for now, which limits its reach compared to framework-agnostic alternatives.

Thesys C1: The Commercial Layer

OpenUI is the open-source foundation; Thesys C1 is the commercial API built on top. C1 is a middleware that augments any LLM to respond with interactive UI components instead of plain text, using an OpenAI-compatible endpoint.

Pricing scales from a free tier (5,000 API calls/month with $10 in LLM credits) through Build ($49/month, 25,000 calls) and Grow ($499/month, 500,000 calls) to custom enterprise plans with self-hosting, VPC deployment, and compliance support (SOC2, HIPAA).

The founders — Rabi Shankar Guha (CEO) and Parikshit Deshmukh (President) — come from Google, Stripe, and Salesforce backgrounds. The company also ships an Agent Builder tool for creating AI agents that respond with UI instead of text, which launched on Product Hunt in February 2026.

What to Watch

OpenUI’s dual appearance on Product Hunt and Hacker News within the same month signals genuine developer interest, not just marketing buzz. The Rust-to-TypeScript story, in particular, is the kind of honest engineering post that earns credibility in technical communities.

The bigger question is whether “generative UI” as a category will consolidate around a standard or fragment into framework-specific implementations. OpenUI is explicitly positioning itself as the open standard — framework-agnostic, LLM-agnostic, MIT-licensed. If that bet pays off, being the specification layer is a strong position. If each framework builds its own generative UI approach, OpenUI risks being one more option in an already crowded field.

For developers evaluating the space right now: if token cost and streaming performance are primary concerns, OpenUI’s numbers are worth a serious look. If ecosystem maturity and community support matter more, Vercel AI SDK is still the safer bet.

FAQ

Is OpenUI free to use?
Yes. OpenUI itself is fully open source under the MIT license. You can use the language, runtime, and component libraries at no cost. Thesys C1, the commercial API layer, offers a free tier with 5,000 API calls/month. Paid plans start at $49/month.

How does OpenUI compare to Vercel AI SDK for generative UI?
Vercel AI SDK 3.0 uses React Server Components and is tightly coupled with the Next.js ecosystem. OpenUI introduces its own compact language (OpenUI Lang) that claims up to 67% fewer tokens than JSON-based approaches and supports progressive rendering during streaming. Vercel has a much larger ecosystem; OpenUI has a more focused value proposition around token efficiency and streaming performance.

What LLMs and frameworks does OpenUI support?
OpenUI is LLM-agnostic — it works with OpenAI, Anthropic, Google Gemini, Mistral, and others. It integrates with Vercel AI SDK, LangChain, CrewAI, and various agent frameworks. The rendering layer currently requires React.

Who is behind OpenUI?
OpenUI is built by Thesys, a San Francisco-based company founded in 2024 by Rabi Shankar Guha and Parikshit Deshmukh. The team includes former engineers from Google, Stripe, and Salesforce. Their commercial product, C1, is an API for generative UI.

Can I use my own design system with OpenUI?
Yes. OpenUI is intentionally design-agnostic. You define your own component library, and the framework generates prompt instructions from it. The built-in components (charts, forms, tables) are ready to use but fully replaceable with custom implementations.

