Top AI Product

Every day, hundreds of new AI tools launch across Product Hunt, Hacker News, and GitHub. We dig through the noise so you don't have to — surfacing only the ones worth your attention with honest, no-fluff reviews. Explore our latest picks, deep dives, and curated collections to find your next favorite AI tool.


Claw Code rewrote Claude Code in Rust before sunrise — and hit 50K GitHub stars in 2 hours

At 4 AM on March 31, 2026, a developer named Sigrid Jin woke up to his phone exploding with notifications. Anthropic had just accidentally shipped a 59.8 MB source map inside a routine npm update of Claude Code. Inside that file: 512,000 lines of unobfuscated TypeScript across roughly 1,900 files. The entire architecture of the most popular AI coding agent on the planet, exposed to anyone who ran npm install that morning.

Jin didn’t write a blog post about it. He didn’t tweet a hot take. He sat down, ported the core features to Python from scratch, and pushed a new repository called Claw Code before the sun came up. Two hours after publication, the repo hit 50,000 GitHub stars — the fastest any project has ever reached that milestone in the platform’s history. By April 1, it was sitting at 55,800 stars and 58,200 forks, with a trendshift.io engagement score of 350,303.

This isn’t your typical open-source clone story. This is what happens when a $350-billion-valuation AI company forgets to add *.map to its .npmignore file.

How a missing line in .npmignore became the biggest AI leak of 2026

The technical details are almost comically simple. Claude Code is built on Bun, which Anthropic acquired in late 2025. Bun generates source maps by default — they’re debugging artifacts that map compiled code back to the original source. Someone on Anthropic’s release team failed to exclude these files from the npm package. Security researcher Chaofan Shou spotted it first, and within hours the entire codebase was being passed around developer communities like contraband.
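The fix the release team missed is, quite literally, one line. A sketch of what the exclusion might look like (the surrounding entries are illustrative, not Anthropic's actual file):

```
# .npmignore — keep debug artifacts out of the published package
*.map
node_modules/
*.test.ts
```

Before publishing, `npm pack --dry-run` lists every file that would ship in the tarball, which is how a leak like this gets caught in review rather than in production.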

What made this leak genuinely consequential wasn’t the code itself. It was the metadata. Hacker News commenters noted that feature flag names alone were more revealing than the TypeScript — codenames like KAIROS, anti-distillation flags, unreleased features including a “persistent assistant” background mode and a session review capability. One highly upvoted comment captured the sentiment: “You can refactor code in a week. You cannot un-leak a roadmap.”

Anthropic responded predictably. “No sensitive customer data or credentials were involved,” a spokesperson said. “This was a release packaging issue caused by human error, not a security breach.” They filed DMCA takedowns against GitHub mirrors. GitHub complied, and pages hosting the original TypeScript went dark.

But the internet had already moved on.

The one-developer clean-room sprint

Sigrid Jin isn’t a random developer who got lucky with timing. The Wall Street Journal profiled him as one of the world’s most active Claude Code users — he’s processed over 25 billion tokens through the tool. When he says he understands Claude Code’s architecture, he has the receipts.

What Jin built is technically a clean-room rewrite. That distinction matters enormously. He didn’t fork the leaked TypeScript. He reimplemented the core architectural patterns — the tool system, query engine, multi-agent orchestration, and memory management — from scratch, first in Python, then progressively in Rust. The current codebase splits 72.9% Rust and 27.1% Python: Rust handles performance-critical runtime paths, Python manages the agent orchestration layer.
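To make the "tool system plus orchestration" split concrete, here is a minimal Python sketch of a tool registry of the kind an agent orchestration layer dispatches against. All names are invented for illustration; this is not Claw Code's actual API.

```python
# Hypothetical sketch of an agent tool-dispatch loop (names invented;
# not Claw Code's real interface). The orchestration layer holds a
# registry of tools and replays model-requested calls against it.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolRegistry:
    tools: dict[str, Callable[..., str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[..., str]) -> None:
        # Each tool is just a named callable returning text for the model.
        self.tools[name] = fn

    def dispatch(self, name: str, **kwargs) -> str:
        # Unknown tool names come back as an error string instead of raising,
        # so the agent loop can surface the failure to the model and continue.
        if name not in self.tools:
            return f"error: unknown tool {name!r}"
        return self.tools[name](**kwargs)

registry = ToolRegistry()
registry.register("read_file", lambda path: f"<contents of {path}>")
registry.register("shell", lambda cmd: f"$ {cmd}\n(ok)")

# A model's streamed tool calls would be dispatched one at a time:
result = registry.dispatch("read_file", path="src/main.rs")
print(result)
```

In the dual-language layout the article describes, a loop like this would live on the Python side, while the registered tools call into Rust for the performance-critical work.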

The Rust port was developed using oh-my-codex (OmX) for scaffolding, orchestration, and architecture direction, with oh-my-opencode (OmO) handling implementation acceleration and verification. Jin used $team mode for parallel code review and $ralph mode for persistent execution loops — essentially building an AI coding agent with AI coding agents. The recursive irony writes itself.

The technical architecture includes crates for API client streaming, session and tools runtime, MCP configuration, an interactive CLI binary, a plugin system, slash commands, an HTTP/SSE server built on axum, and LSP client integration. It’s not a toy reimplementation. It’s a full-featured agent harness framework that aims to be production-ready.

The legal gray area Anthropic can’t solve

This is where it gets interesting for the industry. Gergely Orosz, who runs The Pragmatic Engineer newsletter, pointed out a dilemma that Anthropic’s legal team is probably losing sleep over: a Python and Rust rewrite constitutes a new creative work, potentially outside DMCA reach. If Anthropic argues that an AI-generated transformative rewrite infringes their copyright, they’d undermine their own defense in the training-data copyright cases that the entire AI industry is fighting.

So Anthropic can take down the original TypeScript mirrors. They did, and GitHub complied. But Claw Code? It’s a clean-room reimplementation in different languages, built by a developer who demonstrably understood the architecture through legitimate use. The legal toolkit that works against copy-paste piracy doesn’t easily apply here.

Meanwhile, someone mirrored the original leaked code to Gitlawb, a decentralized git platform, with the message: “Will never be taken down.” The community wasn’t just archiving code. They were stripping telemetry, flipping hidden feature flags, and drafting their own reimplementations. Anthropic’s DMCA whack-a-mole was already a losing game before Claw Code even entered the picture.

Where Claw Code fits in the open-source AI coding landscape

The AI coding agent space has gotten crowded fast. OpenCode, built by the neovim team, has 95,000 GitHub stars and 2.5 million monthly developers. It offers multi-session agents, LSP integration, and support for 75+ LLM providers — the flexibility play. Aider, with 39,000+ stars, owns the git-first workflow niche where every AI edit becomes a reviewable commit. Cursor’s Composer 2 went in the opposite direction entirely, training a proprietary model in-house at $0.50 per million tokens.

Claw Code occupies a different position. It’s not trying to be a polished developer tool with a slick TUI. It’s a harness engineering framework — a system for experimenting with how agents are structured, how they interact with tools, and how they maintain context during execution. Think of it less as a competitor to OpenCode and more as a competitor to Claude Code itself: the underlying infrastructure that powers agent workflows.

The fact that it’s built in Rust gives it a performance angle that pure-Python alternatives can’t match. OpenFang, another Rust-based agent framework that hit GitHub trending earlier this year, proved there’s strong demand for memory-safe, high-performance agent runtimes. Claw Code pushes that trend further by combining Rust’s speed with Python’s flexibility for orchestration — the same dual-language strategy that’s worked for projects like PyTorch (C++/Python) and Polars (Rust/Python).

The wild card is provider lock-in. Claude Code only works with Anthropic’s models. OpenCode supports 75+ providers. Claw Code, being open-source harness infrastructure, can theoretically plug into any model backend. For developers who’ve watched the AI coding tool wars play out over the past year, that flexibility isn’t a nice-to-have — it’s insurance against the next provider shakeup.
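What "plug into any model backend" means in practice is an interface boundary between the harness and the provider. A hedged Python sketch (the interface is invented for illustration, not Claw Code's actual design):

```python
# Hypothetical sketch of provider-agnostic backends. Each backend
# satisfies the same Protocol, so the agent harness can swap providers
# without touching agent logic. Real backends would wrap API clients.
from typing import Protocol

class ModelBackend(Protocol):
    def complete(self, prompt: str) -> str: ...

class AnthropicBackend:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"  # a real client call would go here

class LocalBackend:
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"  # e.g. a self-hosted model

def run_agent(backend: ModelBackend, task: str) -> str:
    # The harness only ever sees the ModelBackend interface.
    return backend.complete(f"Plan and execute: {task}")

print(run_agent(AnthropicBackend(), "fix the failing test"))
print(run_agent(LocalBackend(), "fix the failing test"))
```

Swapping providers is then a one-line change at the call site — which is exactly the insurance against provider lock-in the article describes.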

What 50K stars in two hours actually means

GitHub stars are a vanity metric. Everyone knows this. But 50,000 stars in two hours isn’t about developer enthusiasm for a coding tool. It’s a signal — a measure of how much pent-up demand exists for open-source alternatives to closed AI infrastructure.

Claude Code was, until March 31, a black box. Developers used it daily, built workflows around it, integrated it into their pipelines. But they couldn’t inspect it, modify it, or understand why it made the decisions it did. The leak, and the clean-room rewrite that followed, cracked that box open. The star count isn’t saying “this project is great.” It’s saying “we’ve been waiting for this.”

The timing is brutal for Anthropic. Fortune reported that the leak “rattles $350 billion IPO ambitions.” Whether that’s hyperbole or prescient depends on what happens next. If Claw Code matures into a genuine alternative — one that developers can self-host, customize, and run against any model provider — Anthropic loses one of its strongest distribution channels for Claude. Claude Code wasn’t just a product. It was a funnel that kept developers inside Anthropic’s ecosystem, burning tokens on Anthropic’s API.

Now that funnel has a leak. Literally.


You Might Also Like


Discover more from Top AI Product

Subscribe to get the latest posts sent to your email.



Leave a comment