Top AI Product



Block Goose scored 34K GitHub stars by giving away what others charge $200/month for

Block — the fintech company behind Square, Cash App, and Afterpay — built an AI agent for its own engineers. They called it Goose. They deployed it across all 12,000 engineers in the company. Those engineers started shipping 40% more production code. Each one reported saving 8 to 10 hours a week.

Then Block cut 4,000 jobs.

That’s the uncomfortable story behind one of 2026’s most popular open-source projects. Block Goose now sits at 34,200 GitHub stars and 3,200 forks, ranking second on trendshift.io’s April leaderboard with an engagement score of 7,714 — right behind Claude Code. It’s written in Rust, licensed under Apache 2.0, and it costs exactly zero dollars.

The AI coding agent space is getting crowded fast. Claude Code charges $200/month. Cursor’s Ultra tier matches that price. GitHub Copilot runs $19-39/month depending on the plan. Meanwhile, Goose does roughly the same things — builds projects from scratch, executes code, debugs failures, manages files, orchestrates multi-step workflows — and charges nothing. When paired with a local LLM through Ollama, you don’t even need an API subscription. Full air-gap, zero cost.

So why isn’t everyone using it? That question gets more interesting the deeper you dig.

The model-agnostic bet that none of the paid tools can match

Most AI coding tools lock you into a single model provider. Claude Code runs on Anthropic’s Claude. GitHub Copilot is tied to OpenAI. Devin and Amazon Q don’t support bring-your-own-model either.

Goose flips this entirely. You can run Claude Opus for complex architectural decisions, switch to DeepSeek R1 for cost-efficient routine work, drop to Gemini Flash for simple refactoring, and use a local Ollama model for anything touching sensitive proprietary code. All within the same tool. You can even configure different models for different task types and swap mid-project without restarting your session.
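In practice, the active provider and model live in Goose’s configuration file (typically `~/.config/goose/config.yaml`). The sketch below is illustrative only — the exact key names and supported values should be checked against what `goose configure` writes on your machine:

```yaml
# ~/.config/goose/config.yaml -- illustrative sketch, not a verbatim schema.
# Active provider and model for the next session:
GOOSE_PROVIDER: "anthropic"
GOOSE_MODEL: "claude-opus-4"

# To route work to a local model instead, point Goose at Ollama
# (no API key, nothing leaves your machine):
# GOOSE_PROVIDER: "ollama"
# GOOSE_MODEL: "deepseek-r1"
# OLLAMA_HOST: "http://localhost:11434"
```

Swapping models mid-project then amounts to changing these values (or re-running `goose configure`) — the next turn of the session runs against the new model.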

This isn’t flexibility for its own sake. It’s a strategic advantage most developers don’t appreciate until they’ve been rate-limited at 2 AM during a production incident, or until their $200/month Claude Code subscription burns through its allocation in three weeks. With Goose, you control the spend. You control the model. You control where your code goes.

The architecture making this possible is the Model Context Protocol — MCP. Block didn’t just adopt MCP. Block co-developed it with Anthropic. Goose was one of the protocol’s first major consumers, and MCP’s design was heavily shaped by what Block’s engineers needed Goose to do in production.

The MCP ecosystem has exploded since then. Over 3,000 MCP servers now exist, covering everything from developer tools to productivity suites to specialized industry services. Goose connects to any of them. Want it to manage your Jira tickets while simultaneously refactoring your codebase? Hook up the right MCP servers and it handles both. JetBrains already lets you install Goose with one click through the Agent Client Protocol registry, and the desktop app runs on Electron if you prefer a GUI over the CLI.
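MCP servers plug into Goose as “extensions” declared alongside the provider settings. The block below is a hedged sketch of that idea — the server name, launch command, and keys are examples, not a verbatim schema; each real MCP server documents its own launch command:

```yaml
# Illustrative extensions block -- keys and names are assumptions for
# illustration, not Goose's documented format.
extensions:
  developer:          # Goose's built-in coding tools
    enabled: true
    type: builtin
  jira:               # hypothetical third-party MCP server for ticket management
    enabled: true
    type: stdio
    cmd: "npx"
    args: ["-y", "some-jira-mcp-server"]
```

Once a server like this is registered, the agent can call its tools in the same session it uses to edit code — which is how the “manage Jira tickets while refactoring” workflow works.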

Inside Block: from productivity miracle to mass layoff

Here’s where the Goose story gets uncomfortable — and honest.

Block started rolling out Goose internally in mid-2024, well before open-sourcing it. By early 2025, every engineer in the company had access. The results were dramatic: 8-10 hours saved per engineer per week. Production code shipped per engineer rose over 40% from September 2025 onward. These aren’t self-reported vibes. Block’s CFO cited these numbers on an earnings call.

In late February 2026, Jack Dorsey announced Block would slash its headcount from over 10,000 to under 6,000. More than 4,000 roles eliminated — one of the largest workforce reductions in tech history explicitly attributed to AI. Block’s CFO told Fortune that AI “leaps over 18 months” directly led to the decision.

Goose wasn’t the only factor, but it was a central one. The tool proved that smaller teams, augmented by AI agents, could ship at the same pace as larger teams without the coordination overhead. Block wasn’t shy about making that connection public.

This makes Goose a unique case study. Most AI coding tools sell on productivity gains — write code faster, debug quicker, automate boilerplate. Goose has real, enterprise-scale data proving those claims. The uncomfortable flip side is what companies do with those productivity gains. Block chose fewer humans, not more ambitious projects.

For developers evaluating Goose, this is worth sitting with for a moment. The tool is genuinely excellent. It’s also the tool that a $40 billion fintech company used to justify eliminating more than a third of its workforce. Both things are true simultaneously.

How Goose stacks up against Claude Code, Cursor, and Copilot

Goose operates through a CLI and an Electron desktop app. It writes and executes code, runs shell commands, manages files, interacts with APIs, debugs failures, and orchestrates complex multi-step workflows — all autonomously. It supports subagents for parallel task execution, named sessions, skill customization, and full export of sessions as JSON with metadata including token usage, model config, and timestamps.
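The exported-session format isn’t spelled out here, so the snippet below is a sketch under an assumed schema — a top-level `metadata` object plus per-message `token_usage` counts (field names are illustrative, not Goose’s documented format) — showing the kind of cost accounting the JSON export enables:

```python
import json

# Hypothetical export schema -- field names ("metadata", "messages",
# "token_usage") are assumptions for illustration, not Goose's documented format.
exported_session = json.loads("""
{
  "metadata": {"model": "claude-opus-4", "created": "2026-04-03T10:00:00Z"},
  "messages": [
    {"role": "user", "content": "refactor the parser", "token_usage": 412},
    {"role": "assistant", "content": "Done. Three files changed.", "token_usage": 2380}
  ]
}
""")

def total_tokens(session: dict) -> int:
    """Sum per-message token counts from an exported session dict."""
    return sum(m.get("token_usage", 0) for m in session.get("messages", []))

print(total_tokens(exported_session))  # 2792
```

Because the export is plain JSON with model config and timestamps attached, per-session spend reports like this are a few lines of scripting rather than a dashboard feature you pay for.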

Against Claude Code, the trade-off is sharp: Claude Code has superior reasoning depth. Anthropic’s Opus 4.6 scores 80.9% on SWE-bench Verified — the highest of any model — with a 200K token context window that can hold entire codebases in memory. When you hit a gnarly multi-file bug that requires deep architectural understanding, Claude Code’s reasoning is still the benchmark. But you pay $200/month for it, and you’re locked into Anthropic’s models and rate limits.

Against Cursor, the comparison shifts to workflow philosophy. Cursor is IDE-first: you drive, the AI assists with completions and suggestions you approve. Goose is agent-first: you describe what you want and the agent drives. Cursor recently built its own proprietary model — Composer 2, which is impressive at $0.50 per million tokens — but it’s still bound to Cursor’s ecosystem. Goose works with everything.

Against GitHub Copilot, Goose has a clear edge in autonomy. Copilot is still primarily a code completion tool that’s expanding into agent territory. Goose was born as an agent — it doesn’t suggest code, it executes entire workflows.

One advantage Goose holds over every paid competitor: privacy. All prompts, code, and intermediate outputs stay on your machine. Block ran a red team exercise in January 2026, discovered a prompt injection vulnerability in invisible Unicode characters within shared recipes, published the results, and patched it. That level of transparency around AI agent security is rare. Anthropic hasn’t published comparable red team results for Claude Code.

The companies using Goose tell the same story. Databricks runs it. Startups run it. University labs run it. The common thread: they needed an AI agent they could customize, deploy on their own infrastructure, and not pay subscription fees for.

The AAIF donation: Goose is now bigger than Block

In late 2025, something happened that elevated Goose from “cool open-source project” to “industry infrastructure.” Block donated Goose to the Agentic AI Foundation — a new directed fund under the Linux Foundation co-founded by Block, Anthropic, and OpenAI.

The AAIF now governs three cornerstone projects: Anthropic’s MCP, OpenAI’s AGENTS.md, and Block’s Goose. The platinum member list reads like a who’s who of tech: AWS, Google, Microsoft, Bloomberg, Cloudflare. Under AAIF governance, Goose keeps its Apache 2.0 license and commercial-friendly terms while gaining neutral oversight and broader community input.

This is a significant signal. When your competitors donate their own protocols to the same foundation governing your tool, it stops being a product play and becomes an infrastructure play. Goose is no longer just Block’s AI agent. It’s a shared standard that the entire industry has a stake in maintaining.

Block also launched a Goose grant program to fund external development. The project has grown to hundreds of contributors from outside Block and dozens of companies building on top of it. The latest release — v1.29.1, dropped April 3, 2026 — added sub-recipe management and improved logging. Boring, essential improvements. The kind that signal a tool transitioning from flashy new project to reliable infrastructure.

For developers who care about keeping AI agents under control, the AAIF governance model offers something no single-company open-source project can: durability. A tool backed by a foundation with Google, Microsoft, OpenAI, and Anthropic on the board is much harder to kill, abandon, or lock down behind a paywall.

The uncomfortable irony persists. The same tool that helped Block justify cutting 4,000 jobs is now positioned as open infrastructure for the entire developer ecosystem. Whether that’s a cautionary tale or a sign of progress depends entirely on where you sit. But the code is free, the community is growing fast, and at 34,200 stars, developers are clearly voting with their keyboards.

