The Linux kernel receives thousands of patches every month. A small, overworked group of maintainers reviews each one — and inevitably, bugs slip through. Now a Google engineer has open-sourced an AI system that found more than half of those missed bugs, and the open-source community is divided on what that means.
Sashiko, built by Roman Gushchin from Google’s Linux kernel team, is an agentic AI code review system written in Rust. It monitors the Linux Kernel Mailing List (LKML), ingests every submitted patch, and runs each one through a 9-stage review protocol designed to simulate a panel of specialist reviewers. The project launched publicly on March 18, 2026, and within days had coverage from Phoronix, The Register, LWN.net, and a front-page spot on Hacker News.
The headline number: when tested against 1,000 recent upstream commits tagged with “Fixes:”, Sashiko detected 53.6% of the bugs using Gemini 3.1 Pro. Every single one of those bugs had already passed through human code review and been merged into the mainline kernel.
Why the Linux Kernel Needs an AI Reviewer
The strain on kernel maintainers has been a documented problem for years. The Linux kernel has roughly 2,000 active developers, but only about 200 are paid for their work. Subsystem maintainers are aging, recruitment is difficult, and burnout is real. A 2023 LWN.net discussion explicitly addressed “reducing kernel-maintainer burnout” as a systemic issue.
The review bottleneck is the core problem. Every patch touching memory management, locking, device drivers, or security needs expert eyes — and there simply aren’t enough of them. Bugs in categories like use-after-free, double frees, race conditions, and missing error path handling slip through not because reviewers are careless, but because the volume is overwhelming.
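The error-path category is worth a concrete illustration. The sketch below is illustrative C only (not from any real kernel patch; the struct and function names are invented). It shows the shape of bug that slips past both tired reviewers and pattern-matching tools: every statement is locally valid, but one failure branch forgets to unwind an earlier allocation.

```c
#define _POSIX_C_SOURCE 200809L
#include <stdlib.h>
#include <string.h>

struct ctx {
	char *buf;
	char *name;
};

/* Buggy version: if strdup() fails, c->buf is leaked. Each line is
 * individually fine, so a rule-based linter has little to match on;
 * spotting it means tracking what was allocated on every path. */
int ctx_init_buggy(struct ctx *c, const char *name)
{
	c->buf = malloc(4096);
	if (!c->buf)
		return -1;
	c->name = strdup(name);
	if (!c->name)
		return -1;	/* BUG: leaks c->buf on this path */
	return 0;
}

/* Fixed version: the failure branch frees the earlier allocation. */
int ctx_init(struct ctx *c, const char *name)
{
	c->buf = malloc(4096);
	if (!c->buf)
		return -1;
	c->name = strdup(name);
	if (!c->name) {
		free(c->buf);
		c->buf = NULL;
		return -1;
	}
	return 0;
}
```

Finding the buggy variant requires reasoning about the allocation state on each exit path, which is exactly the kind of semantic tracking that distinguishes an LLM-based reviewer from a pattern matcher.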
Sashiko doesn’t replace those reviewers. It acts as a first-pass filter, catching issues before a human ever looks at the patch. The distinction matters: unlike AI tools that generate or submit code (which have sparked controversy in open-source communities), Sashiko only reviews existing human-written submissions.
How the 9-Stage Review Protocol Works
What sets Sashiko apart from a generic “throw code at an LLM” approach is its structured, multi-stage protocol. Each patch goes through nine distinct analysis phases, each focused on a different category of potential issues:
- Commit goal analysis — Evaluates the big picture: architectural flaws, UAPI breakages, conceptual correctness
- Implementation verification — Checks whether the code actually matches what the commit message claims
- Execution flow verification — Traces C code paths looking for logic errors, missing return checks, off-by-one errors
- Resource management — Hunts for memory leaks, use-after-free, double frees, and object lifecycle issues across queues, timers, and workqueues
- Locking and synchronization — Investigates deadlocks, RCU rule violations, and thread-safety problems
- Security audit — Scans for buffer overflows, TOCTOU races, and other exploitable patterns
- Hardware-specific review — Checks register access patterns, DMA handling, and device interaction correctness
- Consolidation — Aggregates findings and estimates severity
- Report generation — Produces output formatted for LKML email conventions
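One way to picture the protocol is as a fixed pipeline in which every stage sees the patch plus the findings accumulated so far, so the consolidation and report stages can build on the earlier passes. The sketch below is a hypothetical C rendering of that shape (Sashiko itself is written in Rust; all names here are invented for illustration):

```c
#include <stddef.h>

/* Hypothetical stage list mirroring the nine phases described above. */
enum review_stage {
	STAGE_COMMIT_GOAL,
	STAGE_IMPLEMENTATION,
	STAGE_EXEC_FLOW,
	STAGE_RESOURCES,
	STAGE_LOCKING,
	STAGE_SECURITY,
	STAGE_HARDWARE,
	STAGE_CONSOLIDATION,
	STAGE_REPORT,
	STAGE_COUNT	/* = 9 */
};

/* A stage reads the patch and appends to a shared findings buffer;
 * returning nonzero aborts the review early. */
typedef int (*stage_fn)(const char *patch, char *findings, size_t cap);

/* Run the stages in order; returns how many stages completed. */
static int run_review(const char *patch, const stage_fn stages[STAGE_COUNT],
		      char *findings, size_t cap)
{
	int done = 0;

	for (int s = 0; s < STAGE_COUNT; s++) {
		if (stages[s] && stages[s](patch, findings, cap) != 0)
			break;
		done++;
	}
	return done;
}
```

The point of the fixed ordering is that each pass can stay narrow: the resource-management stage does not need to reason about UAPI breakage, because the commit-goal stage already did.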
The system runs as a daemon that monitors lore.kernel.org via NNTP, uses a SQLite backend for state management, and exposes a web interface for browsing results. It’s entirely self-contained — no external agentic CLI dependencies required.
The Numbers: 53% Detection, Under 20% False Positives
The 53.6% bug detection rate needs context. Sashiko was tested against an unfiltered set of the most recent 1,000 upstream commits carrying a “Fixes:” tag — meaning these were confirmed bugs that had been merged, discovered later, and then patched. Human reviewers had approved every one of them.
On the false positive side, the team reports a rate “well within 20%,” based on limited manual review. That number is crucial. As one Hacker News commenter pointed out, the precision rate determines whether maintainers will trust the system or treat it as noise. A tool that flags too many non-issues creates more work than it saves.
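To put those two rates in concrete terms, here is the arithmetic as small C helpers. The 53.6% and 20% figures come from the article; the helper names and the batch sizes in the usage note are invented for illustration.

```c
/* Expected genuine findings among `flagged` reports when a fraction
 * `fp_rate` of flags are noise (0.20 means 1 in 5 is a non-issue). */
static int expected_true_positives(int flagged, double fp_rate)
{
	return (int)(flagged * (1.0 - fp_rate) + 0.5);
}

/* Expected bugs caught out of `total_bugs` at detection rate `recall`. */
static int expected_bugs_caught(int total_bugs, double recall)
{
	return (int)(total_bugs * recall + 0.5);
}
```

At the reported figures, 1,000 flagged findings at a 20% false-positive rate would still contain roughly 800 genuine issues, and 53.6% recall over the 1,000 “Fixes:”-tagged commits corresponds to about 536 previously-missed bugs caught.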
For comparison, traditional static analysis tools like Coccinelle and Sparse have been part of the kernel development workflow for years, but they operate on pattern-matching rules and don’t understand semantic context the way an LLM-based system can. Sashiko’s approach sits somewhere between a smart linter and a junior reviewer — limited, but covering ground that existing tools don’t.
What Developers Are Actually Saying
The Hacker News discussion reveals a split that mirrors broader open-source attitudes toward AI tooling.
On the positive side, developers noted that Sashiko avoids the biggest controversy in AI-assisted open source: it doesn’t submit code. Projects like curl and SQLite have been hit with waves of low-quality AI-generated bug reports, creating friction for maintainers. Sashiko sidesteps that by keeping its findings on its own platform (sashiko.dev) rather than spamming mailing lists directly.
Critics raised several points. Some argued that many of these bugs would be caught by existing static analyzers if they were used more consistently — making Sashiko “a talking linter” rather than a breakthrough. Others questioned the privacy implications of sending kernel code to external LLM providers, even with Google footing the bill.
One practical concern: the web dashboard was described as confusing, with internal pipeline states dominating the interface while actual findings were buried. For a tool that needs maintainer trust to succeed, the UX matters.
Sashiko vs. Existing Code Review Tools
Sashiko occupies a unique niche. It’s not a general-purpose AI code review tool — it’s built specifically for the Linux kernel, with kernel-specific prompts and per-subsystem review configurations.
Compared to static analyzers (Coccinelle, Sparse, Smatch): These tools are deterministic, produce fewer false positives, and are already integrated into kernel workflows. But they can’t understand intent, evaluate whether code matches its commit message description, or reason about complex multi-step interactions the way an LLM can.
Compared to commercial AI review tools (CodeRabbit, Greptile, Graphite Agent): These target typical software development workflows — GitHub PRs, CI/CD pipelines, enterprise codebases. None of them are designed for the kernel’s mailing-list-based workflow or its unique coding conventions.
Compared to general LLM prompting: Sashiko’s 9-stage protocol and kernel-specific prompt engineering give it an advantage over simply pasting a diff into ChatGPT. The structured approach reduces hallucination and focuses the model’s attention on categories of bugs that actually matter in kernel code.
The tool currently supports Gemini (designed for Gemini 3.1 Pro) and Claude (recommending claude-sonnet-4-6 with its 1M context window). Other LLM providers can likely be integrated as well.
Who’s Behind It and Who’s Paying
Sashiko was built by Roman Gushchin and a small team at Google, with 769 commits from 6 contributors. The GitHub repository (sashiko-dev/sashiko) has accumulated 384 stars and 31 forks since launch. The codebase is 85.7% Rust and 13.7% HTML.
Google is funding the compute and LLM token costs for reviewing all LKML submissions. The project has been moved to the Linux Foundation under the Apache 2.0 license, with all contributions requiring DCO (Developer Certificate of Origin) sign-off.
This funding model is notable. By covering the operational costs and placing the project under the Linux Foundation, Google has made a bet that community trust matters more than corporate control. Whether that trust materializes depends on how the tool performs over the coming months of production use.
FAQ
Is Sashiko free to use?
Yes. Sashiko is open-source under the Apache 2.0 license. For LKML reviews, Google covers all infrastructure and LLM token costs. If you run it locally on your own kernel patches, you’ll need to provide your own API credentials for a supported LLM provider (Gemini or Claude), and you’ll be responsible for those API costs.
Can Sashiko review code outside the Linux kernel?
Not currently. The entire system (prompts, review stages, output formatting) is purpose-built for the Linux kernel: originally for C code, with recently added support for the kernel's Rust code as well. Adapting it to other codebases would require significant prompt and protocol rework.
How does Sashiko compare to CodeRabbit or SonarQube?
They target different workflows. CodeRabbit and SonarQube are designed for GitHub/GitLab PR-based development with CI/CD integration. Sashiko is built for the kernel’s mailing-list-based patch submission process. If you’re not working on the Linux kernel, those tools are more relevant to your workflow.
Does Sashiko send my code to external servers?
Yes. Sashiko sends patch data to whichever LLM provider you configure (Google’s Gemini or Anthropic’s Claude). The project documentation explicitly warns users to understand the privacy implications. For LKML patches, this is less of a concern since all submissions are already public.
What’s the false positive rate?
The team reports it’s “well within 20%,” though this is based on limited manual review. In practice, that means as many as 1 in 5 flagged issues may not be actual bugs — a rate some developers consider acceptable for a first-pass filter, while others see it as too noisy.