Top AI Product

Every day, hundreds of new AI tools launch across Product Hunt, Hacker News, and GitHub. We dig through the noise so you don't have to — surfacing only the ones worth your attention with honest, no-fluff reviews. Explore our latest picks, deep dives, and curated collections to find your next favorite AI tool.


From 739K to 15K Tokens: How code-review-graph Slashes Claude Code Costs with a Local Knowledge Graph

If you’re running Claude Code on anything bigger than a hobby project, you already know the pain: token bills that climb fast because the AI insists on reading files it doesn’t need. A new open-source tool called code-review-graph attacks this problem at the structural level — and the benchmarks are hard to ignore.

Built by developer Tirth Kanani, code-review-graph uses Tree-sitter to parse your entire codebase into an AST, then stores every function, class, and import as a node, and every call relationship, inheritance chain, and test mapping as an edge, in a local SQLite database. When Claude Code needs context — for a review, a refactor, or a new feature — it queries the graph instead of scanning raw files. The result: an average 6.8x token reduction on code reviews, and up to 49x on daily coding tasks.

The project hit GitHub Trending in mid-March 2026 with over 700 stars in its first wave (now approaching 1,000), and Kanani’s Medium article documenting the “49x fewer tokens” claim sparked active discussion on Hacker News. For a tool written in about 3,700 lines of typed Python, that’s a lot of attention — and it points to just how much pent-up demand exists for solving the token cost problem.

The Token Problem Nobody Wants to Talk About

Claude Code’s average cost sits around $6 per developer per day, with power users easily hitting $100–200 per month. A big chunk of that spend comes from context loading — Claude reading thousands of files to understand your project before it can answer a single question or review a single PR.

On a Next.js monorepo with 27,732 files, that means feeding Claude roughly 739,000 tokens just to establish context. Most of those tokens come from files that have zero relevance to the change being reviewed. You’re paying for Claude to read your entire vendor directory, your generated types, and that migration file from 2023 that nobody has touched since.
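To make the economics concrete, here is a back-of-envelope comparison using the token counts reported in the article and an assumed, purely illustrative input price of $3 per million tokens (not Claude's actual rate card):

```python
# Back-of-envelope cost comparison for the Next.js example above.
# PRICE_PER_MILLION is an illustrative assumption, not official pricing.
PRICE_PER_MILLION = 3.00

full_context_tokens = 739_000   # full-scan figure reported in the article
graph_context_tokens = 15_049   # graph-narrowed figure for the same repo

def cost(tokens: int) -> float:
    """Dollar cost of feeding this many input tokens at the assumed rate."""
    return tokens / 1_000_000 * PRICE_PER_MILLION

print(f"full scan:  ${cost(full_context_tokens):.2f} per task")
print(f"with graph: ${cost(graph_context_tokens):.3f} per task")
print(f"reduction:  {full_context_tokens / graph_context_tokens:.1f}x")
```

Whatever the real per-token price, the ratio is what matters: the same task costs roughly 49 times less in context tokens when the graph picks the files.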

This isn’t a Claude-specific problem. Every AI coding tool that operates on large codebases faces the same context window economics. But Claude Code users feel it more acutely because they’re billed directly on token consumption, so every wasted token shows up on the invoice.

How code-review-graph Actually Works

The core idea is “blast radius analysis.” When a file changes, the graph traces every caller, dependent, and test that could be affected — and Claude reads only those files. Everything else gets excluded.
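The traversal behind a blast-radius query is a plain reverse-dependency walk. Here is a minimal sketch, not the project's actual code, using an in-memory edge list; code-review-graph runs the equivalent lookup against its SQLite graph:

```python
from collections import deque

# Hypothetical (caller, callee) edges, invented for illustration.
calls = [
    ("api.handler", "auth.check_token"),
    ("auth.check_token", "auth.decode_jwt"),
    ("tests.test_auth", "auth.check_token"),
    ("billing.invoice", "db.save"),          # unrelated to the change
]

def blast_radius(changed: str, edges: list[tuple[str, str]]) -> set[str]:
    """Return the changed symbol plus everything that transitively calls it."""
    # Build a reverse index: callee -> set of callers.
    callers: dict[str, set[str]] = {}
    for src, dst in edges:
        callers.setdefault(dst, set()).add(src)
    seen, queue = {changed}, deque([changed])
    while queue:
        node = queue.popleft()
        for caller in callers.get(node, ()):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

# Changing auth.decode_jwt pulls in its callers and their tests;
# billing.invoice is correctly excluded from the context.
print(blast_radius("auth.decode_jwt", calls))
```

The payoff is that Claude's context is the set returned here, not the whole repository.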

Here’s the pipeline:

1. Initial Parse: Tree-sitter converts your repository into abstract syntax trees across 12 supported languages — Python, TypeScript, JavaScript, Go, Rust, Java, C#, Ruby, Kotlin, Swift, PHP, and C/C++. Each function, class, and import becomes a node. Each call site, inheritance relationship, and test mapping becomes an edge.

2. Graph Storage: Everything lands in a SQLite file inside a .code-review-graph/ directory. No Neo4j, no Memgraph, no cloud database. The directory travels with the repo, so teammates can clone and go.

3. Incremental Updates: After the initial build (~10 seconds for a 500-file project), the graph re-indexes only changed files. A 2,900-file project rebuilds in under 2 seconds. A watch mode keeps the graph updated continuously in the background.

4. Context Delivery: When Claude needs to review a change or work on a task, it gets a compact structural summary (156–207 tokens) covering the blast radius, test coverage gaps, and dependency chains — instead of reading the raw source files themselves.
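As a rough illustration of steps 1 and 2, here is what extracting nodes and edges and landing them in SQLite can look like. This sketch uses Python's stdlib ast module for Python source only, where the real tool uses Tree-sitter across 12 languages, and the two-table schema is invented for illustration:

```python
import ast
import sqlite3

source = """
def helper():
    return 42

def handler():
    return helper()
"""

# Invented schema for illustration; the project's actual schema may differ.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nodes (name TEXT PRIMARY KEY, kind TEXT)")
db.execute("CREATE TABLE edges (src TEXT, dst TEXT, kind TEXT)")

tree = ast.parse(source)
for fn in [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]:
    db.execute("INSERT INTO nodes VALUES (?, 'function')", (fn.name,))
    # Record every plain-name call made inside this function as an edge.
    for call in [n for n in ast.walk(fn) if isinstance(n, ast.Call)]:
        if isinstance(call.func, ast.Name):
            db.execute("INSERT INTO edges VALUES (?, ?, 'calls')",
                       (fn.name, call.func.id))

# "Who calls helper()?" becomes one indexed query instead of a file scan.
rows = db.execute("SELECT src FROM edges WHERE dst = 'helper'").fetchall()
print(rows)  # [('handler',)]
```

Once the relationships live in tables, answering structural questions costs a few rows of output rather than rereading source files.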

The tool integrates as an MCP server with Claude Code, exposing tools like get_impact_radius_tool for blast radius computation and get_review_context_tool for token-optimized review context. Optional embeddings support adds semantic search capabilities.
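Step 3's incremental rebuild depends on knowing which files changed since the last index. One common way to detect that, sketched here with stdlib hashing (the project may track changes differently), is to store a content hash per file and re-parse only files whose hash moved:

```python
import hashlib

def digest(text: str) -> str:
    """Content fingerprint for change detection."""
    return hashlib.sha256(text.encode()).hexdigest()

# Hashes recorded at the last index run (hypothetical file contents).
indexed = {
    "auth.py": digest("def check(): ..."),
    "api.py": digest("def handler(): ..."),
}

# Current working tree: api.py was edited, auth.py is untouched.
current = {
    "auth.py": "def check(): ...",
    "api.py": "def handler(): ...  # rate limit added",
}

stale = [path for path, text in current.items()
         if digest(text) != indexed.get(path)]
print(stale)  # only api.py needs re-parsing
```

This is why a 2,900-file project can rebuild in seconds: only the handful of stale files go back through the parser.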

The Benchmarks: Three Codebases, Three Stories

Kanani tested code-review-graph against three open-source projects of different sizes. The results tell a consistent story, but with interesting variation:

httpx (125 files): 26.2x token reduction on reviews. Small codebases benefit the most in percentage terms because even a modest graph can eliminate nearly all irrelevant context.

FastAPI (2,915 files): 8.1x reduction. The mid-range sweet spot — large enough that the graph pays for itself many times over, small enough that the structural map captures relationships cleanly.

Next.js (27,732 files): 6.0x reduction on reviews, but the live coding benchmark is where things get dramatic. For a task like adding a rate limiter, the graph narrowed context from 739,352 tokens to 15,049 — a 49x reduction — by pointing Claude to the right 15 files out of 27,000+.

Beyond token savings, review quality actually improved. Kanani reports scores of 8.8 versus 7.2 on a 10-point evaluation scale. The theory: when Claude reads fewer but more relevant files, it produces more focused, accurate reviews instead of getting lost in noise.

How It Stacks Up Against Alternatives

code-review-graph isn’t the only tool trying to solve the context efficiency problem. Here’s how the landscape looks:

Claudette is a Go rewrite directly inspired by code-review-graph, built by Nicolas Martignole. It targets medium-sized Go/TypeScript/Python/JS projects and trades some flexibility for a single-binary deployment. If you want the same core idea but prefer Go’s deployment model, Claudette is worth a look.

code-graph-rag takes the RAG (Retrieval-Augmented Generation) approach — building knowledge graphs with Tree-sitter but layering vector search on top for natural language querying. It’s more ambitious in scope, supporting codebase editing through natural language, but that ambition comes with more complexity.

Serena uses the Language Server Protocol (LSP) instead of Tree-sitter for code analysis. This gives it type-aware semantic understanding across 30+ languages and symbol-level editing capabilities. The trade-off: LSP servers are heavier to run and configure than Tree-sitter grammars.

Claude Code’s built-in optimizations — prompt caching and auto-compaction — help reduce costs by 40–80% depending on usage patterns, but they don’t solve the fundamental problem of reading irrelevant files. code-review-graph works on a different layer, complementing rather than replacing these features.

Manual approaches like custom preprocessing hooks can grep logs or filter context before Claude sees it, but they require per-project configuration and don’t understand code structure.

The key differentiator for code-review-graph is simplicity. SQLite storage, no external dependencies, MIT license, ~3,700 lines of code. It does one thing — structural code mapping for context reduction — and the benchmarks suggest it does it well.

Community Reception and Growing Pains

The Hacker News thread revealed both enthusiasm and friction. The token savings resonated immediately with developers who described paying for tokens that “add zero value” when Claude reads unnecessary files. The 49x headline number, while representing a peak rather than an average, captured attention because it aligned with a real frustration.

On the criticism side, user chreniuc reported installation issues using the recommended Claude Plugin Marketplace method, opening a GitHub issue documenting setup errors. For a tool that’s only at v1.8.2 and has been public for about a week, rough installation edges are expected — but they matter for first impressions.

The project’s contributor list is small: primarily Kanani, with contributions from a handful of others. At 984 stars and 74 forks, it’s in that early-traction phase where community adoption will determine whether it matures into a reliable tool or stays a promising experiment.

One technical concern worth flagging: the tool currently stores its graph in a directory that travels with the repo. For teams with strict .gitignore policies or monorepo setups, this design choice could create friction. The .code-review-graphignore file helps by supporting glob patterns for excluding paths like generated/** or vendor/**, but it’s another config file to manage.
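The exclusion behavior described above can be approximated with stdlib fnmatch, bearing in mind that fnmatch's `*` matches across path separators, so `vendor/**` behaves like a prefix match (the tool's own glob semantics may be stricter):

```python
from fnmatch import fnmatch

# Hypothetical patterns, as a .code-review-graphignore might list them.
ignore = ["generated/**", "vendor/**", "*.min.js"]

def is_ignored(path: str) -> bool:
    """True if the path matches any ignore pattern."""
    return any(fnmatch(path, pattern) for pattern in ignore)

print(is_ignored("vendor/lodash/index.js"))  # True
print(is_ignored("src/app.py"))              # False
```

Excluding vendored and generated trees up front keeps the graph small and the blast-radius queries fast.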

Who Should Actually Use This

The value proposition scales with codebase size. On a 50-file project, the token savings are real but probably not worth the setup. On a 2,000+ file codebase where Claude Code bills are a line item in your team’s budget, the math gets compelling fast.

The 12-language support covers most production stacks, and the SQLite approach means zero infrastructure overhead. Install it, run the initial build, and the graph is ready. The watch mode handles ongoing updates without manual intervention.

If you’re a solo developer on Claude Code’s Max plan, the cost savings translate directly to staying under rate limits longer. If you’re on API pricing running Claude Code across a team, the 6.8x average reduction on reviews could meaningfully impact your monthly bill.

Frequently Asked Questions

Is code-review-graph free?
Yes. It’s open-source under the MIT license with no telemetry, no cloud dependency, and no sign-up required. Everything runs locally and stores data in a SQLite file within your repository.

Does code-review-graph work with languages other than Python?
It supports 12 languages: Python, TypeScript, JavaScript, Go, Rust, Java, C#, Ruby, Kotlin, Swift, PHP, and C/C++. Each language has full Tree-sitter grammar support for functions, classes, imports, call sites, inheritance, and test detection.

How does code-review-graph compare to using CLAUDE.md files for context?
CLAUDE.md files give Claude static instructions about your project. code-review-graph dynamically computes which files are relevant to a specific change. They serve different purposes and work well together — CLAUDE.md for project-level guidance, code-review-graph for change-level context reduction.

Does it slow down Claude Code?
The initial graph build takes about 10 seconds for a 500-file project. After that, incremental updates complete in under 2 seconds, even for projects with 2,900+ files. The time spent building the graph is recovered many times over through reduced token processing.

Can I use code-review-graph with other AI coding tools besides Claude Code?
Currently, it’s designed specifically for Claude Code and integrates as an MCP server. However, since it exposes standard MCP tools, any AI coding tool that supports MCP could theoretically connect to it. The project’s MIT license also means forks for other platforms are possible.

