Top AI Product

We track trending AI tools across Product Hunt, Hacker News, GitHub, and more — then write honest, opinionated takes on the ones that actually matter. No press releases, no sponsored content. Just real picks, published daily. Subscribe to stay ahead without drowning in hype.


Rudel Analyzed 1,573 Claude Code Sessions — 26% Were Abandoned in Under a Minute

Everyone talks about how AI coding tools are boosting developer productivity. But almost nobody is measuring what actually happens inside those sessions. How many tokens get burned per task? How often do developers bail out before getting any real value? Which features go completely unused?

Rudel, an open-source analytics platform built specifically for Claude Code, set out to answer those questions. The team collected 1,573 real Claude Code sessions containing over 15 million tokens and 270,000+ interactions, then published findings that are making developers rethink how they use AI coding assistants. The project hit Hacker News on March 12, 2026, pulling in 143 points and 85 comments — a sign that the “how do we measure AI coding ROI” question is hitting a nerve across the industry.

The Dataset That Started a Conversation

The core of Rudel’s pitch isn’t the dashboard itself — it’s the data. After building their analytics layer and onboarding early users, the team aggregated anonymized session data into what might be the first large-scale public analysis of Claude Code usage patterns. The numbers tell a story that’s both surprising and uncomfortable.

26% of all sessions are abandoned. Most of those bail-outs happen within the first 60 seconds. That’s more than one in four sessions where a developer fires up Claude Code, types something, and walks away before the tool can deliver anything useful. Whether that’s a prompt quality problem, a context-loading issue, or simply developers testing the waters isn’t entirely clear — but the number is high enough to warrant attention.
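To make the metric concrete, here is a minimal sketch of how an abandonment rate like this could be computed from session logs. The record schema and the definition of "abandoned" (under 60 seconds, no output) are illustrative assumptions, not Rudel's actual data model:

```python
# Hypothetical session records -- this schema is for illustration only.
sessions = [
    {"duration_s": 45, "produced_output": False},    # abandoned
    {"duration_s": 30, "produced_output": False},    # abandoned
    {"duration_s": 600, "produced_output": True},
    {"duration_s": 1200, "produced_output": True},
]

def abandonment_rate(sessions, threshold_s=60):
    """Fraction of sessions that ended within threshold_s seconds
    without producing any output."""
    abandoned = [
        s for s in sessions
        if s["duration_s"] < threshold_s and not s["produced_output"]
    ]
    return len(abandoned) / len(sessions)

print(f"{abandonment_rate(sessions):.0%}")  # 50% on this toy sample
```

The interesting design question is the threshold: a 60-second cutoff will sweep up quick lookups alongside genuine failures, which is exactly the ambiguity the Rudel data leaves open.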

Skills are used in only 4% of sessions. Claude Code ships with a growing library of skills — specialized capabilities that can handle specific workflows. But the data shows almost nobody is using them. This finding sparked debate on Hacker News, with some developers pointing out that newer Claude models (particularly 4.6) have improved skill invocation, while others argued that most users simply don’t know skills exist or how to trigger them.

Session success rates vary dramatically by task type. Documentation tasks scored highest for completion, while refactoring scored lowest. This aligns with what many developers report anecdotally: AI tools handle well-scoped, greenfield tasks better than complex restructuring of existing code.

Error cascades in the first 2 minutes predict session abandonment. When a session starts with errors, there’s a measurable pattern that leads to the developer giving up. The Rudel team suggests this could become a predictive metric — if a tool can detect early failure patterns, it could intervene with better prompts or alternative approaches before the user bails.

How Rudel Works Under the Hood

The architecture is straightforward. Rudel installs as a global npm package and registers a Claude Code hook that triggers when a session ends. That hook uploads the session transcript to Rudel’s servers (or your own, if you self-host), where it gets stored in a ClickHouse database and processed into dashboard metrics.

The setup takes about three commands:

  1. Install the CLI globally via npm
  2. Authenticate through the browser
  3. Enable auto-upload with rudel enable

From there, every Claude Code session automatically feeds into your dashboard. Teams can invite members through the settings panel, and there’s a batch upload option for importing historical sessions.

The dashboard surfaces metrics across several dimensions: token consumption per session, session duration distributions, activity patterns over time, model usage breakdowns (Opus vs. Sonnet vs. Haiku), and team-level aggregations. For engineering managers trying to understand whether their $200/month-per-seat Claude Code investment is paying off, this is the kind of visibility that’s been missing.

The project is built with TypeScript, runs on the Bun runtime, and uses a Turbo monorepo structure. It’s MIT-licensed with 185 GitHub stars and 7 contributors as of mid-March 2026.

The Privacy Elephant in the Room

The Hacker News discussion surfaced one dominant concern: data privacy. Rudel ingests full session transcripts, which means uploaded data can include source code, file contents, command outputs, URLs, and potentially secrets that appeared during a coding session.

The team acknowledged this directly. Their hosted service at app.rudel.ai states that they don’t access personal data contained in transcripts, but the nature of the data being transmitted is inherently sensitive. Multiple commenters pushed for better documentation around data handling, and the Rudel team agreed they needed to be more transparent on their website.

For organizations with strict data policies, Rudel does offer a self-hosting option with documentation in their repository. This is likely the path that most enterprise teams would take — the analytics value is compelling, but sending full coding session transcripts to a third-party service is a hard sell for any security-conscious engineering org.

Some commenters also questioned whether the infrastructure was over-engineered for the dataset size, suggesting that local processing with simpler tools would suffice for individual developers. That criticism has some merit for solo use, but starts to break down when you need team-level aggregation and historical trend analysis.

Rudel vs. the Alternatives: A Growing Field

Rudel isn’t operating in a vacuum. The Claude Code analytics space has gotten crowded fast, with options ranging from official Anthropic tools to open-source CLI utilities.

Anthropic’s built-in analytics dashboard is the most direct comparison. Available on Team and Enterprise plans, it tracks lines of code accepted, suggestion accept rates, daily active users, and PR-level contribution metrics with GitHub integration. It’s free for paying customers and doesn’t require sending data to a third party. But it’s focused on high-level usage metrics — it won’t tell you about session-level patterns, abandonment rates, or skill utilization. Rudel goes deeper into the session itself.

ccusage is an open-source CLI tool that analyzes Claude Code usage from local JSONL files. It provides daily, monthly, and session-based reports with cost breakdowns by model type. The key advantage: everything stays local. No data leaves your machine. The trade-off is that it’s a personal tool — there’s no team dashboard, no trend visualization, and no aggregated insights across an organization.
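In that local-only spirit, a toy version of a token tally over JSONL transcript lines is only a few lines of code. The `usage` field names here are assumptions for illustration, not ccusage's or Claude Code's exact schema:

```python
import json

def total_tokens(jsonl_lines):
    """Sum input/output token counts across transcript entries.

    Assumes each line is a JSON object that may carry a `usage`
    object with `input_tokens` / `output_tokens` -- an illustrative
    schema, not the exact transcript format.
    """
    totals = {"input_tokens": 0, "output_tokens": 0}
    for line in jsonl_lines:
        entry = json.loads(line)
        usage = entry.get("usage", {})
        for key in totals:
            totals[key] += usage.get(key, 0)
    return totals

lines = [
    '{"usage": {"input_tokens": 1200, "output_tokens": 350}}',
    '{"usage": {"input_tokens": 800, "output_tokens": 90}}',
    '{"type": "system"}',
]
print(total_tokens(lines))  # {'input_tokens': 2000, 'output_tokens': 440}
```

Nothing leaves the machine, which is the whole appeal — and also the ceiling, since there is no shared dashboard to aggregate into.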

claude-code-otel takes the enterprise observability route, using OpenTelemetry to pipe Claude Code metrics into existing monitoring stacks like Grafana or Datadog. It’s powerful but requires significant infrastructure setup and is aimed at organizations that already have observability tooling in place.

Datadog and Faros AI offer Claude Code monitoring as part of broader developer productivity platforms. These are enterprise-grade solutions with corresponding enterprise pricing and setup complexity.

The positioning gap Rudel fills is specific: session-level analytics with team collaboration, packaged as a standalone product that’s easier to set up than enterprise observability stacks but more comprehensive than local CLI tools. Whether that middle ground is big enough to sustain a product remains to be seen.

What the Community Response Reveals

Beyond the specific data findings, the Hacker News discussion around Rudel reflects a broader shift in how engineering teams think about AI coding tools. The initial wave of adoption — just get everyone on Claude Code or Copilot — is giving way to a more measured phase where teams want to understand usage patterns, optimize workflows, and justify costs.

Several commenters expressed interest in correlating CLAUDE.md file quality with session outcomes — a feature the Rudel team added to their roadmap. Others debated whether the 26% abandonment rate is actually a problem, arguing that some “abandoned” sessions might represent quick lookups or context switches rather than failures.

The discussion also touched on a meta-question: as AI coding assistants become a standard part of the development workflow, who should own the analytics? Should it be the AI provider (Anthropic), the IDE (Cursor, VS Code), or independent tools like Rudel? The answer probably depends on what you’re measuring and who needs to see it.

For individual developers, local tools like ccusage might be enough. For engineering managers trying to optimize team-wide AI adoption, something like Rudel or Anthropic’s built-in dashboard makes more sense. And for enterprises with existing observability infrastructure, piping Claude Code data into Datadog or Grafana is the natural path.

If you’re interested in other tools shaping the Claude Code ecosystem, check out our coverage of Claude Code’s security model and of Anthropic’s Code Review system, which is already changing how teams handle AI-generated PRs.

FAQ

Is Rudel free to use?
Rudel offers a free hosted version at its web app, and the source code is MIT-licensed on GitHub. Self-hosting is available for teams that need to keep data on their own infrastructure. No paid tiers have been announced publicly as of March 2026.

Does Rudel work with coding tools other than Claude Code?
Currently, Rudel is built exclusively for Claude Code sessions. It hooks into Claude Code’s session lifecycle and parses its specific transcript format. There’s no support for GitHub Copilot, Cursor, or other AI coding assistants at this time.

Is it safe to upload Claude Code sessions to Rudel?
Session transcripts can contain sensitive data including source code, secrets, and command outputs. The hosted service states it doesn’t access personal data in uploaded transcripts, but security-conscious teams should consider the self-hosting option. Always review the privacy policy before enabling auto-upload.

How does Rudel compare to Anthropic’s official analytics dashboard?
Anthropic’s built-in dashboard (available on Team and Enterprise plans) tracks high-level metrics like lines of code accepted and daily active users. Rudel goes deeper into individual session analysis — token consumption patterns, abandonment rates, skill utilization, and error cascades. They serve different levels of the analytics stack and can be complementary.

What are the system requirements for Rudel?
Rudel requires the Bun runtime for installation. The CLI installs globally via npm, and setup takes about two minutes. For self-hosting, you’ll need a ClickHouse database instance. The hosted version requires no infrastructure on your end.

