Every major AI tool in 2026 seems to be racing toward the same destination: bigger, more complex, more dependencies. Claude Code, Goose, Gemini CLI — they’re powerful, but they come with significant weight. JavaScript runtimes, Python environments, hundreds of megabytes of dependencies.
Then there’s Axe, a Go-based CLI tool that takes the opposite approach. At just 12MB with only two direct dependencies, it defines AI agents as plain TOML configuration files and runs them the way Unix runs programs — one focused task per agent, composable through pipes, triggerable from cron jobs, git hooks, or the terminal. No daemon, no GUI, no framework to buy into.
The project hit Hacker News on March 12, 2026 as a Show HN post, pulling 192 points and 108 comments. The pitch — “A 12MB binary that replaces your AI framework” — clearly struck a nerve with developers who’ve been watching AI tooling bloat grow unchecked.
One Agent, One TOML File
Axe’s core design decision is treating each agent as an isolated unit of configuration. Every agent lives in a .toml file under $XDG_CONFIG_HOME/axe/agents/, and the config itself is readable at a glance:
```toml
name = "code-reviewer"
model = "anthropic/claude-sonnet-4-20250514"
description = "Reviews code diffs for issues"
skill = "code-review"
tools = ["read_file", "list_directory"]
```
That’s it. The agent has a name, a model, a description, a skill file (a Markdown document containing instructions), and a list of tools it’s allowed to use. You can also attach context files via glob patterns, set a working directory, configure persistent memory, define sub-agents, and connect external MCP servers — but none of those are required. A functional agent can be five lines of TOML.
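A fuller config might look like the sketch below. The first five keys come from the minimal example above; the remaining key names are hypothetical illustrations of the optional features (context globs, working directory, memory), not Axe's documented schema.

```toml
# Minimal required fields (from the example above)
name = "code-reviewer"
model = "anthropic/claude-sonnet-4-20250514"
description = "Reviews code diffs for issues"
skill = "code-review"
tools = ["read_file", "list_directory"]

# Hypothetical keys sketching the optional features:
context = ["docs/style-guide.md", "src/**/*.go"]  # context files attached via glob
working_dir = "~/projects/myapp"                  # sandbox root for file tools
memory_entries = 10                               # recent memory entries loaded per run
```

Even with every optional feature enabled, the whole agent stays in one readable file.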
This approach solves a real problem. In most agent frameworks, agent behavior is scattered across Python classes, decorator chains, and configuration spread across multiple files. With Axe, you open one file and see everything: what the agent does, what model it uses, what tools it has access to, and what its boundaries are. Version-controlling agent configs in git becomes trivial because they’re just text files.
Piping, Chaining, and the Unix Playbook
Where Axe really earns its Unix comparison is in how agents consume and produce data. Agents read from stdin and write to stdout, which means standard shell composition works out of the box:
```shell
git diff | axe run code-reviewer
cat error.log | axe run log-analyzer
curl -s api.example.com/data | axe run summarizer
```
The --json flag wraps output in a structured envelope for scripting. Chain multiple agents together the same way you’d chain grep, awk, and sort:
```shell
git diff | axe run reviewer | axe run commit-writer
```
The creator, jrswab, shared specific workflows in the Hacker News thread: piping YouTube transcripts through a blog post writer, then into an Instapaper publisher. Voice notes through a research gatherer, then into a blog drafter. These are multi-stage pipelines where each agent handles one transformation — exactly how Unix pipes were designed to work 50 years ago.
Under the Hood: Tools, Memory, and Sub-Agents
Despite its small footprint, Axe packs a practical feature set.
Built-in tools. When enabled, agents get access to read_file, write_file, edit_file, list_directory, and run_command. All file operations are sandboxed to the agent’s working directory — absolute paths and .. traversal are rejected. Enabling tools triggers a conversation loop of up to 50 turns, letting the agent iterate on tasks.
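The sandbox policy described above can be sketched in a few lines of shell. This is purely an illustration of the rule (reject absolute paths and `..` traversal, allow everything relative to the working directory); Axe implements it in Go, not shell.

```shell
# Illustrative sketch of the sandbox rule, not Axe's actual implementation.
# Absolute paths and anything containing ".." are rejected; relative paths
# under the agent's working directory are allowed.
sandbox_check() {
  case "$1" in
    /*)   echo "rejected (absolute): $1" ;;
    *..*) echo "rejected (traversal): $1" ;;
    *)    echo "allowed: $1" ;;
  esac
}

sandbox_check "/etc/passwd"       # rejected (absolute)
sandbox_check "../other-project"  # rejected (traversal)
sandbox_check "notes/todo.md"     # allowed
```

A real implementation would also resolve symlinks before checking, which a string match like this cannot catch.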
Persistent memory. Agents can maintain memory across runs, stored as timestamped Markdown logs. You configure how many recent entries to load into context per run, and when memory grows too large, axe gc <agent> runs an LLM-assisted garbage collection that analyzes and trims entries intelligently rather than just dropping old ones.
Sub-agent delegation. Agents can call other agents via the call_agent tool, with configurable depth limits (hard-capped at 5), parallel execution, and per-agent timeouts. This enables orchestration patterns without requiring a separate scheduler — the calling agent manages the workflow.
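A delegating agent might be configured roughly as follows. The `call_agent` tool and the hard depth cap of 5 are from the source; the key names for sub-agents, depth, and timeouts are hypothetical stand-ins for whatever Axe's actual schema uses.

```toml
# Hypothetical sketch of a delegating agent; delegation key names are
# illustrative, not Axe's documented schema.
name = "release-manager"
model = "anthropic/claude-sonnet-4-20250514"
description = "Coordinates changelog and release-note generation"
skill = "release"
tools = ["read_file", "call_agent"]

# Hypothetical delegation settings:
agents = ["changelog-writer", "release-notes-drafter"]  # callable sub-agents
max_depth = 2          # stays within the hard cap of 5
agent_timeout = "120s" # per-agent timeout
```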
MCP server integration. External tools can connect via Model Context Protocol using SSE or streamable-HTTP transport. MCP tools are auto-discovered and available alongside built-in tools, with built-ins taking precedence on name conflicts.
Multi-provider support. Axe works with Anthropic Claude, OpenAI, and Ollama for local models. All LLM calls use Go’s standard library (net/http), which is part of how the binary stays small. Provider base URLs are configurable via environment variables or config.toml.
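Pointing a provider at a different endpoint might look like the `config.toml` sketch below. The key names are assumptions for illustration; only the fact that base URLs are configurable via environment variables or `config.toml` comes from the source.

```toml
# Hypothetical config.toml sketch for redirecting providers to
# API-compatible endpoints; key names are illustrative.
[providers.ollama]
base_url = "http://localhost:11434"  # default local Ollama port

[providers.openai]
base_url = "https://my-proxy.internal/v1"  # any OpenAI-compatible service
```

Because the OpenAI wire format is widely cloned, a configurable base URL effectively extends support well beyond the three named providers.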
How Axe Compares to Other AI CLI Tools
The AI CLI tool space has exploded in 2026. Here’s where Axe fits relative to the major players:
Claude Code is Anthropic’s full-featured coding agent that understands entire codebases and executes multi-step tasks. It’s powerful but heavy — it runs on Node.js, costs $20+/month (often $150-200/month with heavy Opus usage), and is primarily designed for interactive coding sessions. Axe is not trying to replace Claude Code. It’s for automation: scheduled tasks, git hooks, CI pipelines, and batch processing where you want a focused agent that runs and exits.
Goose (by Block/Square) is a fully open-source agent with both desktop and CLI modes, native MCP integration, and the ability to run locally with Ollama. It’s closer to Axe in philosophy but still heavier, with a broader scope as a general-purpose coding assistant. Goose focuses on interactive development; Axe focuses on automation.
Fabric supports 75+ LLM providers and is free and open-source, but it’s more of a prompt management system — it organizes and runs prompts (called “patterns”) rather than defining agents with tools, memory, and delegation capabilities.
Aider is a popular pair programming CLI with 41,000+ GitHub stars, but it’s deeply specialized for code editing with tight git integration. It’s an interactive tool for writing code, not a general-purpose agent runner.
The key distinction: most AI CLI tools are designed for interactive use — a developer sitting at a terminal, having a conversation with an AI. Axe is designed for non-interactive automation. An agent runs, does its job, produces output, and exits. That makes it more comparable to a well-configured shell script than to an AI chatbot.
What the Hacker News Community Thinks
The 108-comment Hacker News thread reveals both enthusiasm and legitimate concerns.
What people liked. The Unix philosophy resonated. One commenter noted that “small tools, small contexts, and explicit data flowing between steps” mirrors Unix design in a way that feels natural. Others expressed relief at finding a tool that doesn’t require downloading “50 TB of JS/TS packages.” The single-responsibility design also got praise — as one commenter put it, “LLMs rarely mess up specific low-level instructions, compared to open-ended, long-horizon tasks.”
Security concerns. Multiple commenters flagged prompt injection risks, especially when agents have credential access. The creator acknowledged this and suggested Docker containerization as mitigation. Axe does include Docker support with security hardening — non-root user, read-only root filesystem, all capabilities dropped, no privilege escalation. But as commenters pointed out, containerization doesn’t fully solve the problem when agents need to call external APIs.
Cost control. Several developers asked about preventing expensive token runaway when agents delegate to sub-agents and fan out. The creator acknowledged this isn’t addressed yet but said adding token limits is on the roadmap. For now, the depth limit (max 5) and per-agent timeouts provide some guardrails.
Binary size debate. A Zig developer argued that 12MB is actually large for what amounts to an HTTP client with TLS and TOML parsing — claiming a similar tool could be built in under 400KB. Go developers countered that 12MB is reasonable given static linking. This debate is mostly academic, but it reflects the broader tension in the developer community about what “lightweight” really means.
Configuration location. One developer objected to agents living in ~/.config, arguing that agent definitions should live in the project repository for version control. This is a fair point — while Axe’s TOML files are easy to version-control, the default config location separates agent definitions from the codebases they operate on.
Who Should Pay Attention
Axe makes the most sense for developers who want to automate repetitive tasks with LLMs without building a full agent framework. The sweet spots are:
- CI/CD pipelines — run a code reviewer on every PR, generate commit messages, analyze build logs
- Git hooks — pre-commit reviews, automated changelog entries
- Cron jobs — scheduled log analysis, report generation, data summarization
- Batch processing — pipe large datasets through focused agents
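Two of these sweet spots can be sketched concretely. The `axe run` invocation and piping behavior are from the source; the agent names, paths, and schedule are illustrative.

```shell
# Crontab entry (illustrative): summarize yesterday's log every morning at 07:00.
0 7 * * * cat /var/log/myapp/app.log | axe run log-analyzer >> "$HOME/reports/log-summary.md"

# .git/hooks/pre-commit (illustrative): review the staged diff before each commit.
git diff --cached | axe run code-reviewer
```

Because each agent reads stdin and exits, these wire into cron and git hooks with no daemon or wrapper process.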
If you’re looking for an interactive coding assistant, Axe isn’t it. But if you want to sprinkle focused AI automation into your existing Unix-based workflows without adopting an entire framework, a 12MB binary and some TOML files is a compelling pitch.
The project has 459 GitHub stars as of March 2026 and is under active development with 85 commits. It’s Apache 2.0 licensed and requires Go 1.24+ to build from source, though pre-built binaries are also available.
Frequently Asked Questions
What is Axe and how much does it cost?
Axe is a free, open-source CLI tool written in Go for running single-purpose AI agents. The tool itself is free under the Apache 2.0 license. You’ll need API keys for your chosen LLM provider (Anthropic, OpenAI) which have their own costs, or you can use Ollama to run local models at no cost beyond hardware.
What LLM providers does Axe support?
Axe supports three providers: Anthropic (Claude models), OpenAI (GPT models), and Ollama (local open-source models). All providers are configured through environment variables or a central config.toml file. Base URLs are customizable, so it can also work with API-compatible services.
How does Axe compare to Claude Code or Goose?
Claude Code and Goose are primarily interactive coding assistants designed for developers to have conversations with. Axe is designed for non-interactive automation — agents run, produce output, and exit. It’s closer to a scriptable Unix tool than a chatbot. If you need help writing code interactively, use Claude Code or Goose. If you need to automate focused LLM tasks in pipelines, cron jobs, or git hooks, Axe is built for that.
Is Axe secure enough for production use?
Axe sandboxes all file operations to the agent’s working directory and rejects path traversal attempts. For stronger isolation, it provides Docker support with security hardening (non-root user, read-only filesystem, dropped capabilities). However, prompt injection remains a concern when agents process untrusted input — this is a known limitation shared by all agent tools, and users should evaluate risk based on their specific use cases.
Can Axe agents work together on complex tasks?
Yes. Agents can delegate to sub-agents via the call_agent tool, with configurable depth limits and parallel execution. You can also chain agents through Unix pipes, where the output of one agent becomes the input of the next. Both approaches enable multi-step workflows while keeping each individual agent focused on a single task.