AI agents are calling more APIs than ever — and most of them are doing it with raw, unscoped API keys sitting in environment variables. If a prompt injection attack tricks your agent into exfiltrating data, those keys go with it. OneCLI, an open-source credential vault written in Rust, offers a different architecture: agents never touch real secrets at all.
The project hit Hacker News on March 12 with 160 points and 50 comments, landing on bestofshowhn.com’s March rankings at #18. With 601 GitHub stars and growing, it’s tapping into a problem that gets worse every time someone spins up another autonomous agent.
The Problem: Agents With Master Keys
Here’s the standard setup for most AI agent deployments in 2026: you stuff API keys into environment variables, and the agent reads them and makes HTTP calls directly. Simple. Also dangerous.
Gravitee’s State of AI Agent Security report found that 45.6% of teams use static API keys for agent-to-agent authentication, and another 44.4% rely on generic tokens. More secure standards like mTLS sit at just 17.8%. The industry knows this is a problem — Auth0, Aembit, and HashiCorp have all published guides on why static credentials in agent environments are a liability — but the default developer workflow hasn’t caught up.
The specific threat model is straightforward. An AI agent that holds real API keys can be manipulated through prompt injection to leak those credentials, make unauthorized requests, or forward secrets to attacker-controlled endpoints. The agent doesn’t even need to be “hacked” in the traditional sense — a cleverly crafted input is enough.
How OneCLI Works: Placeholder Keys and a Rust Proxy
OneCLI’s architecture has three components:
1. Encrypted Vault — You store real API credentials once, encrypted with AES-256-GCM. Secrets are decrypted only at the moment a request passes through the gateway, never before.
2. Rust Gateway (Port 10255) — A high-performance HTTP proxy that intercepts outbound agent requests. When an agent sends a request with a placeholder key (e.g., FAKE_KEY_STRIPE), the gateway matches the destination host and path pattern, swaps in the real credential, and forwards the request. The agent authenticates to the gateway via Proxy-Authorization headers with scoped access tokens.
3. Next.js Dashboard (Port 10254) — A web UI for managing agents, secrets, and permissions. It supports single-user mode for local development and Google OAuth for team environments.
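The core trick is the credential swap in step 2. A minimal sketch of that substitution logic, with hypothetical route and key names (not OneCLI's actual schema or API):

```typescript
// Sketch of gateway-side credential substitution, as described above.
// Route rules, placeholder names, and the key value are illustrative.
interface Route {
  hostPattern: RegExp; // which destination hosts this rule covers
  placeholder: string; // the fake key the agent holds
  realKey: string;     // the real secret, decrypted only at this point
}

const routes: Route[] = [
  {
    hostPattern: /(^|\.)api\.stripe\.com$/,
    placeholder: "FAKE_KEY_STRIPE",
    realKey: "sk_live_real_secret", // hypothetical value
  },
];

// Swap the placeholder for the real credential only when the
// destination host matches a configured rule.
function substitute(host: string, authHeader: string): string {
  for (const r of routes) {
    if (r.hostPattern.test(host) && authHeader.includes(r.placeholder)) {
      return authHeader.replace(r.placeholder, r.realKey);
    }
  }
  return authHeader; // no match: forward unchanged (or reject outright)
}
```

Because the match is keyed on the destination host, a prompt-injected agent that redirects the same placeholder to an attacker-controlled domain never receives the real key.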
The key insight: agents make normal HTTP calls through the proxy. They don’t need any SDK, library, or code modification. You configure HTTP_PROXY and the gateway handles the rest. Each agent gets its own access token with scoped permissions, so Agent A can access Stripe but not your email API, while Agent B gets the reverse.
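The per-agent scoping can be pictured as a lookup the gateway performs before forwarding each request. A minimal sketch, with hypothetical agent tokens and a simplified permission table:

```typescript
// Illustrative scoping table: which upstream hosts each agent token
// may reach. Names are hypothetical, not OneCLI's actual data model.
const scopes: Record<string, string[]> = {
  "agent-a-token": ["api.stripe.com"],
  "agent-b-token": ["api.sendgrid.com"],
};

// The gateway would run a check like this on the Proxy-Authorization
// token before substituting credentials and forwarding.
function isAllowed(agentToken: string, host: string): boolean {
  return (scopes[agentToken] ?? []).includes(host);
}
```

Under this model, Agent A's token reaches Stripe but not the email API, and Agent B's token gets the reverse, matching the scoping behavior described above.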
Deployment is a single Docker command:
docker run --pull always -p 10254:10254 -p 10255:10255 -v onecli-data:/app/data ghcr.io/onecli/onecli
The tech stack breaks down as TypeScript (65.8%) for the dashboard and API layer and Rust (28.6%) for the gateway, with Prisma ORM and either embedded PGlite or external PostgreSQL for storage.
What the Hacker News Community Actually Said
The HN thread was split between praise and skepticism — usually a sign that a project is solving a real problem, just not for everyone.
The supporters liked the architecture of keeping secrets completely out of agent memory. One commenter noted that it “removes friction from a specific use case” even if auth-proxying isn’t a new concept. The combination of secret management with per-agent scoping was seen as genuinely agent-specific, not just another vault wrapper.
The critics raised valid points. Multiple commenters pointed out that credential proxying has existed for years — HashiCorp Vault, BuzzFeed SSO, and various MITM proxy setups all do some version of this. More fundamentally, several argued that OneCLI prevents secret exfiltration but doesn’t stop a prompt-injected agent from abusing the services it has access to. An agent that can’t see your Stripe key can still make charges through the gateway if it has access to the Stripe endpoint.
Technical concerns surfaced around TLS interception (OneCLI requires installing a local CA certificate in agent containers for HTTPS traffic) and Node.js compatibility (the HTTP_PROXY environment variable was historically unreliable in Node, though versions 22.21+ and 24+ support it via NODE_USE_ENV_PROXY).
The creators acknowledged these limitations and discussed roadmap items including integration with existing secret stores like 1Password and Vault, dynamic access policies, and human approval workflows for sensitive operations.
OneCLI vs. The Alternatives
The credential management space for AI agents is getting crowded. Here’s how OneCLI stacks up:
HashiCorp Vault is the enterprise heavyweight. It offers dynamic, short-lived credentials with full audit logging, automatic rotation, and integrations with every major cloud provider. But it’s complex to operate, requires significant infrastructure, and wasn’t designed with AI agent workflows in mind. For a team running 3 agents calling 5 APIs, Vault is overkill.
Aembit takes a different approach entirely — replacing static credentials with temporary, just-in-time access rights that combine agent and user identity into a “Blended Identity.” It’s a managed enterprise solution with per-task contextual auth. Powerful, but not open-source and not something you spin up locally in 30 seconds.
Composio focuses on the integration layer, providing managed authentication for hundreds of SaaS tools. It’s developer-friendly but opinionated about which services you connect to.
AgentSecrets positions itself as a zero-knowledge credential platform for AI teams, architecturally preventing prompt injection credential theft — similar goals to OneCLI but with a different implementation.
OneCLI’s niche is clear: it’s the lightweight, self-hosted, open-source option. Apache-2.0 licensed, one Docker command to run, no external dependencies. If you want enterprise-grade dynamic credentials with SSO and compliance, look at Vault or Aembit. If you want a fast, simple proxy you control, OneCLI is hard to beat at its price point (free).
The Bigger Picture: Why This Matters Now
The timing of OneCLI’s traction isn’t accidental. 2026 has seen an explosion in autonomous AI agents — coding agents, research agents, customer support agents, data pipeline agents — and every single one of them needs API access. The number of credentials floating around in agent environments has grown by an order of magnitude, but security practices haven’t kept pace.
As Aembit’s research puts it: giving an agent a static API key is “the digital equivalent of handing a master key to a contractor who only needs to fix a single sink.” The industry is moving toward ephemeral, scoped, just-in-time credentials, but most teams aren’t there yet. OneCLI sits in the gap — it’s not the final answer, but it’s a meaningful step above “paste the key in .env and hope for the best.”
The project is early (601 stars, 2 contributors, 42 commits), but the architecture is sound and the problem is real. Whether OneCLI itself becomes the standard or inspires better tooling from larger players, the proxy-based credential isolation pattern deserves attention.
FAQ
Is OneCLI free to use?
Yes. OneCLI is fully open-source under the Apache-2.0 license. You can self-host it at no cost using Docker. There are no paid tiers or usage limits in the current release (v1.1.2).
Does OneCLI work with any AI agent framework?
It works with any agent that makes HTTP calls and supports proxy configuration via the HTTP_PROXY environment variable. No SDK or code changes needed — it’s framework-agnostic. The main caveat is that some Node.js versions require the NODE_USE_ENV_PROXY flag.
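If you want to fail fast rather than silently bypass the proxy, you can gate agent startup on the version thresholds mentioned above. A sketch (the version cutoffs follow this article; treat them as an assumption to verify against the Node.js release notes):

```typescript
// Returns true if this Node.js version honors HTTP_PROXY via the
// NODE_USE_ENV_PROXY flag, per the 22.21+/24+ thresholds cited above.
function supportsEnvProxy(nodeVersion: string): boolean {
  const [major, minor] = nodeVersion.split(".").map(Number);
  return major >= 24 || (major === 22 && minor >= 21);
}

// Example startup check. NODE_USE_ENV_PROXY must be set in the
// environment before the Node process launches to take effect.
if (!supportsEnvProxy(process.versions.node)) {
  console.warn(
    `Node ${process.versions.node} may ignore HTTP_PROXY; ` +
      `upgrade, or configure an explicit proxy agent in your HTTP client.`
  );
}
```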
Can OneCLI prevent prompt injection attacks?
Partially. It prevents an attacker from stealing your API keys through a compromised agent, since the agent never holds real credentials. However, it cannot prevent a prompt-injected agent from making unauthorized requests through the gateway to services it already has access to. It reduces the blast radius; it doesn’t eliminate it.
How does OneCLI compare to just using environment variables?
With environment variables, a compromised agent can read and exfiltrate every key it has access to. With OneCLI, the agent only holds placeholder tokens — the real keys exist only in the encrypted vault and are injected at the proxy level at request time. It also adds per-agent scoping, so each agent only reaches the specific APIs it needs.
What databases does OneCLI support?
It ships with embedded PGlite for zero-config local use, or you can connect an external PostgreSQL instance for production deployments. All credential encryption uses AES-256-GCM regardless of the database backend.