One npx command. That’s all it takes to spin up a local proxy that gives you OpenAI API access without spending a cent on API credits — as long as you have a ChatGPT subscription. A developer named Evan Zhou published openai-oauth this week, and it landed on the Hacker News front page with 40 points and a lively comment section that quickly turned into a debate about pricing fairness, Terms of Service boundaries, and whether OpenAI secretly wants this to exist.
The timing is no accident. Anthropic locked down third-party OAuth access to Claude subscriptions back in January 2026, sparking developer backlash and subscription cancellations. OpenAI, meanwhile, has taken the opposite approach, partnering with third-party tools like OpenCode and keeping the door open. openai-oauth walks right through that door, and the community isn't sure whether to applaud or cringe.
How It Actually Works
The technical trick behind openai-oauth is straightforward once you understand the plumbing. When OpenAI launched the Codex CLI, they created a special backend endpoint at chatgpt.com/backend-api/codex/responses that lets Codex authenticate via ChatGPT OAuth tokens instead of API keys. This means your ChatGPT Plus ($20/month) or Pro ($200/month) subscription already grants access to an API-compatible endpoint — you just couldn’t reach it outside of Codex.
openai-oauth bridges that gap. Run npx openai-oauth, authenticate through the same OAuth flow Codex uses, and the tool spins up a local proxy at 127.0.0.1:10531/v1. That proxy exposes three standard OpenAI-compatible endpoints: /v1/responses, /v1/chat/completions, and /v1/models. No API key required. It supports streaming responses, tool calls, and reasoning traces. Point any OpenAI-compatible client at it, and it works.
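To make "point any OpenAI-compatible client at it" concrete, here is a minimal sketch of what a request to the local proxy looks like. The port and paths come from the project's description above; the model id "gpt-5.4" is a placeholder, and the claim that no Authorization header is needed reflects the proxy handling OAuth for you:

```typescript
// Base URL of the local proxy started by `npx openai-oauth`.
const PROXY_BASE = "http://127.0.0.1:10531/v1";

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Build a standard OpenAI-style chat completions request. No API key:
// the proxy injects the OAuth credentials it manages on your behalf.
function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    url: `${PROXY_BASE}/chat/completions`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages, stream: false }),
    },
  };
}

const req = buildChatRequest("gpt-5.4", [
  { role: "user", content: "Say hello" },
]);
// To actually send it: const res = await fetch(req.url, req.init);
console.log(req.url); // http://127.0.0.1:10531/v1/chat/completions
```

Any client that lets you override the base URL (the official openai SDK included) can be pointed at PROXY_BASE the same way.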
The project is a TypeScript monorepo with three packages: openai-oauth-core for shared transport and auth refresh, openai-oauth-provider for Vercel AI SDK integration, and the main openai-oauth CLI. Configuration is flexible — you can set custom host, port, model allowlists, and even override the base URL and OAuth client ID via CLI flags. The auth credentials live in ~/.codex/auth.json, the same file Codex itself uses.
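Since the tool reuses Codex's credentials from ~/.codex/auth.json, any client of that file has to handle token expiry. The field names below are an assumption for illustration, not the actual schema; the refresh-early pattern is the standard OAuth practice:

```typescript
// Hypothetical shape of the cached Codex credentials. The real field
// names in ~/.codex/auth.json may differ; this is illustrative only.
interface CodexAuth {
  access_token: string;
  refresh_token: string;
  expires_at: number; // unix timestamp in milliseconds (assumed)
}

// Refresh a minute early so in-flight requests don't race token expiry.
function needsRefresh(auth: CodexAuth, now: number = Date.now()): boolean {
  return auth.expires_at - now < 60_000;
}

const sample: CodexAuth = {
  access_token: "eyJ...",
  refresh_token: "rt_...",
  expires_at: Date.now() + 3_600_000, // one hour out
};
console.log(needsRefresh(sample)); // false
```

The openai-oauth-core package handles this transparently; the sketch just shows why a shared auth-refresh layer exists at all.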
The Pricing Gap That Made This Possible
Here’s the economic reality that makes openai-oauth attractive. OpenAI API pricing is pay-per-token: GPT-5.4 runs $2.50 per million input tokens and $15 per million output tokens. A developer building an application or running agentic coding loops can easily burn through $50–$100 in a single day of heavy usage. Meanwhile, ChatGPT Plus costs a flat $20/month with generous usage limits — the same account that, through the Codex endpoint, now gives you API-level access.
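The arithmetic is easy to check. Using the quoted GPT-5.4 rates, a plausible heavy agentic day (the token counts below are illustrative assumptions) already exceeds a month of Plus:

```typescript
// Article's quoted GPT-5.4 API rates.
const INPUT_PER_M = 2.5;   // $ per million input tokens
const OUTPUT_PER_M = 15.0; // $ per million output tokens

function apiCost(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1e6) * INPUT_PER_M + (outputTokens / 1e6) * OUTPUT_PER_M;
}

// One heavy day of agentic coding: assume 10M input, 2M output tokens.
const heavyDay = apiCost(10_000_000, 2_000_000);
console.log(heavyDay.toFixed(2)); // "55.00" — nearly three Plus months in a day
```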
The math doesn’t add up in OpenAI’s favor, and that’s exactly why this project is controversial. As one Hacker News commenter put it, it’s like “going to an all-you-can-eat buffet” — the subscription price was never designed to subsidize programmatic API usage at scale.
But the counterargument is equally compelling: OpenAI built this endpoint. They gave Codex CLI users OAuth-based API access as a feature. They partnered with OpenCode to extend this same authentication model to third-party tools. If the endpoint exists and works with standard OAuth, is it really an exploit — or just an undocumented feature?
The Anthropic Contrast
The backdrop here matters. In January 2026, Anthropic deployed server-side checks blocking all third-party tools from using Claude subscription OAuth tokens. The policy was clear: OAuth authentication is only for Claude Code and Claude.ai. Using tokens in any other product — even Anthropic’s own Agent SDK — violates the Consumer Terms of Service. George Hotz called it “a huge mistake.” Developers canceled subscriptions. The community was furious.
OpenAI watched and took the opposite bet. They officially partnered with OpenCode, signaling that third-party tools using ChatGPT OAuth were welcome. On the Hacker News thread for openai-oauth, the developer cited this partnership as a positive sign: “it’s a good sign at least.” Another commenter predicted OpenAI would “make this official rather than ban it.”
This strategic divergence is significant. Anthropic chose to protect margins by locking down access. OpenAI chose to grow the ecosystem by keeping it open — at least for now. openai-oauth is a direct product of that openness.
The Community Is Split
The Hacker News discussion reveals a genuine divide. On one side, developers see practical value: if you’re already paying for ChatGPT Plus, why shouldn’t you be able to programmatically access the same models? The tool is explicitly limited to personal, local experimentation. It’s AGPL-3.0 licensed. The README warns against running it as a hosted service, sharing access, or redistributing tokens.
On the other side, the concerns are real. One commenter warned the method has “a short shelf life” due to detectable usage pattern anomalies. Another cautioned developers against building anything serious on top of it: relying on an unofficial endpoint for commercial products “will silently leave you marginalized from serious software.” The project has 59 GitHub stars and 6 forks as of today — popular enough to get attention, small enough that it’s clearly early-stage.
There’s also a practical limitation most people miss: only Codex-supported models are accessible. You can’t reach every model in OpenAI’s lineup. The proxy is stateless, so you need to send full conversation history with each request. And model availability depends on your subscription tier — a Plus account won’t see the same models as a Pro account.
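The statelessness point has a direct consequence for client code: the conversation lives on your side, and every request carries the whole transcript. A minimal sketch of that pattern (names are mine, not the project's):

```typescript
interface Msg {
  role: "user" | "assistant";
  content: string;
}

// The client, not the proxy, owns conversation state.
class Conversation {
  private history: Msg[] = [];

  // Record a user turn and return the full payload to send with it.
  userTurn(content: string): Msg[] {
    this.history.push({ role: "user", content });
    return [...this.history];
  }

  // Record the model's reply so the next request includes it.
  assistantTurn(content: string): void {
    this.history.push({ role: "assistant", content });
  }
}

const convo = new Conversation();
const payload1 = convo.userTurn("What is 2+2?");
convo.assistantTurn("4");
const payload2 = convo.userTurn("And times 3?");
console.log(payload2.length); // 3 — both earlier turns ride along
```

This also means token usage grows with conversation length, which compounds the rate-limit concerns raised above.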
Who This Is Actually For
openai-oauth isn’t for building production applications. The disclaimers make that clear, and the technical constraints reinforce it. It’s for a specific audience: developers who already pay for ChatGPT, want to experiment with API-based workflows locally, and don’t want to set up a separate billing account for API credits just to test an idea.
Think of it as a bridge for prototyping. You want to test how an agent framework handles tool calls? Wire it up to the local proxy. Curious whether a Vercel AI SDK integration works with OpenAI’s responses endpoint? The openai-oauth-provider package handles it. Want to compare model behavior between the web interface and the API? Now you can, without a credit card on file.
For anything beyond personal experimentation — anything that touches production, anything shared, anything commercial — the standard API with proper billing is the only legitimate path.
FAQ
Is openai-oauth free to use?
The tool itself is free and open-source (AGPL-3.0). However, it requires a ChatGPT subscription (Plus at $20/month or Pro at $200/month) to authenticate. It doesn’t bypass payment — it redirects your existing subscription’s access through a local proxy.
Does openai-oauth violate OpenAI’s Terms of Service?
This is the gray area. OpenAI hasn’t explicitly banned the Codex endpoint from third-party use, and they’ve partnered with tools like OpenCode that use the same authentication mechanism. But the endpoint was designed for Codex, and OpenAI could change access policies at any time. The project’s own disclaimer warns that misuse may result in rate limits, suspension, or termination.
How does openai-oauth compare to using the official OpenAI API?
The official API offers the full model lineup, guaranteed uptime, SLA support, and is the only option approved for commercial use. openai-oauth only exposes Codex-supported models, has no uptime guarantees, and is restricted to personal experimentation. The trade-off is cost: API access is pay-per-token, while openai-oauth piggybacks on a flat-rate subscription.
What are the alternatives for free or cheap OpenAI API access?
For truly free local inference, tools like Ollama and LocalAI let you run open-source models with OpenAI-compatible APIs. For multi-provider routing, LiteLLM acts as an API gateway with budget controls. For official free-tier access, OpenAI offers limited free credits to new API accounts.
Can OpenAI shut this down?
Yes. OpenAI controls the Codex endpoint and can modify authentication requirements, add rate limiting, or block non-Codex clients at any time. The project’s developer acknowledges this risk, which is why the tool is positioned strictly for local experimentation rather than anything you’d depend on.