Top AI Product

We track trending AI tools across Product Hunt, Hacker News, GitHub, and more — then write honest, opinionated takes on the ones that actually matter. No press releases, no sponsored content. Just real picks, published daily. Subscribe to stay ahead without drowning in hype.


GPT-OSS-120B: OpenAI Finally Goes Open Source, and It’s Worth the Wait

I honestly didn’t think this day would come. OpenAI — the company that built its entire brand on keeping models locked behind APIs — just dropped [GPT-OSS](https://openai.com/index/introducing-gpt-oss/), their first open-weight large language model since GPT-2. And it’s not some watered-down afterthought. This thing is genuinely impressive.

Let’s talk numbers. GPT-OSS-120B packs 117 billion total parameters, but here’s the clever part: it uses a mixture-of-experts (MoE) architecture that only activates 5.1 billion parameters per token. Combined with MXFP4 quantization, the whole model fits on a single 80GB GPU like an NVIDIA H100. That’s wild for something that performs near the level of OpenAI’s own o4-mini on reasoning benchmarks. If you want something even lighter, they also released [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b), which squeezes onto a 16GB card.
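The "fits on one 80GB GPU" claim checks out with quick napkin math. A minimal sketch, assuming MXFP4 stores roughly 4 bits per weight and ignoring activation memory, KV cache, and the small per-block scaling overhead the MX format adds:

```python
# Back-of-the-envelope: do 117B parameters at ~4 bits/weight fit in 80 GB?
total_params = 117e9       # GPT-OSS-120B total parameter count
bits_per_weight = 4        # MXFP4 is a 4-bit microscaling format (approximation)

weight_bytes = total_params * bits_per_weight / 8
weight_gb = weight_bytes / 1e9

print(f"Weights alone: ~{weight_gb:.1f} GB")  # ~58.5 GB, under the 80 GB budget
```

That leaves roughly 20 GB of headroom on an H100 for activations and KV cache, which is why single-GPU inference is feasible at all; at FP16 the same weights would need about four times the memory.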

The license is Apache 2.0 — no weird restrictions, no usage caps, no “open but actually not” fine print. You can fine-tune it, deploy it commercially, or just tinker with it on your laptop during a long flight (someone on [Hacker News](https://news.ycombinator.com/item?id=45847733) literally reported running the 120B variant on a MacBook while traveling). The model excels at reasoning, tool use, and chain-of-thought tasks, outperforming o3-mini and even beating o4-mini on competition math and health-related queries.

The community response has been massive. The [GitHub repo](https://github.com/openai/gpt-oss) quickly racked up nearly 20k stars, [Hugging Face](https://huggingface.co/openai/gpt-oss-120b) already has the weights ready for download, and Hacker News threads are full of people sharing benchmarks and deployment setups — from [running it at 500 tokens/sec on Nvidia GPUs](https://news.ycombinator.com/item?id=44819968) to building RAG agents on top of it. It’s also available through Azure, Google Cloud Vertex AI, and OpenRouter if you’d rather not self-host.

Whether OpenAI did this because of competitive pressure from Meta’s Llama and DeepSeek, or because they genuinely want to support the open-source ecosystem, the result speaks for itself. GPT-OSS-120B is a strong model, it runs on accessible hardware, and the license actually lets you do things with it. That’s all most developers were ever asking for.

