Top AI Product

We track trending AI tools across Product Hunt, Hacker News, GitHub, and more, then write honest, opinionated takes on the ones that actually matter. No press releases, no sponsored content. Just real picks, published daily. Subscribe to stay ahead without drowning in hype.


GLM-5 Just Dropped, and It’s the Open-Source Model Nobody Saw Coming

So Zhipu AI quietly released [GLM-5](https://huggingface.co/zai-org/GLM-5) on February 11th, and honestly, this thing deserves way more noise than it’s getting in the Western AI bubble. We’re talking about a 744-billion-parameter Mixture-of-Experts model with 40B active parameters per token, fully open-sourced under the MIT license. Yes, MIT — the “do whatever you want with it” license. And here’s the kicker: the entire model was trained on Huawei Ascend chips. Not a single NVIDIA GPU was involved.
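To see why "744B total, 40B active" isn't a contradiction, here's a toy Mixture-of-Experts layer. Each token is routed to only its top-k experts, so compute per token scales with the experts you pick, not the total parameter count (for GLM-5 that works out to roughly 5% of the weights firing per token). This is a generic MoE sketch, not Zhipu's actual routing code; all names and dimensions here are illustrative.

```python
import numpy as np

def moe_forward(x, experts_w, gate_w, top_k=2):
    """Toy MoE layer: route a token to its top-k experts.

    Only the selected experts are evaluated, so per-token compute
    depends on top_k, not on the total number of experts.
    """
    logits = x @ gate_w                    # router scores, one per expert
    top = np.argsort(logits)[-top_k:]      # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts
    # Weighted sum of just the selected experts' outputs.
    return sum(w * (x @ experts_w[i]) for i, w in zip(top, weights))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
x = rng.standard_normal(d)
experts_w = rng.standard_normal((n_experts, d, d))
gate_w = rng.standard_normal((d, n_experts))
y = moe_forward(x, experts_w, gate_w, top_k=2)
print(y.shape)  # (8,)
```

Here 14 of the 16 experts never run for this token, which is the whole trick: you get the capacity of a huge model at the inference cost of a much smaller one.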

That last point alone makes this a big deal. With US export controls squeezing China’s access to high-end NVIDIA hardware, Zhipu basically proved that you can train a frontier-class model on domestic silicon and still compete with the best. GLM-5 scored above 50 on the [Artificial Analysis Intelligence Index](https://artificialanalysis.ai/models/glm-5), making it the first open-weights model to ever cross that threshold. It sits comfortably alongside Claude Opus 4.5, GPT-5.2, and Gemini 3.0 Pro — not behind them.

The technical underpinnings are genuinely interesting too. Zhipu built a custom async reinforcement learning framework called [Slime](https://github.com/THUDM/slime) that decouples inference, evaluation, and parameter updates into parallel pipelines, killing the idle-time bottleneck that plagues standard RLHF training. They also integrated DeepSeek Sparse Attention to keep inference costs reasonable despite the model’s massive size. [VentureBeat highlighted](https://venturebeat.com/technology/z-ais-open-source-glm-5-achieves-record-low-hallucination-rate-and-leverages) its record-low hallucination rate, which matches my own testing: the model feels noticeably more grounded than other open alternatives.
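The decoupling idea is easier to grok with a toy version. Slime's real implementation lives in its repo; this sketch just shows the shape of the trick: generation, reward scoring, and parameter updates run as separate stages connected by queues, so the generator keeps producing rollouts while earlier ones are still being scored, instead of the whole loop blocking at each stage. Everything below (stage names, the string "rollouts") is illustrative.

```python
import queue
import threading

# Toy async RL loop: three stages in three threads, linked by queues.
# No stage waits for the full round trip, which is the idle time a
# strictly synchronous RLHF loop pays on every step.

rollouts = queue.Queue(maxsize=4)   # generator -> evaluator
scored = queue.Queue(maxsize=4)     # evaluator -> updater
N = 8
updates = []

def generator():
    for step in range(N):
        rollouts.put(f"rollout-{step}")   # stand-in for model sampling
    rollouts.put(None)                     # sentinel: no more work

def evaluator():
    while (item := rollouts.get()) is not None:
        scored.put((item, len(item)))     # stand-in for a reward model
    scored.put(None)

def updater():
    while (item := scored.get()) is not None:
        updates.append(item)              # stand-in for a gradient step

threads = [threading.Thread(target=f) for f in (generator, evaluator, updater)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(updates))  # 8
```

The bounded queues also give you backpressure for free: if the updater falls behind, the generator eventually blocks instead of piling up stale rollouts.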

The community response has been strong. [BuildFastWithAI did a solid writeup](https://www.buildfastwithai.com/blogs/glm-5-released-open-source-model-2026), the [GitHub repo](https://github.com/zai-org/GLM-5) already has deployment guides for vLLM, SGLang, and other inference stacks, and API access through Z.ai comes in at roughly $1.00 per million input tokens — significantly cheaper than comparable proprietary models. If you’ve been waiting for an open model that can genuinely hold its own against closed-source heavyweights, GLM-5 might be exactly what you’ve been looking for.
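If you want a feel for what that pricing means in practice, here's a minimal sketch. The chat-completions payload shape is the one vLLM's OpenAI-compatible server accepts; whether Z.ai's hosted API uses exactly this shape, and the `"glm-5"` model identifier, are assumptions on my part. The cost helper just applies the roughly $1.00 per million input tokens quoted above.

```python
# Hypothetical request builder + cost estimator for GLM-5.
# Endpoint shape and model name are assumptions; the price is the
# ~$1.00 / 1M input tokens figure quoted in the post.

PRICE_PER_INPUT_TOKEN = 1.00 / 1_000_000  # USD

def build_request(prompt, model="glm-5", max_tokens=512):
    # Standard chat-completions payload, as served by e.g. vLLM.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def estimate_input_cost(n_input_tokens):
    return n_input_tokens * PRICE_PER_INPUT_TOKEN

req = build_request("Summarize the GLM-5 release in one sentence.")
print(estimate_input_cost(20_000))  # ~ $0.02 for 20k input tokens
```

Twenty thousand input tokens for about two cents is the kind of number that makes "just throw the whole codebase in the context window" workflows economically sane.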

