Top AI Product

Every day, hundreds of new AI tools launch across Product Hunt, Hacker News, and GitHub. We dig through the noise so you don't have to — surfacing only the ones worth your attention with honest, no-fluff reviews. Explore our latest picks, deep dives, and curated collections to find your next favorite AI tool.


OpenAI MRC Protocol scales Ethernet to 131,000 GPUs — and gets every chip giant to sign on

On May 6, OpenAI, AMD, NVIDIA, Intel, Microsoft and Broadcom co-released MRC (Multipath Reliable Connection) and handed it to the Open Compute Project as an open spec. It’s not a product. It’s a networking protocol — and it’s pointed straight at InfiniBand’s last stronghold.

What MRC actually is

MRC is an extension of RoCE (RDMA over Converged Ethernet) baked into the new generation of 800Gb/s NICs. A single GPU-to-GPU transfer can fan out across hundreds of paths, reroute around a dead link in microseconds, and flatten the deep Clos topologies everyone uses today. OpenAI says you can wire up 131,000 GPUs with only two tiers of Ethernet switches: fewer hops, less power, and a simpler control plane than the three- or four-tier setups it replaces.
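The 131,000 figure lines up with standard leaf/spine arithmetic. As a back-of-envelope sketch (this is illustrative math, not OpenAI's published sizing, and the radix-512 switch is an assumption): in a non-blocking two-tier fabric, each leaf splits its ports half down to GPUs and half up to spines, and a spine can reach one leaf per port.

```python
def max_endpoints_two_tier(radix: int) -> int:
    """Max hosts in a non-blocking two-tier leaf/spine fabric.

    Each leaf dedicates radix/2 ports to GPUs and radix/2 uplinks
    to spines; a spine with `radix` ports can connect `radix` leaves.
    """
    leaves = radix              # one spine port per leaf
    hosts_per_leaf = radix // 2  # the other half of each leaf's ports
    return leaves * hosts_per_leaf

# Hypothetical radix-512 switches land almost exactly on the headline number.
print(max_endpoints_two_tier(512))  # 131072
```

Anything beyond that ceiling is what forces today's three- and four-tier builds, with the extra hops and power the spec is trying to eliminate.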

Why six rivals shipped it together

MRC is already running in production: OpenAI’s GB200 cluster with Oracle in Abilene, Texas, and Microsoft’s Fairwater. NVIDIA’s own Spectrum-X supports it out of the box. The subtext is loud — six of the seven companies that matter in AI infra just agreed that frontier training fabrics should run on open Ethernet, not InfiniBand. NVIDIA, which owns InfiniBand, signed the paper anyway. That tells you which way the wind is blowing.

