On May 6, OpenAI, AMD, NVIDIA, Intel, Microsoft and Broadcom co-released MRC (Multipath Reliable Connection) and handed it to the Open Compute Project as an open spec. It’s not a product. It’s a networking protocol — and it’s pointed straight at InfiniBand’s last stronghold.
What MRC actually is
MRC is an extension of RoCE (RDMA over Converged Ethernet) baked into the new generation of 800Gb/s NICs. A single GPU-to-GPU transfer can fan out across hundreds of paths and reroute around a dead link in microseconds, which lets builders collapse the multi-tier Clos topologies in use today. OpenAI says you can wire up 131,000 GPUs with only two layers of Ethernet switches: fewer hops, less power, and a simpler control plane than the three- or four-tier setups it replaces.
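The two numbers in that claim line up with textbook leaf-spine arithmetic. Here's a rough sketch of both ideas — the two-tier scale figure and the multipath spray-and-reroute behavior. The switch radix, the `spray` helper, and the hash-based path choice are illustrative assumptions, not anything from the MRC spec:

```python
# Back-of-the-envelope math for a non-blocking two-tier leaf-spine fabric.
# (The 512-port radix is an assumption for illustration, not from the spec.)

def two_tier_capacity(radix: int) -> int:
    """Max hosts in a two-tier leaf-spine fabric with same-radix switches.

    Each leaf splits its ports: radix/2 down to hosts, radix/2 up to
    spines. Up to `radix` leaves can attach to the spine tier, giving
    radix * (radix / 2) hosts total.
    """
    return radix * (radix // 2)

print(two_tier_capacity(512))  # 131072 -- the ~131,000 GPU figure

# Toy sketch of multipath spraying: a transfer is chopped into chunks,
# each hashed onto one of many equal-cost paths; a dead path is masked
# out and its chunks land on the survivors instead.
def spray(chunks, paths, dead=frozenset()):
    live = [p for p in paths if p not in dead]
    return {c: live[hash(c) % len(live)] for c in chunks}

routes = spray(range(8), paths=list(range(4)), dead={2})
assert 2 not in routes.values()  # no chunk routed over the dead path
```

The real protocol tracks per-path state on the NIC and re-sprays in microseconds; the point of the sketch is just that failover is a local mask-and-rehash, not a control-plane reconvergence.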
Why six rivals shipped it together
MRC is already running in production: OpenAI’s GB200 cluster with Oracle in Abilene, Texas, and Microsoft’s Fairwater. NVIDIA’s own Spectrum-X supports it out of the box. The subtext is loud — six of the seven companies that matter in AI infra just agreed that frontier training fabrics should run on open Ethernet, not InfiniBand. NVIDIA, which owns InfiniBand, signed the paper anyway. That tells you which way the wind is blowing.