Anthropic just locked in $1.8 billion of inference capacity with Akamai over multiple years. One day earlier, they rented the entirety of Colossus 1 from xAI. Two massive compute deals in 48 hours.
What the deal actually is
This isn’t training compute. It’s pure inference — running Claude for paying users. Akamai brings something the xAI deal doesn’t: an edge network. Instead of stacking GPUs in one mega-datacenter, Anthropic is distributing Claude inference across Akamai’s globally spread infrastructure. Lower latency, closer to users, less concentration risk.
The contrast with the xAI route is the real story here. xAI = one giant supercluster. Akamai = thousands of edge nodes. Anthropic is placing both bets at once.
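To make that contrast concrete, here's a minimal sketch of the routing decision an edge setup implies: pick the closest node that still has capacity. Everything in it is an assumption for illustration only (the node names, latency figures, and route_request helper are hypothetical, not anything Anthropic or Akamai has published).

```python
from dataclasses import dataclass

@dataclass
class InferenceNode:
    name: str
    region: str
    rtt_ms: float        # network round trip from the user, in milliseconds (assumed)
    has_capacity: bool   # whether the node can take another request right now

# Hypothetical topology: one mega-datacenter vs. a handful of edge points of presence.
CENTRALIZED = [InferenceNode("supercluster-1", "us-central", 95.0, True)]
EDGE = [
    InferenceNode("edge-fra", "eu-frankfurt", 12.0, True),
    InferenceNode("edge-sin", "ap-singapore", 18.0, False),  # currently at capacity
    InferenceNode("edge-iad", "us-east", 25.0, True),
]

def route_request(nodes: list[InferenceNode]) -> InferenceNode:
    """Pick the lowest-latency node that still has capacity."""
    candidates = [n for n in nodes if n.has_capacity]
    if not candidates:
        raise RuntimeError("no inference capacity available")
    return min(candidates, key=lambda n: n.rtt_ms)

print(route_request(CENTRALIZED).name)  # supercluster-1, ~95 ms away
print(route_request(EDGE).name)         # edge-fra, ~12 ms away
```

The point of the sketch: with a single supercluster there is nothing to choose between, while an edge network turns every request into a placement decision that trades latency against capacity.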
Why this matters
Dario Amodei dropped the number that explains everything: Anthropic grew 80x in Q1 against a 10x plan. Annualized revenue is $30B. They literally cannot serve Claude fast enough.
Edge inference for an LLM provider this size is new territory. Most LLM inference today runs out of centralized cloud datacenters, not edge networks. If Akamai's edge can hold Claude's latency at scale, the inference cost structure flips: coding agents and real-time apps suddenly play by different economics than chatbot-style usage.
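A rough way to see why latency matters more for agents than for chat: an agent loop makes many sequential model calls, so network round-trip time compounds. The numbers below are illustrative assumptions, not measurements from either deal.

```python
def network_overhead_s(calls: int, rtt_ms: float) -> float:
    """Total time spent purely on network round trips, in seconds."""
    return calls * rtt_ms / 1000

# Assumed workloads: one chat turn vs. an agent making 40 sequential model calls per task.
for label, calls in [("chat turn", 1), ("coding-agent task", 40)]:
    central = network_overhead_s(calls, rtt_ms=90)  # far-away mega-datacenter (assumed RTT)
    edge = network_overhead_s(calls, rtt_ms=15)     # nearby edge node (assumed RTT)
    print(f"{label}: {central:.1f}s of network time centralized vs {edge:.1f}s at the edge")
```

For a single chat turn the difference is imperceptible; across a 40-call agent task the assumed numbers give roughly 3.6 seconds of pure network wait versus 0.6, before any compute happens. That is the sense in which agentic and real-time workloads reward putting inference closer to the user.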