TL;DR: Anthropic and Amazon have expanded their partnership to secure up to 5 gigawatts of compute capacity for Claude training and deployment, backed by a fresh $25 billion Amazon investment and a 10-year, $100 billion AWS spending commitment from Anthropic.

Announced on April 20, the expanded collaboration cements Anthropic's position as the most heavily resourced independent frontier AI lab. The 5-gigawatt figure is striking: it matches the power draw of a mid-sized city and exceeds any compute commitment a frontier AI company has publicly confirmed in a single contract. Capacity begins coming online this quarter, with nearly 1 gigawatt of Trainium2 and Trainium3 instances expected to be available by the end of 2026.

  • Amazon is investing up to $25 billion in new capital — on top of the $8 billion already committed since 2023 — bringing its total exposure to Anthropic to as much as $33 billion.
  • Anthropic will spend more than $100 billion on AWS technologies over the next decade, covering current and future generations of Amazon's proprietary Trainium AI chips.
  • Anthropic's annualised revenue run rate has crossed $30 billion, up from approximately $9 billion at the close of 2025, more than tripling in roughly four months.

That revenue acceleration is the key context for the infrastructure bet. At current adoption rates, Anthropic would exhaust its available compute well before its next major training run. Locking in 5 gigawatts of capacity now is less a luxury than a logistical necessity, particularly as enterprise demand for Claude, especially in regulated sectors like legal, finance, and healthcare, continues to compress the company's planning timelines.

The deal also reshapes the competitive dynamics between the two dominant AI-cloud pairings. Microsoft's deep partnership with OpenAI and Amazon's expanding commitment to Anthropic are increasingly converging on the same structure: multi-year, multi-billion-dollar agreements that blur the line between investor and infrastructure provider. As both hyperscalers race to anchor a top-tier lab, the contest for AI supremacy is being fought as much through balance sheets and power agreements as through model benchmarks.