TL;DR: Anthropic has secured a deal with Google and Broadcom for roughly 3.5 gigawatts of TPU capacity starting in 2027, as annualised Claude revenue crosses $30 billion. The agreement more than triples the company's current compute footprint.
Anthropic this week confirmed an expanded infrastructure deal with Google and chip designer Broadcom, locking in access to approximately 3.5 gigawatts of next-generation Tensor Processing Unit (TPU) capacity expected to come online in 2027. The agreement, which builds on an existing partnership first expanded in early April, represents one of the largest AI compute commitments by a non-hyperscaler company on record.
- Anthropic draws roughly 1 gigawatt of TPU compute from Google's infrastructure in 2026; the new deal more than triples that figure for 2027
- Broadcom CEO Hock Tan confirmed the trajectory directly, telling investors that demand from Anthropic is "expected to surge in excess of 3 gigawatts of compute" in 2027
- Anthropic's annualised revenue run-rate has surpassed $30 billion, up from approximately $9 billion at the close of 2025, driven by accelerating enterprise adoption of Claude
- The majority of new compute capacity will be sited in the United States, consistent with Anthropic's pledged $50 billion investment in domestic AI infrastructure
The strategic significance extends beyond raw scale. By anchoring its infrastructure roadmap to Google's custom TPUs rather than Nvidia GPUs, Anthropic is making an explicit bet on purpose-built AI silicon as the preferred substrate for both training and inference at the frontier. Broadcom, Google's long-standing partner in designing the TPU line, is positioned as a principal beneficiary and, analysts note, as an emerging alternative to Nvidia's dominant GPU supply chain.
The deal adds competitive pressure across the board. OpenAI relies primarily on Nvidia GPU clusters hosted on Microsoft Azure, while Meta has signed a separate expanded deal with Nvidia covering millions of chips. Anthropic's ability to guarantee multi-gigawatt TPU availability through 2027 could prove decisive as training and inference runs scale toward sustained power draws that strain existing supply agreements. Infrastructure watchers will be tracking whether the arrangement prompts similar compute lock-ins from other frontier labs over the coming quarters.