Anthropic Signs Compute Deal With Google & Broadcom
Anthropic has signed a new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, with deployment expected to begin in 2027. This expansion of our compute infrastructure will power our frontier Claude models and help us serve extraordinary demand from customers worldwide.
What the Google and Broadcom TPU Deal Actually Covers
“This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development,” said Krishna Rao, CFO of Anthropic. “We are making our most significant compute commitment to date to keep pace with our unprecedented growth.”
The vast majority of the new compute will be sited in the United States, making this a major expansion of our November 2025 commitment to invest $50 billion in strengthening American computing infrastructure. The partnership also deepens our existing work with Google Cloud — building on the increased TPU capacity announced last October — as well as our relationship with Broadcom.
Claude’s Revenue Hit $30 Billion — Here’s What’s Driving It
Demand from Claude customers has accelerated sharply in 2026. Our run-rate revenue has now surpassed $30 billion, up from approximately $9 billion at the end of 2025. When we announced our Series G fundraising in February, more than 500 business customers were each spending over $1 million on an annualized basis. Today that number exceeds 1,000, a doubling in less than two months.
That kind of growth doesn’t happen quietly. It’s the reason this compute commitment exists.
How Anthropic Is Scaling AI Infrastructure Without Betting on One Chip
We train and run Claude on a range of AI hardware — AWS Trainium, Google TPUs, and NVIDIA GPUs — which means we can match workloads to the chips best suited for them. But this isn’t just about flexibility. This diversity of platforms translates to better performance and greater resilience for customers who depend on Claude for critical work.
Amazon remains our primary cloud provider and training partner, and we continue to work closely with AWS on Project Rainier. Claude remains the only frontier AI model available to customers on all three of the world’s largest cloud platforms: Amazon Web Services (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry).
Whether every workload runs perfectly across all three platforms at gigawatt scale is something the industry hasn't yet tested. What we can say is that the architecture is designed for it.
Why the Majority of This Compute Is Staying in the United States
This partnership is a direct continuation of Anthropic’s November 2025 pledge to invest $50 billion in American computing infrastructure. Siting the new capacity domestically isn’t incidental — it’s a deliberate part of how Anthropic is positioning its enterprise AI spending and growth strategy for U.S.-based customers and regulators alike. You’ve probably seen other AI companies make similar commitments; this one comes with a named partner, a named hardware type, and a named timeline.
FAQ
When will the new Anthropic TPU capacity from Google and Broadcom actually come online?
The agreement targets capacity coming online starting in 2027, so don’t expect this to change Claude’s performance next quarter. Anthropic already expanded its Google Cloud TPU capacity last October — so this is a continuation of an accelerating build-out, not a standing start. By 2027, the cumulative compute footprint should be meaningfully larger than anything Anthropic has run before.
Does this deal mean Anthropic is moving away from Amazon AWS?
No — Amazon stays Anthropic’s primary cloud and training partner, and Project Rainier is still active. What this partnership does is add Google TPUs and Broadcom chips to the mix, which lets Anthropic match specific workloads to the hardware best suited for them. Running on multiple platforms is a deliberate strategy, not a sign of tension with any one provider.
What does this mean for businesses already using Claude on AWS Bedrock or Azure?
Nothing changes for your existing setup. Claude remains available on all three major cloud platforms — Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry. More compute behind the scenes means better availability and fewer bottlenecks as demand keeps climbing.
How fast is Anthropic actually growing — is the $30 billion run-rate real?
The numbers check out and they’re striking. Run-rate revenue crossed $30 billion in 2026, up from roughly $9 billion at the end of 2025. The number of enterprise customers spending over $1 million annually doubled from 500 to 1,000 in under two months. Nobody expected that pace — including, it seems, Anthropic’s own CFO, who called it “unprecedented growth.”
Can Anthropic realistically manage compute across AWS Trainium, Google TPUs, and NVIDIA GPUs at the same time?
It’s more complex than running a single-platform stack, but Anthropic says matching workloads to the right chip improves both performance and resilience. Whether that holds at gigawatt scale is an open question — nobody has done this at this size before. The diversity does act as an insurance policy: if one platform hits capacity issues, the others can absorb demand.