Anthropic commits $200 billion to Google Cloud over five years for TPU chips
Anthropic, the AI lab behind the Claude family of models, signed a deal this week committing roughly $200 billion to Google Cloud over five years in exchange for compute capacity and chips, according to a report by The Information that was picked up across the tech press. The agreement, which begins in 2027 and runs through 2031, secures Anthropic about 5 gigawatts of compute, the kind of number that until recently described national-scale data center buildouts rather than a single customer's procurement. Alphabet, Google's parent, has agreed to invest at least $10 billion in Anthropic alongside the contract, with the figure rising to as much as $40 billion if Anthropic hits performance milestones specified in the contract.
The chips at the center of the deal are not Nvidia GPUs, the graphics processing units that have powered most generative AI training to date. They are Google's own TPUs, or tensor processing units, custom silicon Google has been developing since 2015 specifically for the matrix math that neural networks run on. The newest TPU generation, designed in partnership with Broadcom, is purpose-built for the transformer architecture (the model design that underpins modern large language models including Claude and Gemini). Google says TPU cost per unit of performance runs roughly 40% to 50% below comparable Nvidia setups for transformer training workloads. For Anthropic, which is on track for $20 billion in server costs in 2026 alone, that discount is the kind of thing that decides whether the company is gross-margin profitable on inference (running a trained model to answer user queries) or not.
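Taking the reported figures at face value, a back-of-envelope sketch of what a 40% to 50% cost discount would mean against a $20 billion annual server bill. The discount range and the server-cost estimate are the only inputs from the reporting; everything else is illustrative, and real cloud pricing would depend on commitment terms and utilization.

```python
# Back-of-envelope: a 40-50% lower cost per unit of compute applied
# to a $20B annual server bill. Illustrative only.
server_cost_on_gpus = 20e9          # reported 2026 server-cost estimate, USD

for discount in (0.40, 0.50):       # Google's claimed cost-reduction range
    tpu_cost = server_cost_on_gpus * (1 - discount)
    savings = server_cost_on_gpus - tpu_cost
    print(f"{discount:.0%} lower cost -> ${tpu_cost/1e9:.0f}B spend, ${savings/1e9:.0f}B saved")
```

Even the low end of the range, roughly $8 billion a year, is larger than most AI labs' entire annual revenue, which is why the discount bears directly on inference margins.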
The strategic move underneath the headline number is Anthropic finally diversifying its compute base. Through 2024 and most of 2025 the company was effectively monogamous with Amazon Web Services and its Trainium chips, with a smaller TPU footprint at Google. The new commitment flips that ratio. Anthropic is now the second-biggest TPU customer in the world after Google itself. Amazon stays in the picture, and the previously announced $50 billion AWS commitment continues, but the center of gravity for new Claude training runs is moving to Google's data centers in Ohio and Iowa. The dual-cloud setup also gives Anthropic leverage in the next round of AWS pricing negotiations, something OpenAI never had with its near-exclusive Microsoft Azure relationship.
For Google, the contract is a counterweight to a long-running narrative that the company is losing the cloud AI race to Microsoft and AWS. Alphabet's cloud backlog (the dollar value of contracts signed but not yet recognized as revenue) doubled in the most recent quarter to over $460 billion. The Anthropic deal alone accounts for more than 40% of that figure. It also validates the TPU economics in a way that internal Google use never could. If Anthropic builds the next Claude generation on TPUs and the resulting model is competitive with whatever OpenAI ships on Azure GPUs, the rest of the frontier-model market starts to take TPU procurement seriously, which is exactly the proof point Google needs to unlock new enterprise sales.
The skeptical read is about concentration risk on both sides. Anthropic is now committed to spending what is roughly five years of its current revenue on a single supplier's roadmap. If Google's next-generation TPU slips, or if a competing architecture (custom chips from Microsoft, Meta, or Amazon's Trainium 3) leapfrogs the TPU on price-performance, Anthropic is locked in. On the Google side, $200 billion of contracted revenue from a customer that is itself burning cash and dependent on continued venture funding rounds is a real counterparty exposure. Anthropic raised at a roughly $250 billion valuation earlier this year, but if the AI capex cycle cools the way some analysts now expect by late 2027, both ends of the deal get re-examined.
The butterfly effects ripple in three directions. Nvidia, which has been the default winner of every AI infrastructure announcement for two years, did not lose much on the news because its order book is full through 2027, but the long-tail story (the question of who will be buying H300s and Blackwells in 2028) got slightly worse. Broadcom, which co-designs the TPU silicon with Google, picked up roughly 6% on the week and is now valued north of $2 trillion. Power utilities in the regions where the new data centers are landing, particularly American Electric Power in Ohio, saw a quiet bid. The 5 gigawatts of compute capacity in the contract is roughly the electricity load of a mid-sized US state, and the grid interconnection queue is already the binding constraint on AI buildouts.
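To put 5 gigawatts in context, a rough annualized-energy calculation. The 5 GW figure is from the contract; the assumption of continuous full-load draw is an upper bound, and the roughly 10,800 kWh per year average US household consumption figure is an outside reference point, not from the reporting.

```python
# Rough scale check on the contracted 5 GW of compute capacity.
# Assumes continuous draw at full load (an upper bound) and an
# average US household consumption of ~10,800 kWh/year (assumption).
capacity_gw = 5.0
hours_per_year = 24 * 365
annual_twh = capacity_gw * hours_per_year / 1000   # GWh -> TWh

household_kwh_per_year = 10_800
households = annual_twh * 1e9 / household_kwh_per_year

print(f"~{annual_twh:.1f} TWh/year, roughly {households/1e6:.1f} million households")
```

On those assumptions the contracted capacity works out to around 44 TWh a year, on the order of four million households, which is why the grid interconnection queue, not chip supply, is the binding constraint the article points to.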
What to watch. Anthropic is expected to announce a new Claude generation, internally referenced as Claude Mythos, sometime in the third quarter; that release will be the first proof point of whether the TPU-trained models hit the performance bar Anthropic is targeting. Google's next earnings release on July 24 will give the first official commentary on backlog conversion and capex pacing. The US Center for AI Standards and Innovation announced this week that it will run pre-release evaluations on models from Google, Microsoft, and xAI; Anthropic is already in the program, and the speed at which CAISI clears Mythos for public release is now also a function of the Google relationship. And Nvidia's August quarterly print will be the first read on whether the broader AI infrastructure market is still growing or starting to digest.