TL;DR:
- Anthropic expands Google Cloud usage, planning up to 1 million TPUs by 2026.
- The multi-billion-dollar deal will deliver over a gigawatt of compute capacity.
- Anthropic now serves over 300,000 business customers, with large accounts rising sharply.
- Multi-cloud setup includes Google TPUs, Amazon Trainium, and NVIDIA GPUs for AI training.
US-based AI startup Anthropic has announced a major expansion of its cloud computing infrastructure through a multi-billion-dollar agreement with Google Cloud.
As part of the deal, Anthropic plans to deploy up to one million Tensor Processing Units (TPUs) by 2026, significantly boosting its AI compute capacity. The expansion is expected to deliver over a gigawatt of power, underscoring Anthropic’s ambition to scale its AI operations to meet growing enterprise demand.
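The gigawatt figure is roughly consistent with the chip count. A back-of-envelope sketch makes the connection visible, assuming (purely for illustration, since neither company has published per-chip power figures) around one kilowatt per deployed chip including cooling and facility overhead:

```python
# Rough sanity check of the "over a gigawatt" claim.
# The per-chip wattage below is an assumption for illustration,
# not a published Anthropic or Google figure.
chips = 1_000_000
watts_per_chip = 1_000  # assumed, includes cooling/facility overhead

total_gw = chips * watts_per_chip / 1e9
print(f"Estimated capacity: {total_gw:.1f} GW")  # ~1.0 GW
```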
The startup, known for its Claude AI models, now serves more than 300,000 business clients. Over the past year, the number of large accounts has increased nearly sevenfold. This surge highlights the rapid adoption of AI solutions in enterprise settings and the strategic importance of robust cloud infrastructure.
Multi-Cloud Strategy Drives Flexibility
Anthropic’s AI training relies on a diverse combination of hardware, including Google TPUs, Amazon Trainium chips, and NVIDIA GPUs.
This multi-cloud strategy allows the company to optimize performance, improve cost efficiency, and maintain flexibility in large-scale model training. Anthropic continues to partner with Amazon on Project Rainier, a massive compute cluster spanning several US data centers, further strengthening its cross-cloud capabilities.
The deployment of TPUs in Google Cloud is particularly significant, though the company has not yet specified which generation of TPUs will be used. Google’s seventh-generation Ironwood TPU, for instance, delivers up to 4,614 teraflops per chip with 192GB of high-bandwidth memory, six times the memory of the previous Trillium generation. Whether Anthropic will use Ironwood or earlier TPU models remains unclear.
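For a sense of scale, here is a hedged, hypothetical calculation of what one million Ironwood-class chips would represent, using Google’s published per-chip figures (again, Anthropic has not confirmed which generation it will deploy):

```python
# Hypothetical aggregate figures IF all one million chips were
# Ironwood-class; per-chip numbers are Google's published specs.
chips = 1_000_000
tflops_per_chip = 4_614   # Ironwood peak throughput, per Google
hbm_gb_per_chip = 192     # Ironwood high-bandwidth memory, per Google

peak_exaflops = chips * tflops_per_chip / 1e6  # TFLOPS -> EFLOPS
total_hbm_pb = chips * hbm_gb_per_chip / 1e6   # GB -> PB
print(f"Peak compute: ~{peak_exaflops:,.0f} EFLOPS")  # ~4,614 EFLOPS
print(f"Total HBM:    ~{total_hbm_pb:,.0f} PB")       # ~192 PB
```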
Tools for Multi-Hardware Optimization
Operating across multiple hardware platforms presents technical challenges, especially for enterprise AI teams and AI-native startups.
Anthropic leverages open-source tools like the OpenXLA compiler framework, which lets machine learning frameworks such as PyTorch run efficiently on TPUs with minimal code changes. Additionally, the PJRT API, OpenXLA’s hardware-agnostic device runtime interface, provides a standard layer across TPUs, Trainium, and GPUs, making it easier to tune models and manage workloads across platforms.
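To illustrate the “minimal code changes” point, here is a minimal sketch of a standard PyTorch training step retargeted to a TPU through torch_xla, the PyTorch/XLA bridge that lowers through OpenXLA and PJRT. It assumes the torch_xla package is installed on a TPU host; the model and tensor sizes are placeholders:

```python
# A standard PyTorch step on a TPU via torch_xla. Only the device
# selection and the final mark_step() are TPU-specific.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()          # TPU-specific device selection
model = nn.Linear(512, 512).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(64, 512, device=device)
target = torch.randn(64, 512, device=device)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()
xm.mark_step()                    # flush the lazily traced graph to XLA
```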
Support from major players including Google, AWS, AMD, Intel, and NVIDIA underpins a collaborative ecosystem for AI hardware optimization. This backing lets Anthropic and other enterprises build orchestration, benchmarking, and profiling workflows that maximize performance and efficiency across different compute environments.
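In practice, cross-platform benchmarking can be as simple as a device-agnostic timing harness where only the device string changes between runs. The sketch below uses plain PyTorch; the matrix sizes and iteration counts are illustrative, not anything Anthropic has published:

```python
# Minimal device-agnostic matmul benchmark: the same harness runs on
# CPU or CUDA GPUs by swapping the device string (illustrative only).
import time
import torch

def bench_matmul(device: str, n: int = 4096, iters: int = 10) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    torch.matmul(a, b)                 # warm-up pass
    if device == "cuda":
        torch.cuda.synchronize()       # wait for async GPU work
    start = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

print(f"cpu:  {bench_matmul('cpu'):.4f} s/iter")
if torch.cuda.is_available():
    print(f"cuda: {bench_matmul('cuda'):.4f} s/iter")
```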
Implications for Enterprise AI
Anthropic’s expanded partnership with Google Cloud positions the company to serve large-scale enterprise AI workloads worldwide. With expanded compute capacity, cross-platform tooling, and multi-cloud flexibility, Anthropic can better meet the demands of large-scale AI projects while maintaining efficiency and cost-effectiveness.
As AI adoption continues to accelerate, deals like this highlight the growing importance of strategic cloud partnerships and cutting-edge hardware for startups and enterprises aiming to stay competitive in the AI space.

