TL;DR
- Google launches TPU 8t and 8i chips for AI training and inference workloads
- New chips deliver 3x faster training and improved cost efficiency
- Google continues supporting Nvidia hardware despite in-house chip expansion
- Partnership with Nvidia strengthens cloud performance and AI infrastructure capabilities
Google Cloud has unveiled its latest generation of custom artificial intelligence chips, signaling a deeper push into in-house silicon while maintaining strong ties with industry leader Nvidia.
The company introduced two new tensor processing units (TPUs) under its eighth-generation lineup, the TPU 8t and TPU 8i.
The TPU 8t is designed specifically for training AI models, a process that requires vast computational power to teach systems how to recognize patterns and generate outputs. Meanwhile, the TPU 8i focuses on inference, the stage where trained models respond to real-world inputs, such as user prompts or automated tasks.
This split reflects a growing trend in AI infrastructure, where specialized hardware is tailored to distinct phases of machine learning to improve efficiency and performance.
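The training/inference split described above can be illustrated with a deliberately tiny sketch. This is a generic, framework-free example of the two phases, not Google's TPU software stack: the loop that adjusts a model's weight is the kind of work a training chip targets, while serving predictions from the frozen weight is the inference side.

```python
import numpy as np

# Toy data: learn y = 2x from examples.
# A generic sketch of the two ML phases; nothing here is TPU-specific.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x

# --- Training phase (the workload a training-oriented chip targets) ---
# Repeatedly nudge the weight to reduce prediction error; this loop is
# compute-heavy and is where most of the FLOPs are spent.
w = 0.0
lr = 0.1
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # gradient of mean squared error
    w -= lr * grad

# --- Inference phase (the workload an inference-oriented chip targets) ---
# The trained weight is now fixed; the model only answers queries.
def predict(new_x):
    return w * new_x

print(round(predict(3.0), 2))  # close to 6.0 once training has converged
```

Training dominates cost because of the repeated gradient passes; inference is a single cheap forward computation per query, which is why hardware vendors increasingly optimize the two phases separately.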
Performance Gains and Cost Efficiency
Google claims the new TPU generation delivers significant improvements over its predecessors. According to the company, the chips can achieve up to three times faster model training speeds while offering roughly 80% better performance per dollar.
Another standout feature is scalability. Google says its infrastructure can link more than one million TPUs into a single cluster, dramatically expanding the computational capacity available to enterprise clients. This scale could enable faster development of large AI models while lowering energy consumption and operational costs.
By focusing on low-power, high-efficiency chip design, Google continues to differentiate its TPUs from traditional GPUs, even as both compete in the same broader AI hardware market.
Not Replacing Nvidia Yet
Despite these advancements, Google is not moving away from Nvidia hardware. Instead, the company is positioning its custom chips as complementary to Nvidia's widely used GPUs.
Google confirmed that its cloud platform will continue to support Nvidia’s latest chips, including the upcoming Vera Rubin architecture. This approach mirrors strategies from other hyperscalers like Microsoft and Amazon, which are also developing in-house silicon while maintaining partnerships with Nvidia.
The current reality is that Nvidia remains deeply entrenched in the AI ecosystem, with a dominant market position that continues to grow despite increasing competition.
Strategic Partnership Still Intact
Rather than signaling a direct rivalry, Google’s latest move highlights a dual strategy: build internal capabilities while strengthening external partnerships. The company revealed it is collaborating with Nvidia to enhance networking performance within its cloud infrastructure.
A key focus of this partnership is improving a networking technology known as Falcon, an open-source system originally developed by Google. By optimizing how Nvidia-powered systems communicate within Google Cloud, both companies aim to deliver better performance for AI workloads.
This collaboration underscores a broader industry dynamic where competition and cooperation coexist, particularly in the rapidly evolving AI space.