TLDRs
- Google explores custom AI chips to improve efficiency and reduce reliance on Nvidia GPUs.
- Proposed chips include a memory processor and a new TPU design.
- Marvell partnership aligns with hyperscaler focus and diversifies its customer base.
- AI chip race intensifies as cloud giants invest in custom silicon solutions.
Google is reportedly deepening its push into custom artificial intelligence hardware, entering discussions with Marvell Technology to develop next-generation chips tailored for AI workloads.
The move signals a broader strategic shift as the tech giant looks to optimize performance, reduce costs, and lessen its dependence on third-party chip suppliers like Nvidia.
Push Toward Custom AI Silicon
At the center of the talks are two specialized chips designed to enhance how AI models are trained and deployed. One of the proposed designs is a memory processing unit that would complement Google’s existing Tensor Processing Units (TPUs), improving how data is handled during computation-heavy AI tasks. The second chip under consideration is a new TPU variant, purpose-built to run advanced AI models efficiently.
This dual-chip approach highlights Google’s intent to tightly integrate compute and memory functions, an increasingly important factor as AI models grow more complex and data-intensive. By designing chips in-house or through strategic partnerships, Google can fine-tune performance for its own ecosystem, particularly within its cloud infrastructure.
Timeline Targets and TPU Expansion
Sources suggest that the memory-focused chip could reach the design completion stage by 2027, after which it would move into test production. While that timeline may seem distant, it reflects the complexity involved in designing cutting-edge semiconductor solutions.
Meanwhile, Google has already been accelerating its use of TPUs across its services. These chips are becoming a cornerstone of its cloud offerings, helping businesses run AI workloads more efficiently. As demand for AI-driven applications continues to surge, TPUs are playing a growing role in driving cloud revenue growth.
Marvell’s Hyperscaler Strategy Alignment
For Marvell Technology, a potential partnership with Google would align closely with its focus on serving hyperscale cloud providers. The company has positioned itself as a key player in custom silicon and AI infrastructure, offering solutions that span from specialized accelerators to data center connectivity technologies.
A deal with Google could also help Marvell diversify its customer base. Much of its current business is tied to large-scale clients such as Amazon and Microsoft. A deeper collaboration with Google would reduce that concentration while strengthening Marvell’s foothold in the fast-growing AI chip market.
Additionally, Marvell’s broader initiatives, including work on high-speed connectivity solutions and optical interconnect technologies, suggest it is preparing for the next generation of AI data centers, where efficient communication between chips is just as critical as raw processing power.
AI Chip Race Intensifies Globally
The discussions come amid a broader industry trend where major cloud providers are racing to develop custom silicon. Companies like Google are increasingly seeking alternatives to GPUs supplied by Nvidia, which have dominated AI workloads but come with high costs and supply constraints.
Custom chips offer a way to optimize performance for specific use cases, reduce long-term expenses, and gain greater control over hardware design. Google has already collaborated with Broadcom on chip development, and a potential tie-up with Marvell would further expand its ecosystem of hardware partners.
Beyond compute chips, the competition is extending into the broader data center stack, including networking, memory, and interconnect technologies. Marvell’s reported plans to acquire optical connectivity firms underscore how critical these components are becoming in scaling AI systems efficiently.