Key Highlights
- Marvell shares climbed as high as 6.3% in premarket trading following a report that the company is in discussions with Google about two new AI processors.
- The collaboration involves developing a memory processing unit to complement Google’s tensor processing unit (TPU), plus a dedicated TPU optimized for AI inference tasks.
- Google intends to complete the memory processor blueprint by next year before advancing to trial manufacturing phases.
- This partnership strengthens Google’s strategy to establish TPUs as viable alternatives to Nvidia’s GPU dominance.
- Alphabet’s Q1 2025 financial results, scheduled for April 29, could reveal additional details about AI chip infrastructure spending.
Marvell Technology (MRVL) experienced a significant premarket surge this past Sunday after The Information disclosed that Alphabet’s Google has entered discussions with the semiconductor firm to jointly engineer two cutting-edge AI processors.
The stock was up 6.3% as of roughly 4:38 a.m. ET, giving investors an early lift heading into the trading week.
Based on the publication’s sources—two individuals with direct knowledge of the negotiations—the firms are developing a memory processing unit (MPU) engineered to operate in tandem with Google’s established tensor processing unit (TPU) architecture. The second processor represents an entirely new TPU variant purpose-built for artificial intelligence inference operations.
The strategic timeline calls for finalizing the memory processor’s architecture within the next year, followed by transitioning into experimental production stages.
Google’s Silicon Strategy Expands
This collaboration represents part of a broader initiative. The search giant has been systematically constructing a comprehensive chip development ecosystem, partnering with technology leaders like Intel and Broadcom in addition to Marvell.
Throughout most of its existence, Google maintained TPUs exclusively for internal applications. This approach transformed in 2022 when its cloud infrastructure division assumed responsibility for external semiconductor distribution and began actively marketing TPUs to third-party clients.
Following that pivot, Google has accelerated both manufacturing capacity and commercial distribution. Last year it went a step further, delivering TPUs directly into customers’ own data centers rather than offering them solely through its cloud platform—a substantial shift in its go-to-market approach.
Earlier this month, Google formally unveiled TorchTPU, an initiative to ensure native compatibility between its processors and PyTorch, the dominant AI development framework. The move lowers migration barriers for engineers who have built their workflows around PyTorch and are evaluating alternatives to Nvidia’s ecosystem.
TPU commercialization has evolved into an increasingly significant revenue stream for Google Cloud as the corporation seeks to demonstrate that its artificial intelligence infrastructure investments are yielding tangible financial outcomes.
Nvidia’s Competitive Position
Nvidia maintains its commanding position in AI computational infrastructure, yet Google’s strategic maneuvers are intensifying competitive dynamics.
The Marvell collaboration strengthens Google’s position in inference processors—a segment where Nvidia has also been expanding aggressively. Nvidia is reportedly developing new AI inference chips that incorporate technology from Groq.
With Google, Marvell, Intel, and Broadcom all pursuing similar goals, the inference chip market is becoming increasingly crowded.
Google’s first-quarter financial disclosures are scheduled for April 29. Market analysts will scrutinize guidance regarding TPU production scaling ambitions, cloud infrastructure revenue trajectory, and strategic implications of the Marvell discussions for the company’s semiconductor development roadmap.