Key Highlights
- Amazon Web Services will receive 1 million GPUs from Nvidia by the end of 2027.
- Deliveries commence this year and extend through 2027.
- The agreement encompasses networking hardware, Groq inference chips, and upcoming Blackwell and Rubin architectures.
- AWS plans to deploy seven distinct Nvidia chips for AI inference operations.
- Both NVDA and AMZN shares rose in extended trading after the disclosure.
The Amazon Web Services partnership represents one of Nvidia’s most substantial single-customer semiconductor contracts to date, and newly disclosed details make the scale of the arrangement clearer.
Ian Buck, Vice President at Nvidia, told Reuters that deliveries totaling 1 million GPUs will begin in 2025 and continue through the end of 2027. That schedule aligns with CEO Jensen Huang’s forecast of a $1 trillion addressable market for Nvidia’s Blackwell and Rubin processor families over the same period.
The arrangement extends far beyond simple GPU volume. Amazon Web Services is acquiring Nvidia’s comprehensive hardware ecosystem, including Spectrum-X and ConnectX networking infrastructure. This development carries particular significance since AWS has traditionally relied on proprietary networking solutions. Incorporating Nvidia’s networking technology into its facilities represents a strategic departure.
Amazon Web Services Commits Fully to Nvidia for Inference Tasks
AI inference — the computational process enabling AI models to generate outputs and execute tasks — forms the foundation of this partnership’s technical framework. Amazon Web Services intends to utilize seven distinct Nvidia chips for managing inference operations.
Buck stated directly: “Inference is hard. It’s wickedly hard. To be the best at inference, it is not a one chip pony. We actually use all seven chips.”
The Groq processors, which Nvidia recently unveiled following its $17 billion licensing arrangement with the AI semiconductor startup, constitute one component of this inference framework. They operate alongside six additional Nvidia processors to provide what the manufacturer characterizes as industry-leading inference capabilities.
Amazon Web Services will also implement Nvidia’s Blackwell processors and is anticipated to integrate the forthcoming Rubin platform upon availability. Neither Nvidia nor Amazon has revealed the monetary terms of this partnership.
Both companies’ shares experienced modest gains during Thursday’s after-hours session following the announcement. NVDA had declined approximately 1% during regular trading, while AMZN dropped around 0.5%.
Amazon Continues In-House Chip Development Simultaneously
Amazon engineers its own AI semiconductors, including the Trainium2 processing unit. Nonetheless, the company still relies on Nvidia for its most intensive computational workloads. The two strategies appear complementary rather than contradictory.
The partnership demonstrates ongoing substantial capital allocation toward AI infrastructure among leading cloud computing providers. AWS isn’t abandoning its custom hardware — instead, it’s supplementing that hardware with Nvidia equipment for particular high-intensity applications.
The Nvidia-AWS partnership was initially revealed this week without specific timeline details. Buck’s Thursday statements to Reuters delivered the most comprehensive information to date: deliveries beginning in 2025, extending through late 2027, and encompassing a diverse range of Nvidia offerings across computational processing, networking, and inference.