TLDR
- OpenAI signed a $38 billion deal with Amazon Web Services that gives it access to hundreds of thousands of Nvidia GPUs, with capacity ramping through the end of 2026 and room to expand beyond.
- Amazon stock jumped about 5% following the announcement on Monday morning.
- This marks OpenAI’s first major partnership with AWS and represents a move away from exclusive reliance on Microsoft’s cloud infrastructure.
- OpenAI will immediately begin running AI workloads in AWS data centers on Nvidia chips, including Blackwell models.
- The deal adds to OpenAI’s recent string of cloud and chip partnerships totaling roughly $1.4 trillion in infrastructure commitments.
OpenAI signed a $38 billion deal with Amazon Web Services on Monday. The agreement gives the AI startup access to hundreds of thousands of Nvidia graphics processing units housed in AWS data centers.
Amazon stock climbed approximately 5% after the announcement. The partnership marks OpenAI’s first major contract with the cloud computing market leader.
OpenAI will begin running workloads on AWS infrastructure immediately. The company plans to scale to full computing capacity under the agreement by the end of 2026.
The deal initially uses existing AWS data centers in the United States; Amazon will build out additional infrastructure for OpenAI in the coming years.
Dave Brown, vice president of compute and machine learning services at AWS, said AWS is deploying completely separate capacity for OpenAI, some of which is already available and in use.
This partnership represents a shift for OpenAI. Microsoft, which first backed OpenAI in 2019 and has invested a total of $13 billion, served as the company's exclusive cloud provider until earlier this year.
In January, Microsoft moved to a right-of-first-refusal arrangement for new capacity requests. Last week, that preferential status expired under newly negotiated commercial terms between the two companies.
Expanding Cloud Partnerships
OpenAI has been on a dealmaking spree lately. The company announced roughly $1.4 trillion worth of buildout agreements with Nvidia, Broadcom, Oracle, and Google.
Some skeptics warn these deals point to an AI bubble, and questions have emerged about whether the United States has enough power and resources to fulfill these commitments.
OpenAI will still spend heavily with Microsoft. The company committed to purchasing an incremental $250 billion of Azure services last week.
For Amazon, this deal is notable given its close ties to Anthropic, OpenAI's chief rival. Amazon has invested billions in Anthropic and is building an $11 billion data center campus in New Carlisle, Indiana, exclusively for Anthropic workloads.
AWS CEO Matt Garman said the breadth and immediate availability of optimized compute shows why AWS is positioned to support OpenAI’s workloads. The infrastructure will power both ChatGPT’s real-time responses and training of next-generation models.
The current agreement explicitly covers Nvidia chips, including two Blackwell models, with room to incorporate additional silicon later, including Amazon's custom-built Trainium chips.
Companies like Peloton, Thomson Reuters, Comscore, and Triomics already use OpenAI models on AWS, applying them to tasks ranging from coding to scientific analysis.
Brown described the arrangement as a straightforward customer relationship: OpenAI committed to buying compute capacity, and AWS is charging for that capacity.
Path to Going Public
The AWS agreement is another step in OpenAI's preparation to eventually go public. By diversifying cloud partners and locking in long-term capacity, OpenAI signals independence and operational maturity.
CEO Sam Altman acknowledged in a recent livestream that an IPO is “the most likely path” given OpenAI’s capital needs. CFO Sarah Friar framed the recent corporate restructuring as a necessary step toward going public.
Amazon reported more than 20% year-over-year revenue growth at AWS last week, beating analyst estimates. AWS completed a massive AI data center project last Wednesday and plans to provide Anthropic with one million custom AI chips by the end of 2025.