TL;DR
- Microsoft locked in access to 100,000+ Nvidia GB300 chips via $19.4 billion Nebius partnership announced September 8
- Total neocloud spending exceeds $33 billion across Nebius, CoreWeave, Nscale, and Lambda providers
- Strategy allows Microsoft to reserve its own data centers for customer-facing AI services while using external capacity for internal projects
- Scott Guthrie confirms Microsoft operates in “land-grab mode” to avoid AI capacity constraints
- First AI models under Mustafa Suleyman were trained at CoreWeave’s Oregon facility
Microsoft is making a massive bet on external computing infrastructure. The company secured access to over 100,000 Nvidia GB300 chips through a $19.4 billion agreement with neocloud provider Nebius Group NV.

The deal was announced September 8 but lacked specific details at the time. Sources familiar with the arrangement reveal the chips will power Microsoft’s internal large language model development and consumer AI assistant projects.
This represents just one piece of Microsoft’s broader infrastructure strategy. The company has committed more than $33 billion to neocloud providers including Nebius, CoreWeave, Nscale, and Lambda.
These neoclouds are smaller infrastructure companies specializing in AI-focused computing power rentals. Microsoft’s willingness to rely on relative newcomers for critical infrastructure marks a shift in cloud computing strategy.
Why Microsoft Needs External Computing Power
Scott Guthrie, who oversees Microsoft’s cloud operations, described the company’s approach as operating in “land-grab mode” for AI capacity.
The reason is straightforward: Microsoft faces serious data center capacity constraints.
Building new facilities takes considerable time. Neoclouds have already solved logistical challenges around power procurement and chip availability.
Renting from these providers accelerates Microsoft’s timeline. The strategy also frees up Microsoft’s own data centers for customer-facing AI services, which generate revenue.
Microsoft’s first foundation AI models built under consumer AI chief Mustafa Suleyman were trained at a CoreWeave data center near Portland, Oregon. This confirms the company is running substantial workloads on external infrastructure.
The financial structure offers advantages beyond speed. Renting capacity allows Microsoft to book certain expenses as operating costs rather than capital expenditures.
Bernstein analyst Mark Moerdler noted this affects cash flow, tax treatment, and profit reporting. The arrangement provides more flexibility than owning all infrastructure outright.
Microsoft Outpaces Cloud Competitors
Microsoft isn’t limiting neocloud use to model training. Recent agreements with Nscale in the UK and Norway support AI service delivery in those markets.
The company previously rented Oracle capacity to power the AI-enhanced Bing search engine.
Competitors haven’t announced similar neocloud partnerships at Microsoft’s scale. Amazon hasn’t disclosed comparable deals.
Google rents some CoreWeave capacity for OpenAI-related work, according to Reuters. But the volume appears smaller than Microsoft’s commitments.
Guthrie explained why Microsoft needs more capacity. Demand comes from products like GitHub Copilot and from OpenAI’s ChatGPT, which serves hundreds of millions of users.
“Amazon doesn’t have anywhere near that scale. Nor does Google, in terms of total users with intensity, especially during business hours,” Guthrie stated.
Continued Infrastructure Investment
Microsoft continues building its own data centers despite the neocloud spending. On Tuesday, the company announced a second development phase for its Racine, Wisconsin, facility.
The expansion will bring total utility power capacity to at least 900 megawatts. That rivals the output of a typical nuclear reactor.
Microsoft paused several projects earlier this year during a computing needs evaluation. Guthrie said the company will keep adjusting infrastructure plans based on demand patterns and regulatory requirements.