TL;DR:
- Leaked documents reveal OpenAI paid Microsoft $866M in 2025 revenue share, nearly double the prior year.
- Estimates suggest OpenAI generated over $4.3B in the first nine months of 2025 alone.
- Inference spending soared to $8.7B in 2025, significantly outpacing OpenAI’s reported revenues.
- Rising compute costs highlight escalating pressure on AI firms and boost demand for cost-optimization tools.
Newly surfaced internal documents have shed light on the financial relationship between OpenAI and Microsoft, revealing a dramatic jump in the revenue share paid to the tech giant in 2025.
According to leaked files cited by tech commentator Ed Zitron, OpenAI transferred approximately $865.8 million to Microsoft during the first three quarters of the year, nearly double the $493.8 million recorded in 2024.
While neither OpenAI nor Microsoft has publicly confirmed the specifics of their commercial arrangements, the leaked information aligns with long-standing speculation that Microsoft receives roughly 20% of OpenAI’s revenue.
Back-of-the-envelope calculations based on that percentage suggest OpenAI generated roughly $2.5 billion in 2024 and more than $4.3 billion between January and September 2025. Some analysts believe the totals could be significantly higher, depending on how contractual thresholds and product tiers are structured.
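The arithmetic behind those estimates is straightforward division, as sketched below. The 20% share rate is the rumored figure discussed above, not a confirmed contract term, and the dollar amounts are the payments cited from the leaked documents.

```python
# Back-of-the-envelope check of the implied revenue figures,
# assuming the rumored 20% revenue-share rate (unconfirmed).

REVENUE_SHARE_RATE = 0.20  # rumored Microsoft share of OpenAI revenue

def implied_revenue(share_paid: float, rate: float = REVENUE_SHARE_RATE) -> float:
    """Gross revenue implied by a given revenue-share payment."""
    return share_paid / rate

# Payments to Microsoft reported in the leaked documents
payments = {
    "2024 (full year)": 493.8e6,   # $493.8M
    "2025 (Jan-Sep)": 865.8e6,     # $865.8M
}

for period, paid in payments.items():
    print(f"{period}: implied revenue ~ ${implied_revenue(paid) / 1e9:.2f}B")
```

Dividing $865.8M by 0.20 gives about $4.33B for the first nine months of 2025, matching the "more than $4.3 billion" estimate; the 2024 payment implies roughly $2.47B.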
Infrastructure costs outpace income
But the financial picture is more complex beneath the surface. The leaked documents also indicate that OpenAI’s operational spending, specifically the cost of inference, or running its AI models in production, has ballooned. The company spent an estimated $3.8 billion on inference in 2024, a figure that skyrocketed to $8.7 billion through the third quarter of 2025.
This surge places OpenAI in a peculiar position: despite rising revenues and unprecedented product adoption, its expenses for delivering AI outputs appear to exceed what it earns.
The mismatch has prompted industry observers to question the sustainability of OpenAI’s current economics, especially given CEO Sam Altman’s earlier claim that annual revenue was “well more than” $13 billion, a figure that sits uneasily alongside the amounts implied by Microsoft’s revenue-share receipts.
Multi-cloud strategy expands
While Microsoft Azure remains OpenAI’s primary infrastructure backbone, the company has diversified its compute footprint across multiple cloud providers.
Partnerships with CoreWeave and Oracle have been active for some time, while more recent deals include engagements with AWS and Google Cloud. This growing multi-cloud strategy is aimed at ensuring capacity, lowering risk, and potentially improving pricing leverage as competition among cloud providers intensifies.
Sources familiar with the relationship say Microsoft also shares revenue with OpenAI from both Bing and the Azure OpenAI Service, though the precise amounts remain undisclosed. These additional revenue flows highlight the depth of the companies’ interdependence and underscore just how intertwined the two operations have become.
Cost pressure sparks new opportunities
The spiraling cost of AI inference has become a defining challenge for OpenAI and the broader industry. As enterprise adoption grows and users demand increasingly capable models, compute bills climb faster than the revenue those models generate. This dynamic is now fueling a wave of startups focused on AI FinOps, building tools that help companies monitor, optimize, and reduce their inference costs.
These FinOps platforms, alongside efficiency-focused LLM gateways that route traffic to the cheapest or fastest available model, are rapidly gaining traction. For cloud providers and infrastructure startups, the mounting financial strain faced by AI developers represents a lucrative opening.
Vendors capable of simplifying model deployment across multiple clouds, or helping teams shift workloads away from the priciest providers, are well positioned to capitalize on the moment.
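The core idea behind a cost-optimizing LLM gateway can be illustrated with a small routing sketch: given a catalog of models with prices and capability tiers, send each request to the cheapest model that meets the required tier. The model names, prices, and tiers below are illustrative placeholders, not quotes from any real provider.

```python
# Minimal sketch of cost-based routing in an LLM gateway.
# Catalog entries are hypothetical; real gateways would also
# weigh latency, availability, and per-provider rate limits.

from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    cost_per_1k_tokens: float  # blended input/output price, USD (illustrative)
    tier: int                  # 1 = basic, 2 = mid, 3 = frontier

CATALOG = [
    ModelOption("small-model", 0.0004, tier=1),
    ModelOption("mid-model", 0.003, tier=2),
    ModelOption("frontier-model", 0.015, tier=3),
]

def route(required_tier: int) -> ModelOption:
    """Return the cheapest model meeting the required capability tier."""
    eligible = [m for m in CATALOG if m.tier >= required_tier]
    if not eligible:
        raise ValueError("no model satisfies the requested tier")
    return min(eligible, key=lambda m: m.cost_per_1k_tokens)

# A routine request goes to the cheapest eligible model; a task that
# needs frontier capability is forced onto the expensive tier.
print(route(1).name)  # -> small-model
print(route(3).name)  # -> frontier-model
```

Even this toy policy captures the economic lever these gateways offer: workloads that do not need a frontier model can be served at a fraction of the cost, which is exactly the pressure-relief valve the inference-spending figures above suggest the market needs.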


