TL;DR:
- OpenAI raises paid user compute margins to 70%, still not profitable.
- Talks with Amazon could bring $10B investment and chip migration.
- Competition from Google and Anthropic pressures OpenAI’s AI growth strategy.
- Efficiency improvements may not offset high infrastructure and data costs.
OpenAI has reportedly increased its compute margin for paid customers to 70% as of October 2025, up from 52% at the end of 2024. Compute margin represents the revenue share left after deducting the costs of running AI models. While this is a substantial improvement, the company has yet to post an overall profit, as operational and infrastructure costs remain exceptionally high.
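The margin arithmetic above can be made concrete with a minimal sketch. The function name and the dollar figures are illustrative assumptions for demonstration; only the 70% margin figure comes from the reporting.

```python
def compute_margin(revenue: float, inference_cost: float) -> float:
    """Compute margin: share of revenue left after model-serving costs."""
    return (revenue - inference_cost) / revenue

# Illustrative: at a 70% margin, every $100 of paid revenue implies
# roughly $30 of compute cost (down from ~$48 at a 52% margin).
margin = compute_margin(100.0, 30.0)
print(f"{margin:.0%}")  # -> 70%
```

The margin captures only serving costs; training runs, data licensing, and data center build-out sit outside it, which is why a 70% compute margin can coexist with an overall loss.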
Despite the margin gains, OpenAI’s financial picture remains complex. By July 2025, ChatGPT had roughly 35 million paid subscribers, generating an estimated $12 billion in annual recurring revenue (ARR). However, the firm recorded losses of $11.5 billion during the same period, indicating that improved margins alone may not sufficiently cover the costs of compute and data center operations.
Annualized revenue projections suggest the company could reach $20 billion by late 2025, though user growth has slowed sharply, from 42% in early 2025 to 13% by September, putting additional pressure on profitability.
Amazon Deal Could Shift AI Costs
In response to rising operational costs, OpenAI is reportedly in early discussions with Amazon to secure at least $10 billion in funding. As part of the deal, the company may migrate its AI workloads to Amazon Web Services’ (AWS) advanced chips, potentially reducing per-token inference costs and boosting overall efficiency. If completed, this move could also value OpenAI at over $500 billion.
AWS's second-generation Trainium2 chips are reported to deliver 30–40% better price performance than comparable GPU-based instances such as the EC2 P5e and P5en. The third-generation Trainium3 is said to produce up to five times more output tokens per megawatt than its predecessor, enabling more cost-effective training of large-scale frontier models. The shift could also attract enterprise clients seeking better ROI on AI workloads.
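A back-of-envelope sketch shows what those ratios would mean for unit costs. All baselines below are made-up placeholders; only the 30–40% price-performance and 5x tokens-per-megawatt figures come from the reporting above.

```python
# Illustrative unit-cost arithmetic. A 40% price-performance improvement
# means the same dollar buys 1.4x the work, so cost per token falls to 1/1.4.
gpu_cost_per_1m_tokens = 1.00                     # hypothetical GPU baseline, $
trn2_cost_per_1m_tokens = gpu_cost_per_1m_tokens / 1.40

# Reported 5x output tokens per megawatt, Trainium3 vs Trainium2 (normalized).
trn2_tokens_per_mw = 1.0
trn3_tokens_per_mw = 5 * trn2_tokens_per_mw

print(f"Trainium2 cost per 1M tokens: ${trn2_cost_per_1m_tokens:.2f}")
print(f"Trainium3 vs Trainium2 throughput per MW: {trn3_tokens_per_mw:.0f}x")
```

At scale, even a ~29% per-token cost reduction compounds across billions of daily inference requests, which is the efficiency lever a chip migration would target.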
Competitive Pressures Intensify
OpenAI faces increasing competition from industry rivals including Google and Anthropic. While OpenAI’s compute margins for paid accounts are reportedly higher than Anthropic’s, the latter has been noted for greater efficiency in server spending. Meanwhile, the majority of ChatGPT users remain on the free tier, highlighting a significant challenge in converting scale into sustainable revenue.
This competitive landscape underscores the importance of margin improvements, particularly as overall user growth slows. Enterprises evaluating AI solutions will likely compare per-token costs, server efficiency, and performance benchmarks, giving OpenAI a strong incentive to optimize its infrastructure further.
Benchmarking and Enterprise Adoption
Cloud consultancies and system integrators are increasingly offering Large Language Model (LLM) benchmarks on AWS chips. Open-source tools such as vLLM, a high-throughput inference engine, and lm-evaluation-harness, a suite for evaluating model quality, are helping enterprises measure both the throughput and the accuracy of inference workloads.
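The core metric such benchmarks report, output tokens per second, can be sketched with a minimal harness. The `run_inference` stub below is a stand-in for a real engine call (e.g. vLLM's generate API), and the whitespace tokenizer is a simplification; both are assumptions for illustration.

```python
import time

def run_inference(prompt: str) -> str:
    """Stand-in for a real inference engine call; real harnesses would
    invoke an engine such as vLLM here."""
    time.sleep(0.001)   # simulate model latency
    return prompt       # dummy echo output

def tokens_per_second(prompts, tokenizer=str.split) -> float:
    """Rough throughput: output tokens produced per wall-clock second."""
    start = time.perf_counter()
    total_tokens = sum(len(tokenizer(run_inference(p))) for p in prompts)
    elapsed = time.perf_counter() - start
    return total_tokens / elapsed

tps = tokens_per_second(["hello world", "benchmarks guide buying decisions"] * 50)
print(f"~{tps:.0f} output tokens/sec")
```

Production benchmarks add batching, concurrency, and latency percentiles, but the unit of comparison, tokens per second per dollar (or per megawatt), stays the same.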
By transitioning to AWS Trainium chips, OpenAI could demonstrate cost efficiency and improved energy performance, critical factors for enterprise adoption. These metrics are likely to be pivotal as the firm navigates investor scrutiny, high operational costs, and mounting competitive pressures.
Conclusion
OpenAI’s recent margin gains demonstrate its ability to improve operational efficiency, but profitability remains out of reach due to high infrastructure and compute costs. A potential $10 billion Amazon partnership could alleviate some pressure, while strategic chip migration may enable enterprise clients to benefit from lower inference costs.
As competition intensifies and user growth moderates, OpenAI’s ability to balance scale, cost, and revenue will define its path toward long-term sustainability.