TL;DRs:
- OpenAI plans to buy 26GW of AI chips, spending hundreds of billions even as it projects major losses.
- Nvidia will invest up to $100B in OpenAI, with OpenAI buying chips and Nvidia gaining equity.
- TSMC’s limited capacity could slow OpenAI’s chip acquisition plans, creating industry bottlenecks.
- Analysts see OpenAI’s high-risk bet as either a visionary move or a potential bubble trigger.
OpenAI, the San Francisco-based creator of ChatGPT, is embarking on one of the most aggressive spending sprees in tech history.
In less than a month, the company has reportedly committed to acquiring 26 gigawatts' worth of advanced AI processors from Nvidia, AMD, and Broadcom, an order whose hardware would consume as much power as roughly 20 nuclear reactors.
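The "20 nuclear reactors" comparison can be sanity-checked with simple arithmetic. A minimal sketch, assuming (this figure is not in the article) that a typical large reactor delivers roughly 1.0 to 1.4 GW of electrical output:

```python
# Back-of-the-envelope check of the "20 nuclear reactors" comparison.
# Assumption (not from the article): a typical large reactor produces
# roughly 1.0-1.4 GW of electrical output.
TOTAL_GW = 26
REACTOR_GW_LOW, REACTOR_GW_HIGH = 1.0, 1.4

reactors_high = TOTAL_GW / REACTOR_GW_LOW   # upper bound on reactor-equivalents
reactors_low = TOTAL_GW / REACTOR_GW_HIGH   # lower bound on reactor-equivalents

print(f"{reactors_low:.0f} to {reactors_high:.0f} reactor-equivalents")
```

At those assumed outputs, 26 GW works out to roughly 19 to 26 reactor-equivalents, which makes the article's "20 reactors" framing plausible.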
Industry experts, including Gil Luria of D.A. Davidson, estimate that OpenAI’s chip ambitions could require “hundreds of billions of dollars” in total spending. Despite generating around US$13 billion in revenue this year, OpenAI does not expect to turn a profit until 2029, forecasting billions in losses as it pours resources into expanding its AI infrastructure.
The company’s strategy highlights its determination to stay ahead in the rapidly escalating AI arms race, one defined by the need for faster, more efficient chips to train ever-larger models.
Nvidia and AMD Deepen Financial Ties
Nvidia, the dominant force in AI chipmaking, is reportedly preparing to invest up to US$100 billion in OpenAI over several years.
The investment structure is unusual: OpenAI will use the funds to buy Nvidia's hardware, while Nvidia gains an equity stake in OpenAI, tying the two companies' fortunes even closer together.
AMD, not to be left behind, has also offered OpenAI an option to acquire equity, signaling the semiconductor industry's eagerness to align with one of AI's fastest-rising giants. Such arrangements blur the lines between vendor and investor, creating what analysts describe as an "AI mutual dependency": a world where chip supply and startup growth are tightly intertwined.
Infrastructure Bottlenecks Loom Ahead
Even with vast funding, OpenAI's ambitions face logistical limits. Global Chip-on-Wafer-on-Substrate (CoWoS) packaging capacity, crucial for assembling advanced AI chips, cannot easily scale to match OpenAI's 26GW commitment.
By 2026, global CoWoS production is projected to reach just 1.3 million 12-inch wafers, with Nvidia alone expected to reserve nearly half. Taiwan Semiconductor Manufacturing Company (TSMC), the primary chip fabricator for Nvidia and AMD, could reach 88,000 to 93,000 wafers per month by late 2026. But much of its 3nm and 5nm capacity is already locked by Apple, Qualcomm, and MediaTek, leaving limited headroom for OpenAI’s orders.
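The capacity figures above can be cross-checked with rough arithmetic. A minimal sketch, assuming (the article does not state this) that the 1.3 million wafer figure is an annual total:

```python
# Rough consistency check of the CoWoS capacity figures in the article.
# Assumption (not stated in the article): the 1.3 million wafer figure
# is an annual total, and "nearly half" is modeled as 50%.
tsmc_monthly_low, tsmc_monthly_high = 88_000, 93_000

tsmc_annual_low = tsmc_monthly_low * 12    # 1,056,000 wafers/year
tsmc_annual_high = tsmc_monthly_high * 12  # 1,116,000 wafers/year

global_annual = 1_300_000
nvidia_share = global_annual * 0.5         # ~650,000 wafers reserved by Nvidia

print(tsmc_annual_low, tsmc_annual_high, int(nvidia_share))
```

Under those assumptions, TSMC's projected monthly run rate annualizes to roughly 1.06 to 1.12 million wafers, consistent with it supplying most, but not all, of the projected 1.3 million global total, with Nvidia alone reserving on the order of 650,000 wafers.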
Experts warn that even modest disruptions in supply chains could delay OpenAI’s expansion, a scenario that could ripple across the global AI ecosystem.
Betting on the Future of AI
OpenAI’s aggressive chip spending strategy contrasts sharply with rivals like Google and Meta, both of which fund AI research through profitable advertising empires. For OpenAI, whose business model still leans on licensing deals and cloud partnerships, such front-loaded spending is a gamble on scale and time.
Some analysts warn the AI boom could turn into a speculative bubble, inflated by massive hardware and data center investments. Others, however, view OpenAI’s move as a long-term play that could cement its dominance once AI-driven applications become ubiquitous across industries.
In essence, OpenAI is wagering that its early and costly commitment to cutting-edge chips will pay off, not just in performance, but in market leadership.