TLDRs:
- Nvidia CEO Jensen Huang assures Blackwell chip supply can meet global demand despite high usage.
- Vera Rubin AI systems expected to generate $35 billion, driving data center market growth.
- Nvidia’s China business faces limits due to U.S. export restrictions on advanced chips.
- System integration and rack-scale deployments remain the main constraints for peak performance users.
Nvidia CEO Jensen Huang has clarified that the company has enough Blackwell chips to meet the growing global demand for AI and data center applications.
Earlier reports of “sold out” inventory had raised concerns in the market, but Huang explained to Bloomberg Television Thursday that the term referred to customers fully utilizing existing stock rather than an actual shortage. The company has been carefully managing its supply chain to maintain a steady flow of these high-performance processors.
“We have planned our supply chain incredibly well,” Huang said. “We have a bunch of Blackwells to sell.”
Strong third-quarter results support Nvidia’s supply capabilities, and the company anticipates surpassing its long-term goal of US$500 billion in cumulative sales from new chips and systems. Huang highlighted that the increased capabilities of Nvidia’s products are helping the company capture a larger portion of spending in hyperscale data centers.
Vera Rubin Chips Expected to Boost Revenue
Looking ahead, Nvidia is preparing to roll out its Vera Rubin generation of AI chips. Huang projected that this upcoming line could generate approximately US$35 billion in revenue for every roughly US$55 billion per gigawatt that data centers typically invest in AI computing capacity.
While Nvidia's chips form the core computational engine, the remainder of each gigawatt's cost is allocated to power, cooling, networking, and integration.
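Huang's figures imply a rough cost split per gigawatt of capacity. A back-of-the-envelope sketch (the variable names and the derived percentage are illustrative, computed from the article's numbers, not an official Nvidia breakdown):

```python
# Rough per-gigawatt cost split implied by Huang's figures
# (illustrative only; not an official Nvidia breakdown).
TOTAL_PER_GW_BILLION = 55.0   # typical total data center spend per GW (US$ billions)
NVIDIA_SHARE_BILLION = 35.0   # portion projected to go to Vera Rubin chips/systems

remainder = TOTAL_PER_GW_BILLION - NVIDIA_SHARE_BILLION
share_pct = NVIDIA_SHARE_BILLION / TOTAL_PER_GW_BILLION * 100

print(f"Nvidia share: ${NVIDIA_SHARE_BILLION:.0f}B (~{share_pct:.0f}% of spend)")
print(f"Power, cooling, networking, integration: ${remainder:.0f}B")
# → Nvidia share: $35B (~64% of spend)
# → Power, cooling, networking, integration: $20B
```

On these numbers, roughly two-thirds of each gigawatt's budget would flow to Nvidia, with the rest going to the surrounding infrastructure.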
The Vera Rubin platform is poised to reshape data center deployments, offering significantly higher inference performance. However, full utilization of these systems depends on integrating advanced cooling and power solutions, which has created near-term deployment constraints. Suppliers such as Vertiv, Super Micro, and Semtech have adjusted their forecasts due to the complex requirements of large-scale rack systems.
China Sales Remain Restricted
Despite global demand, Nvidia’s operations in China continue to face limitations due to U.S. export controls on advanced computing chips.
Currently, the company does not expect to make any data center chip sales in the country. Huang noted ongoing discussions with both U.S. and Chinese officials about the possibility of resuming limited sales in the future, though no timeline has been established.
These restrictions highlight a broader challenge for Nvidia as it navigates geopolitical complexities while trying to meet surging demand for AI infrastructure worldwide.
System Integration Remains Key Bottleneck
While Nvidia can supply Blackwell chips, full-scale deployment is constrained by system integration requirements.
The GB200 NVL72 rack, which leverages Blackwell GPUs for peak AI performance, requires direct liquid cooling, high-speed networking, and sophisticated power management ICs. As a result, some buyers seeking maximum inference gains may face delays until full racks are available.
Standard HGX servers using B200 GPUs can address near-term demand, but peak performance in large-scale deployments remains tied to complex integration efforts. With roughly 35,000 NVL72 racks expected to ship in 2025, infrastructure providers and integrators are playing a critical role in enabling Nvidia's customers to access the full capabilities of its AI ecosystem.
That said, Nvidia’s assurances on Blackwell chip supply, combined with the upcoming Vera Rubin platform, signal the company’s continued dominance in AI and data center markets. While system integration challenges and geopolitical constraints remain, the tech giant appears well-positioned to capitalize on surging global demand for high-performance computing.