TLDRs:
- Nvidia CEO Jensen Huang insists AI expansion remains sustainable despite market concerns.
- Rapid GPU adoption fuels growth, with older CPU systems shifting to AI chips.
- Hyperscalers plan $300 billion in 2025 AI infrastructure investment, signaling long-term confidence.
- Data centers adapt with liquid cooling and retrofits to handle increased AI workloads.
Nvidia CEO Jensen Huang addressed concerns of an AI bubble during the company’s Q3 2025 earnings call on November 19, emphasizing the durability of the AI sector’s growth.
Investors and analysts had questioned whether the rapid influx of capital into AI-focused data centers could yield sustainable returns, with Nvidia at the heart of these concerns due to soaring demand for its graphics processing units (GPUs).
Huang argued that the adoption of AI infrastructure is not speculative but instead a natural response to evolving computing needs.
“There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different,” Huang said.
He added that emerging technologies, including so-called “agentic AI,” would further increase the need for computational power, positioning Nvidia as uniquely equipped to meet these demands.
GPU Demand Driving Infrastructure Shift
Nvidia’s GPUs have become central to AI workloads, replacing older CPU architectures in large-scale computing environments.
This transition is driven by the efficiency and speed of GPUs in handling complex AI tasks, including machine learning model training and inference. The growing reliance on GPUs underscores the tangible, utility-driven nature of the AI market, countering fears of speculative overvaluation.
Huang emphasized that GPU utilization and return on investment (ROI) will remain key indicators of market stability. While critics worry about stranded assets, Nvidia and its partners are betting on sustained activity to ensure hardware costs are amortized over standard 18-24 month periods.
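The amortization math behind this bet is straightforward: the lower the utilization, the more each productive GPU-hour has to carry of the hardware's purchase price. A rough sketch of that relationship, where the accelerator price and utilization rates are illustrative assumptions rather than Nvidia or operator figures:

```python
# Illustrative only: the $30,000 price tag and utilization rates below are
# assumed placeholders, not vendor or operator data.
def amortized_cost_per_gpu_hour(capex_usd, months, utilization):
    """Hardware cost borne by each *utilized* GPU-hour over the window."""
    total_hours = months * 30 * 24          # approximate calendar hours
    utilized_hours = total_hours * utilization
    return capex_usd / utilized_hours

# A hypothetical $30,000 accelerator amortized over 24 months:
for util in (0.9, 0.6, 0.3):
    cost = amortized_cost_per_gpu_hour(30_000, 24, util)
    print(f"utilization {util:.0%}: ${cost:.2f} per utilized GPU-hour")
```

Halving utilization doubles the effective hardware cost per useful hour, which is why utilization, not raw deployment volume, is the stability metric Huang points to.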
Hyperscalers Ramp Up Spending
Major hyperscale cloud providers are poised to spend roughly $300 billion in 2025 on data centers and GPUs, with projections exceeding $1 trillion by 2027.
These investments reflect strong confidence in AI’s long-term prospects, though success depends on keeping GPUs fully utilized.
Profitability per unit of compute hinges on factors like token pricing, operational costs, and minimizing idle GPU time. Teams are also shifting focus from brute-force AI pre-training toward post-training and real-time inference, adding dynamic compute demands as models respond to queries. Meeting adoption targets is crucial, as underutilization could create financial risks for data center operators and GPU vendors alike.
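The interplay of token pricing, operating costs, and idle time can be captured in a back-of-envelope margin calculation per GPU-hour. Every number below (throughput, token price, amortized hardware cost, power draw and price) is an assumed placeholder, chosen only to show how margins flip negative as utilization falls:

```python
# Hypothetical unit economics per GPU-hour; all inputs are illustrative
# assumptions, not real vendor, cloud, or operator figures.
def margin_per_gpu_hour(tokens_per_sec, price_per_m_tokens,
                        hw_cost_per_hour, power_kw, power_price_per_kwh,
                        utilization):
    """Revenue from tokens served minus hardware amortization and power."""
    revenue = tokens_per_sec * 3600 * utilization / 1e6 * price_per_m_tokens
    opex = hw_cost_per_hour + power_kw * power_price_per_kwh
    return revenue - opex

# Example: 2,000 tok/s served at $1 per million tokens,
# $2.50/hr amortized hardware, 1 kW draw at $0.10/kWh.
for util in (0.9, 0.5, 0.2):
    m = margin_per_gpu_hour(2_000, 1.0, 2.50, 1.0, 0.10, util)
    print(f"utilization {util:.0%}: margin ${m:+.2f} per GPU-hour")
```

Under these toy inputs the margin is healthy at 90% utilization but negative at 20%, which is the underutilization risk the article describes for data center operators and GPU vendors alike.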
Data Centers Prepare for AI Surge
The surge in GPU deployment has prompted significant upgrades in colocation and infrastructure facilities. Data centers are increasingly adopting liquid cooling systems to manage the heat generated by high-performance AI chips.
Liquid cooling adoption is expected to double from roughly 10% in 2024 to over 20% in 2025, aided by modular systems that scale efficiently without requiring full facility rebuilds.
Power availability and grid constraints are influencing site selection, favoring facilities that can increase capacity at existing locations. Suppliers like Schneider Electric and sustainable data center developers such as Edged are well-positioned to benefit from this wave, leveraging innovative cooling systems that reduce water usage while maintaining high performance.
These improvements not only support Nvidia’s GPU rollouts but also reinforce the infrastructure backbone essential for sustained AI growth.