TL;DR:
- Nvidia and Foxconn unveil Vera Rubin servers to power gigawatt AI factories.
- 800V DC design could boost efficiency, reduce copper, and scale data centers.
- Liquid-cooled Vera Rubin servers attract bids for direct-to-chip retrofits.
- Over 50 partners, including CoreWeave and Oracle, back Nvidia’s AI platform.
Nvidia and Foxconn have joined forces to introduce the Vera Rubin NVL144 AI factory servers, a move set to redefine high-performance computing for large-scale artificial intelligence workloads.
The unveiling took place Monday at the OCP Global Summit, where Nvidia showcased its open-architecture MGX rack system designed to support expansive AI infrastructure. The collaboration underscores a growing industry trend toward energy-efficient, high-density AI data centers.
Vera Rubin NVL144 Unveiled at OCP Summit
The NVL144 servers form the backbone of Nvidia’s ambitious “gigawatt AI factories” initiative, aiming to deliver unprecedented AI compute power at scale.
Featuring 144 Rubin GPUs per rack, with the follow-on Kyber rack architecture slated to scale to 576 Rubin Ultra GPUs, these systems are built for modularity, enabling flexible deployment in diverse data center environments.
Foxconn is among the primary builders, while more than 50 partners, including Lambda, Nebius, CoreWeave, Oracle Cloud Infrastructure, and Together AI, contribute to designing 800-volt direct current (800 VDC) data centers compatible with the platform.
800V Infrastructure Promises Greater Efficiency
One of the key innovations of the Vera Rubin servers is the 800V DC infrastructure, which Nvidia claims can deliver 150% more power through the same copper, potentially removing hundreds of kilograms of copper busbars and generating significant cost savings.
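The copper savings follow from basic power-delivery physics. As a back-of-envelope sketch (the rack power and voltage figures below are assumed round numbers, not Nvidia's published analysis), raising the distribution voltage lowers the current needed for a given load proportionally, and the I²R conduction loss in a fixed conductor drops with the square of that current:

```python
# Back-of-envelope sketch with assumed round numbers (not Nvidia's analysis):
# for a fixed rack power draw, a higher distribution voltage lowers current
# proportionally, and I^2*R conduction losses drop quadratically.

def current_a(power_kw: float, voltage_v: float) -> float:
    """Current needed to deliver a given power at a given DC voltage."""
    return power_kw * 1000.0 / voltage_v

RACK_POWER_KW = 100.0  # hypothetical rack load

i_low = current_a(RACK_POWER_KW, 415.0)   # legacy-style distribution voltage
i_high = current_a(RACK_POWER_KW, 800.0)  # 800 VDC distribution

print(f"Current at 415 V: {i_low:.0f} A")
print(f"Current at 800 V: {i_high:.0f} A")
print(f"Relative I^2*R loss in the same conductor: {(i_high / i_low) ** 2:.2f}x")
```

Lower current is what lets the same busbar carry more power, or a smaller busbar carry the same power, which is where the claimed copper reduction comes from.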
Industry experts caution that broad adoption will depend on standards bodies such as the International Electrotechnical Commission (IEC) and Underwriters Laboratories (UL), along with compliance with the National Electrical Code (NEC), particularly regarding safety in accessible data center environments.
Despite these hurdles, partners like Foxconn and CoreWeave are actively designing facilities around this high-voltage architecture, signaling strong industry interest.
Liquid Cooling Spurs Industry Innovation
Another standout feature of the NVL144 is its full liquid cooling system, designed to operate with coolant supplied at up to 45°C while handling the extreme heat output of dense AI workloads.
This system has sparked interest in direct-to-chip (D2C) retrofits, allowing vendors to remove up to 80% of chip-level heat. Mechanical, electrical, and plumbing (MEP) engineering firms, along with liquid cooling OEMs, are now exploring opportunities for pilot projects and requests for proposals (RFPs).
The ROI for such systems is estimated at 2–4 years, making early procurement appealing for forward-looking data center operators.
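That 2–4 year range is consistent with a simple payback model: retrofit capital cost divided by annual operating savings. The sketch below uses hypothetical dollar figures (not vendor data) purely to show the arithmetic:

```python
# Simple-payback sketch with hypothetical figures (not vendor data):
# payback = retrofit capex / annual operating savings.

def payback_years(capex_usd: float, annual_savings_usd: float) -> float:
    """Years for cumulative savings to cover the retrofit cost."""
    return capex_usd / annual_savings_usd

# Hypothetical D2C retrofit: $1.2M capex, $400k/yr savings from reduced
# fan and chiller energy at dense AI loads -> 3-year simple payback.
print(f"Estimated payback: {payback_years(1_200_000, 400_000):.1f} years")
```

A real assessment would discount future savings and account for downtime during the retrofit, but the simple ratio is what operators typically quote in the 2–4 year range.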
Ecosystem Partners Support Next-Gen AI
Nvidia’s push extends beyond hardware, emphasizing ecosystem collaboration. Vertiv has unveiled reference architectures supporting the NVL144, while HPE has announced product support.
Cloud providers like Oracle and CoreWeave are building on the 800V design, while AI startups and enterprise clients explore integration with Kyber, Nvidia’s rack system designed to optimize GPU density and minimize copper usage.
The coordinated efforts signal a holistic approach to scaling AI infrastructure, combining power efficiency, modularity, and collaborative innovation.