TL;DR
- Foxconn and Nvidia will open Taiwan’s largest GPU supercomputing center by 2026, powered by Blackwell GB300 chips.
- The facility becomes Asia’s first GB300-based AI data center, designed for high-density liquid-cooled GPU clusters.
- Nvidia promotes compute-rental models as Foxconn ramps up AI rack production to 1,000 units per week.
- Taiwan’s cooling-tech market accelerates as next-gen systems target extremely low PUE for energy-intensive GPU workloads.
Foxconn and Nvidia are teaming up to build a US$1.4 billion supercomputing hub in Taiwan, an ambitious project set to go live in the first half of 2026.
The facility, announced during Foxconn’s recent tech event, is positioned to become Taiwan’s most powerful GPU cluster to date and a cornerstone of the country’s fast-expanding AI infrastructure strategy.
The new supercomputing center will be powered by Nvidia’s cutting-edge Blackwell GB300 chips, marking the first time this next-generation GPU architecture will anchor a full-scale AI data center in Asia. Neo Yao, CEO of Visonbay.ai, Foxconn’s newly established AI supercomputing division, said the project underscores Taiwan’s commitment to scaling high-performance computing at a global level.
Asia’s First GB300 Facility
According to Foxconn, the upcoming center will not only be the largest GPU cluster in Taiwan but also the first GB300 AI data center in Asia.
This leap places Taiwan ahead of regional competitors as demand for next-gen compute explodes across sectors such as robotics, cloud AI, autonomous systems, and foundation-model training.
The GB300 racks are expected to support 800 Gb/s of networking per GPU and 130 TB/s of NVLink bandwidth, pushing the boundaries of interconnect performance. Foxconn’s production expertise is also a major component of the rollout: the company is Nvidia’s primary manufacturer of AI server racks, a partnership that has deepened as global orders for GPU clusters accelerate.
Liquid Cooling Becomes the New Standard
One of the most important aspects of the new facility is its role in stress-testing advanced liquid-cooling technology in Taiwan. Each GB300 NVL72 rack draws approximately 120 kW, a massive thermal load that requires next-generation cooling systems to maintain efficiency.
Vendors aim for Power Usage Effectiveness (PUE) near 1.05, with partial PUE around 1.01, using two-phase direct liquid cooling (DLC). The project follows Chief Telecom’s successful 45-day retrofit of a legacy data center, signaling how quickly modern cooling solutions can be deployed in Taiwan.
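To put those PUE targets in perspective, here is a minimal sketch of what they imply in overhead power per rack. It uses the standard PUE definition (total facility power divided by IT equipment power) and the figures quoted above; the 1.5 baseline is an assumed typical air-cooled value for comparison, not from the article.

```python
# PUE = total facility power / IT equipment power (standard definition).
# Rack draw (~120 kW) and the 1.05 / 1.01 targets are the article's figures;
# the 1.5 baseline is an assumed typical air-cooled comparison point.

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Non-IT power (cooling, distribution losses) implied by a PUE value."""
    return it_load_kw * (pue - 1.0)

rack_kw = 120.0  # approximate draw of one GB300 NVL72 rack
for pue in (1.5, 1.05, 1.01):
    print(f"PUE {pue}: {overhead_kw(rack_kw, pue):.1f} kW overhead per rack")
```

At PUE 1.05, a 120 kW rack adds only about 6 kW of cooling and distribution overhead, versus roughly 60 kW at an air-cooled 1.5, which is why liquid cooling is central to the facility's design.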
The country’s data-center-cooling market is projected to grow at a 12.32% CAGR through 2031, driven by rising rack-density requirements and the shift toward GPU-heavy compute environments.
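As a rough illustration of what that growth rate compounds to, the sketch below applies the quoted 12.32% CAGR over an assumed six-year window (the article gives no base-year market size, so the base is normalized to 1.0).

```python
# Compound growth implied by the article's 12.32% CAGR figure.
# The six-year horizon (e.g. 2025 -> 2031) and normalized base are assumptions.
cagr = 0.1232
years = 6
growth_factor = (1 + cagr) ** years
print(f"Market multiplier over {years} years: {growth_factor:.2f}x")
```

Compounding at that rate roughly doubles the market over six years.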
Rental Compute Models Take Center Stage
During the event, Nvidia vice president Alexis Bjorlin highlighted a trend gaining momentum: renting compute resources instead of building private infrastructure. With AI hardware becoming more complex and significantly more expensive, flexible access to GPU clusters could give enterprises a more predictable and scalable path to high-performance computing.
Foxconn is well-positioned to meet this demand. The company currently manufactures about 1,000 AI racks per week, and shipments of GB300 systems are expected to jump by 300% in Q3 2025. This level of production allows Foxconn to serve not only hyperscalers but also emerging AI companies seeking fractional GPU access.
Foxconn also plans to invest US$2–3 billion annually into AI technologies, reinforcing its strategy to transition from a traditional electronics manufacturer into a global AI infrastructure powerhouse.


