TLDR
- Advanced Micro Devices and Celestica announced a partnership to build the Helios rack-scale AI infrastructure system
- Celestica takes responsibility for research, design, and production of scale-up networking switches within the platform
- These networking switches will link AMD’s upcoming Instinct MI450 Series GPUs for massive AI computing clusters
- The Helios system leverages Open Compute Project Open-Rack-Wide specifications and Ultra Accelerator Link over Ethernet technology
- Customer shipments of AMD Helios are planned for late 2026; AMD shares gained approximately 1% in premarket trading Monday
Advanced Micro Devices has joined forces with Celestica (CLS) to deliver an innovative rack-scale AI infrastructure solution. Named Helios, this platform targets massive AI training and inference operations across cloud computing, enterprise, and research facilities.
Celestica’s involvement encompasses research and development, engineering, and production of scale-up networking switches that form the backbone of the AMD Helios system. The switches conform to the Open Compute Project Open-Rack-Wide specification — an open architecture standard increasingly adopted by hyperscale data center operators.
The switches are designed to provide high-speed interconnects among AMD’s upcoming Instinct MI450 Series GPUs. The platform uses Ultra Accelerator Link over Ethernet for scale-up communication, which is critical for sustaining high-bandwidth data transfer across GPU clusters.
A rack-scale AI system treats the complete rack—rather than separate servers—as a unified computing module. This approach integrates GPUs, high-bandwidth networking, and liquid cooling technology into one cohesive infrastructure. The architecture is optimized for training large language models efficiently at enterprise scale.
“Helios represents a new blueprint for AI infrastructure,” said Forrest Norrod, AMD’s executive vice president and general manager of Data Center Solutions. He said it enables customers to deploy AI with the performance, efficiency, and flexibility needed for next-generation workloads.
Steven Dorwart, senior vice president at Celestica, said deploying AI at scale requires infrastructure that can be delivered quickly and consistently. Celestica’s role in Helios leans on its existing strengths in data center design, engineering, and supply chain.
AMD’s Market Performance and Analyst Outlook
AMD commands a market capitalization near $315 billion and has delivered a 92% gain over the trailing twelve months amid surging demand for AI computing infrastructure. The Helios platform announcement reinforces AMD’s expanding presence in the data center GPU sector.
UBS maintains a Buy recommendation on AMD with a $310 price objective, highlighting revenue expansion opportunities extending through 2027. The investment bank has identified potential for AMD to secure a third major hyperscale cloud provider as a data center customer, with Microsoft mentioned as a probable contender.
Wolfe Research similarly rates AMD as Outperform, emphasizing the company’s server business traction and its AI accelerator product pipeline as fundamental growth catalysts.
Additional Strategic Agreements
Separate from Helios, AMD recently finalized a multi-year licensing arrangement with Adeia Inc., obtaining rights to Adeia’s semiconductor intellectual property collection while resolving all pending legal disputes between the organizations.
Avalon GloboCare has also secured membership in AMD’s AI Developer Program, granting the company access to AMD’s development tools and technical resources for artificial intelligence projects.
AMD shares advanced roughly 1% in premarket trading Monday after the Helios platform announcement. Celestica stock jumped approximately 3% during the same trading period.
The AMD Helios platform is expected to reach customers in late 2026.


