TL;DR:
- AMD strengthens AI push through strategic sovereign data center partnership with TCS in India.
- Helios platform delivers rack-scale AI computing with 72 MI455X accelerators per system.
- AMD counters Nvidia with higher memory capacity and an open infrastructure design.
- India deal supports AMD’s long-term vision for sovereign, large-scale AI data centers.
AMD is deepening its artificial intelligence ambitions through a strategic partnership with Tata Consultancy Services (TCS), aimed at building sovereign AI data center infrastructure in India.
The collaboration comes at a time when global demand for AI computing capacity is surging, and governments are increasingly prioritizing domestic control over critical AI infrastructure.
The deal centers on AMD’s upcoming Helios GPU-based platform, which is expected to begin customer deployments globally, including in India, in the second half of 2026. The initiative positions AMD as a stronger competitor in the rapidly expanding AI infrastructure market, currently dominated by Nvidia.
Helios platform scales compute
At the core of AMD’s strategy is the Helios rack-scale system, a high-performance computing platform designed for next-generation AI workloads. According to Mahesh Balasubramanian, AMD’s senior director of data center GPU product marketing, a single Helios rack integrates 72 MI455X accelerators and delivers up to 2.9 exaflops of FP4 compute performance.
This design reflects a broader industry shift from standalone GPUs to full rack-scale systems that combine compute, memory, cooling, and networking into tightly integrated units. Each rack can draw more than 120 kilowatts of power, making AI infrastructure development as much a physical engineering challenge as a computing one.
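The rack-level figures above imply rough per-accelerator numbers. A minimal back-of-envelope sketch, using only the article’s stated specs (72 accelerators, 2.9 exaflops FP4, 120+ kW per rack):

```python
# Back-of-envelope figures derived from the rack specs quoted above.
ACCELERATORS_PER_RACK = 72
RACK_FP4_EXAFLOPS = 2.9
RACK_POWER_KW = 120  # a floor: AMD says "more than 120 kilowatts"

# Per-accelerator FP4 throughput, in petaflops (1 exaflop = 1,000 petaflops).
pflops_per_gpu = RACK_FP4_EXAFLOPS * 1000 / ACCELERATORS_PER_RACK

# Per-accelerator share of rack power, in kilowatts. This amortizes
# networking and other rack overhead across the GPUs, so it is an
# illustrative floor, not a GPU power rating.
kw_per_gpu = RACK_POWER_KW / ACCELERATORS_PER_RACK

print(f"~{pflops_per_gpu:.1f} PFLOPS FP4 per accelerator")
print(f"~{kw_per_gpu:.2f} kW per accelerator (rack-level floor)")
```

That works out to roughly 40 petaflops of FP4 compute and at least ~1.7 kW of rack power per accelerator, which is why the article frames rack-scale AI as a physical engineering problem as much as a computing one.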
Competing in AI hardware race
AMD’s expansion comes amid intensifying competition with Nvidia, which currently controls more than 80% of the GPU market. Nvidia’s upcoming Vera Rubin POD architecture is expected to deliver up to 3.6 exaflops of performance, keeping Nvidia slightly ahead in raw compute capability.
However, AMD is betting on a different competitive angle. The MI455X accelerators feature 432 GB of High Bandwidth Memory (HBM4), significantly higher than Nvidia’s 288 GB. This memory advantage could allow larger AI models to run within a single system node, reducing complexity and improving efficiency for enterprise-scale deployments.
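A rough illustration of that memory headroom, assuming FP4 weights at 0.5 bytes per parameter (an assumption for illustration; real deployments also need memory for KV cache, activations, and framework overhead, so these are upper bounds):

```python
# Illustrative upper bounds on model size per accelerator, assuming
# FP4 weights (0.5 bytes/parameter). HBM capacities are from the article.
BYTES_PER_FP4_PARAM = 0.5
GB = 10**9

mi455x_hbm_gb = 432  # AMD MI455X
rival_hbm_gb = 288   # Nvidia figure cited in the article

def max_params_billions(hbm_gb: float) -> float:
    """Upper bound on FP4 parameters that fit in HBM, in billions."""
    return hbm_gb * GB / BYTES_PER_FP4_PARAM / 1e9

print(f"MI455X: up to ~{max_params_billions(mi455x_hbm_gb):.0f}B FP4 params")
print(f"Rival:  up to ~{max_params_billions(rival_hbm_gb):.0f}B FP4 params")
print(f"Memory advantage: {mi455x_hbm_gb / rival_hbm_gb:.1f}x")
```

Under these assumptions the 1.5x capacity gap is what lets a larger model stay on a single node instead of being sharded across several, which is the complexity reduction the article describes.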
AMD’s approach also emphasizes open standards, positioning itself as an alternative to Nvidia’s more proprietary ecosystem.
TCS partnership drives infrastructure vision
The partnership with TCS is central to AMD’s sovereign AI strategy. Together, the companies are developing a Helios-based reference architecture for AI data centers in India, designed to support up to 200 MW of capacity. The project aligns with broader national efforts to build localized AI infrastructure and reduce dependence on foreign-controlled cloud systems.
TCS’s broader infrastructure initiative, including its HyperVault unit, is targeting gigawatt-scale capacity development and has attracted up to ₹18,000 crore (about $2.1 billion) in combined commitments with global investment partners. The initiative reflects growing collaboration between hyperscale integrators and chipmakers to build large-scale AI factories.
Even as AMD strengthens its partnership with TCS, analysts expect enterprise buyers to diversify across multiple vendors to mitigate supply chain risk and optimize pricing leverage in an increasingly competitive AI hardware market.
Market outlook and deployment timeline
While AMD has confirmed that Helios deployments will begin in the second half of 2026, some industry observers suggest timelines could shift depending on competing product cycles. Nvidia is also expected to accelerate its roadmap, potentially influencing broader AI infrastructure rollout schedules into 2027.
Still, AMD’s focus on rack-scale systems, memory density, and sovereign infrastructure partnerships signals a clear long-term strategy to gain ground in the AI compute race.


