TLDR
- AWS will integrate Nvidia’s NVLink Fusion technology into its future Trainium4 AI chip; no release date has been announced
- Amazon launched new Trainium3 servers on December 2, 2025, offering four times the computing power of the previous generation while using 40% less energy
- AWS introduced AI Factories, dedicated AI infrastructure in customer data centers for faster model training
- Nvidia’s NVLink technology now adopted by Intel, Qualcomm, and AWS to create faster chip connections
- New Trainium3 servers contain 144 chips each and are available immediately, competing on price performance with Nvidia
Amazon Web Services dropped major AI hardware news at its annual Las Vegas conference on December 2, 2025. The cloud computing giant announced plans to use Nvidia’s crown jewel technology in future chips while simultaneously rolling out new servers available today.
AWS will adopt Nvidia’s NVLink Fusion technology in its upcoming Trainium4 chip. The company hasn’t specified a release date for Trainium4. NVLink creates high-speed connections between different types of chips, making it one of Nvidia’s most valuable technologies.
This partnership puts AWS in the same camp as Intel and Qualcomm, which have already adopted NVLink. Nvidia has been actively pushing other chip makers to use its connection technology. The move helps AWS build larger AI servers whose components can communicate with each other faster.
Fast communication between servers matters for training large AI models. These models require thousands of machines working together. Slow connections create bottlenecks that waste time and money.
New Servers Available Now
AWS didn’t just talk about future plans. The company released new Trainium3 servers on December 2, 2025. Each server packs 144 chips inside.
The performance numbers tell an interesting story. These new servers deliver four times the computing power of AWS’s previous generation of AI servers. At the same time, they use 40% less power.
Dave Brown, AWS vice president of compute and machine learning services, didn’t share exact performance figures. He made it clear AWS plans to compete on price. The company wants customers to choose Trainium3 because it delivers better value than competitors, including Nvidia.
“We’ve got to prove to them that we have a product that gives them the performance that they need and get a right price point,” Brown told Reuters. The goal, he said, is to make customers want to use AWS chips.
AI Factories Enter the Picture
The Nvidia partnership brings another new offering called AI Factories. These are dedicated AI infrastructure setups inside customer data centers. Customers get exclusive access to hardware configured for speed and readiness.
AI Factories aim to solve a problem many companies face. Building and maintaining AI infrastructure takes expertise and resources. AWS wants to handle the complexity while customers focus on their AI models.
Nvidia CEO Jensen Huang praised the partnership in a statement. He said Nvidia and AWS are creating the compute fabric for the AI industrial revolution. The technology should bring advanced AI to companies worldwide, according to Huang.
The announcements came during AWS’s week-long cloud computing conference in Las Vegas. The event draws around 60,000 attendees. Amazon also plans to showcase updated versions of its Nova AI model, which first appeared last year.
AWS made the Trainium3 servers available immediately on December 2, 2025. Each server contains 144 chips, delivers four times the computing power of the previous generation, and consumes 40% less energy.