TLDR
- Broadcom TPUs priced at $10,500-$15,000 offer cost savings compared to Nvidia’s $40,000-$50,000 Blackwell processors
- Anthropic ordered $21 billion in TPUs while Meta Platforms explores switching from Nvidia chips
- UBS forecasts 3.7 million Broadcom TPU shipments in 2026, while Broadcom projects $60 billion in AI revenue that year
- Nvidia licensed inference technology from Groq for $20 billion to counter growing TPU competition
- Cisco enters the market with its Silicon One G300 chip, which the company says speeds up certain AI computing jobs by 28%
The artificial intelligence chip industry is seeing increased competition as Broadcom and Cisco launch products aimed at challenging Nvidia’s market position. Price differences and specialized features are driving companies to evaluate alternatives.
Broadcom’s Tensor Processing Units, designed in collaboration with Google, cost between $10,500 and $15,000 per unit. This represents a fraction of Nvidia’s Blackwell chip pricing, which ranges from $40,000 to $50,000. The cost advantage has attracted attention from major technology companies.
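To put that gap in concrete terms, here is a minimal back-of-envelope sketch in Python using the midpoints of the price ranges quoted above. The cluster size is a hypothetical assumption for illustration, and raw hardware price ignores differences in performance, power, and software.

```python
# Back-of-envelope comparison of the accelerator list prices quoted above.
# Midpoint pricing and cluster size are illustrative assumptions only;
# unit price alone ignores performance, power, and software differences.

TPU_PRICE = (10_500 + 15_000) / 2        # midpoint of quoted TPU range
BLACKWELL_PRICE = (40_000 + 50_000) / 2  # midpoint of quoted Blackwell range
CLUSTER_SIZE = 10_000                    # hypothetical accelerator count

tpu_cluster = CLUSTER_SIZE * TPU_PRICE
gpu_cluster = CLUSTER_SIZE * BLACKWELL_PRICE
savings = 1 - tpu_cluster / gpu_cluster

print(f"TPU cluster:       ${tpu_cluster:,.0f}")        # $127,500,000
print(f"Blackwell cluster: ${gpu_cluster:,.0f}")        # $450,000,000
print(f"Up-front hardware savings: {savings:.0%}")      # 72%
```

At these list-price midpoints, the up-front hardware outlay for an all-TPU cluster comes to roughly 28 cents on the dollar versus Blackwell, which helps explain the buyer interest described below.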
AI startup Anthropic has committed to two large TPU orders totaling $21 billion. Meta Platforms is in discussions about adopting the processors, The Wall Street Journal reported. These deals signal a broader market shift as TPUs become available to customers beyond Google.
UBS analyst Timothy Arcuri estimates Broadcom will deliver 3.7 million TPUs in 2026. That figure is projected to surpass five million units in 2027. The analyst described demand as accelerating rapidly.
TPUs Excel at Inference Tasks
Tensor Processing Units perform best at AI inference, where models produce outputs and answers. Nvidia chips maintain an edge in model training. Performance benchmarks indicate training times of 35 to 50 days on Nvidia GPUs versus approximately three months on TPUs.
The inference market is expanding quickly. Mizuho analysts report inference currently accounts for 20% to 40% of AI workloads. They expect this share to reach 60% to 80% within five years. This growth pattern favors processors optimized for inference operations.
Broadcom anticipates generating $60 billion in AI revenue during 2026. The company projects revenue will climb to $106 billion in 2027. Nvidia expects roughly $300 billion in data center sales for fiscal 2027. TPU pricing is forecast to rise toward $20,000 per unit over the next few years.
Nvidia Responds with Strategic Acquisition
Nvidia purchased a nonexclusive license for technology from Groq, an AI hardware startup focused on inference. The transaction cost $20 billion, including compensation packages for Groq employees joining Nvidia. The move aims to strengthen Nvidia’s capabilities in the growing inference segment.
Cisco Debuts Networking Chip
Cisco Systems unveiled its Silicon One G300 switch chip designed for massive AI data centers. The product will launch in the second half of 2026. Taiwan Semiconductor Manufacturing Company will manufacture the chip using 3-nanometer process technology.
The chip handles communication between AI training and inference systems across hundreds of thousands of connections. Martin Lund, executive vice president at Cisco, explained the chip includes features that prevent network congestion during data traffic surges. The system reroutes data automatically within microseconds when problems occur.
Cisco states the chip can accelerate certain AI computing jobs by 28%. The company emphasizes total network efficiency rather than individual component performance. Lund noted that network issues arise regularly when managing tens or hundreds of thousands of simultaneous connections.
The networking sector has become a critical battleground in AI infrastructure development. Nvidia’s recent system launch included networking chips that compete with Cisco’s offerings, and Broadcom markets its Tomahawk chip series in the same category.
All three companies are pursuing opportunities in the $600 billion AI infrastructure spending surge. Each targets distinct market segments spanning training, inference, and networking infrastructure.