TLDR
- Cisco Systems introduced its Silicon One G300 chip on Tuesday, targeting the $600 billion AI infrastructure market
- The 3-nanometer chip accelerates AI computing by 28% and cuts energy use by 70% in liquid-cooled systems
- Launch scheduled for second half of 2026 to compete directly with Nvidia and Broadcom networking solutions
- Chip features automatic microsecond-level data rerouting to prevent network bottlenecks in massive data centers
- New technology powers upcoming Cisco N9000 and 8000 systems for AI workloads
Cisco Systems made waves Tuesday with a new product aimed squarely at the AI infrastructure market. The company unveiled its Silicon One G300 switch chip for massive data center operations.
The announcement puts Cisco in the ring with Nvidia and Broadcom. These companies are competing for dominance in the $600 billion AI infrastructure spending surge.
The G300 chip enables AI systems to communicate across hundreds of thousands of network links. Cisco set a second-half-2026 release date for the product.
Taiwan Semiconductor Manufacturing Co. (TSMC) will build the chip using its cutting-edge 3-nanometer process. This technology gives the G300 its performance advantages over older designs.
Performance Breakthroughs
The chip delivers a 28% speed improvement for certain AI computing tasks. This boost comes from smart data routing that happens automatically.
Martin Lund, executive vice president of Cisco's Common Hardware Group, told Reuters the chip reroutes data around problems within microseconds.
Networks with tens of thousands or hundreds of thousands of connections face frequent congestion. The G300 tackles these problems before they slow down operations.
Cisco built “shock absorber” features into the chip. These prevent network crashes when massive data spikes hit the system.
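Cisco has not published how the G300's buffering works, but the "shock absorber" idea can be illustrated with a toy sketch: a bounded buffer soaks up a sudden traffic burst and lets downstream links drain it at a steady rate, rather than dropping packets or stalling. All names here are hypothetical, purely for illustration.

```python
from collections import deque

class BurstBuffer:
    """Toy 'shock absorber': a bounded queue that soaks up traffic
    bursts so a steady downstream drain rate is never exceeded.
    (Illustrative only; the G300's buffering is hardware-level
    and proprietary.)"""

    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity
        self.dropped = 0

    def arrive(self, packets):
        # Absorb a burst; only overflow beyond capacity is lost.
        for p in range(packets):
            if len(self.queue) < self.capacity:
                self.queue.append(p)
            else:
                self.dropped += 1

    def drain(self, rate):
        # Downstream link forwards at most `rate` packets per tick.
        sent = min(rate, len(self.queue))
        for _ in range(sent):
            self.queue.popleft()
        return sent

buf = BurstBuffer(capacity=100)
buf.arrive(80)             # sudden spike, far above the drain rate
sent = buf.drain(rate=10)  # drained steadily, nothing dropped
```

The spike of 80 packets fits within the buffer, so nothing is lost; the queue simply empties over the following ticks instead of overwhelming the downstream link.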
Energy consumption drops by roughly 70% in fully liquid-cooled systems. This efficiency gain matters as data centers consume enormous amounts of power.
The chip uses Intelligent Collective Networking technology, which lets AI training and delivery systems communicate efficiently across sprawling networks.
Market Competition Intensifies
Networking emerged as a crucial front in the AI infrastructure wars. Nvidia’s latest system unveiled last month included its own proprietary networking chip.
Broadcom entered the space with its Tomahawk chip series. The product targets the same customers and use cases as Cisco’s offering.
Cisco describes Silicon One as the most scalable and programmable unified networking architecture available. The platform serves AI, hyperscaler, data center, enterprise, and service provider segments.
The G300 will drive new Cisco N9000 and 8000 systems. These products push the boundaries of what’s possible in AI data center networking.
Lund emphasized Cisco’s focus on total network efficiency. The approach looks at the entire system rather than just individual component speed.
Problems happen constantly in networks connecting hundreds of thousands of AI chips. Without proper management, these issues cascade into major slowdowns.
The G300’s automatic rerouting prevents small problems from becoming big ones. The chip makes these decisions faster than any human operator could react.
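The G300's rerouting logic is proprietary, but the underlying idea is standard network practice: when a link fails, find an alternate path and redirect traffic before queues back up. A minimal sketch, with a hypothetical four-switch ring fabric:

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search for a hop-minimal path over undirected
    links; returns None if dst is unreachable."""
    parents = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for a, b in links:
            # An undirected link offers a next hop from either end.
            for nxt in ((b,) if a == node else (a,) if b == node else ()):
                if nxt not in parents:
                    parents[nxt] = node
                    queue.append(nxt)
    return None

# Toy fabric: four switches in a ring, traffic flowing A -> C.
links = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")}
primary = shortest_path(links, "A", "C")

# A link fails mid-flight; recompute and redirect around it.
links.discard(("B", "C"))
backup = shortest_path(links, "A", "C")  # now A -> D -> C
```

In a real fabric the recomputation happens in silicon across vastly more links, which is what makes microsecond-scale reaction possible; the software sketch only shows the decision being made.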
Tech companies are pouring money into AI infrastructure at record levels. The $600 billion spending boom reflects how critical these systems have become.
Cisco’s timing puts it up against established competitors already serving customers. Both Nvidia and Broadcom have shipping products in this category.
The second-half-2026 launch gives Cisco time to refine the technology. It also means waiting while competitors potentially gain more market share.
The chip handles peak demand without breaking stride. This reliability proves essential for companies running AI workloads 24/7.