Cisco recently released a new router aimed at helping data center operators overcome power and capacity limits by linking existing data centers into a unified computing cluster. The router, the Cisco 8223, delivers 51.2 Tbps of throughput and is built on Cisco's in-house Silicon One P200 ASIC. Paired with 800 Gbps coherent optics, Cisco says the platform can support connections spanning up to 1,000 kilometers.
By chaining together enough of these routers, Cisco claims the architecture can theoretically deliver more than three exabits per second of aggregate bandwidth, enough to tie together the largest AI training clusters. A network at that scale could even support multi-site deployments spanning millions of GPUs, though reaching that level of bandwidth would be costly, requiring thousands of routers.
For customers that do not need that much capacity, Cisco says the routers can support up to 13 petabits per second in smaller two-tier networks. This kind of high-speed cross-data center networking has drawn the attention of several major cloud providers, including Microsoft and Alibaba, which Cisco says are evaluating the chips for potential deployment.
Dennis Cai, head of network infrastructure at Alibaba Cloud, said: "This new routing chip will enable us to expand into the core network, replacing traditional chassis-based routers with a cluster of P200-powered devices. This transition will significantly improve the stability, reliability, and scalability of our data center interconnect network."
Cisco is not the only network vendor chasing distributed data centers. Earlier this year, Nvidia and Broadcom launched their own ASICs for cross-data center networking. Similar to the P200, Broadcom's Jericho4 is a 51.2 Tbps switching chip designed primarily for high-speed data center network fabrics; Broadcom says it can link data centers more than 100 kilometers apart at speeds exceeding 100 Tbps.
Although these switching and routing ASICs can help data center operators work around power and capacity limits, latency remains a persistent challenge. We tend to treat the speed of light as instantaneous, but it is not: a packet traveling between two data centers 1,000 kilometers apart needs roughly five milliseconds to reach its destination over fiber, before accounting for any additional delay introduced by the equipment along the path.
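As a rough sanity check on that figure, light in optical fiber travels at about c divided by the fiber's refractive index. The short Python sketch below (the ~1.47 refractive index is an assumption, and only propagation delay is modeled) lands at just under 5 ms one way for a 1,000 km span, consistent with the estimate above.

```python
# Back-of-envelope check of the ~5 ms figure: light in fiber travels at
# roughly c / n, where n ~ 1.47 is an assumed refractive index for silica fiber.
C_VACUUM_KM_S = 299_792.458      # speed of light in vacuum, km per second
FIBER_INDEX = 1.47               # assumed refractive index of the fiber

def one_way_delay_ms(distance_km: float) -> float:
    """Propagation delay only; excludes amplifiers, retimers, and queuing."""
    return distance_km / (C_VACUUM_KM_S / FIBER_INDEX) * 1000

print(one_way_delay_ms(1000))    # ~4.9 ms for a 1,000 km span, one way
```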
Nevertheless, a study published earlier this year by the Google DeepMind team showed that many of these latency issues can be mitigated by compressing models during training and strategically scheduling communication between the two data centers.
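To illustrate the general direction only, the toy sketch below is not DeepMind's actual recipe: two simulated "sites" each run many cheap local training steps and exchange a single quantized update per round, so the slow cross-datacenter link is used once per round rather than once per step. The sync interval H, the 8-bit quantizer, and the toy regression task are all illustrative assumptions.

```python
# Toy sketch of latency-tolerant multi-site training: local steps plus
# infrequent, compressed cross-site synchronization (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
true_w = rng.normal(size=8)          # target weights of a toy regression task

def local_gradient(w, batch=32):
    """Gradient of mean-squared error on a fresh synthetic batch."""
    X = rng.normal(size=(batch, w.size))
    y = X @ true_w
    return 2 * X.T @ (X @ w - y) / batch

def quantize(delta, bits=8):
    """Crude uniform quantization standing in for update compression."""
    scale = np.max(np.abs(delta)) or 1.0
    levels = 2 ** (bits - 1) - 1
    return np.round(delta / scale * levels) * scale / levels

H = 20                               # local steps between cross-site syncs
lr = 0.01
global_w = np.zeros_like(true_w)

for round_ in range(10):
    site_ws = []
    for _site in range(2):           # two datacenters train independently
        w = global_w.copy()
        for _ in range(H):           # cheap local-only steps, no WAN traffic
            w -= lr * local_gradient(w)
        site_ws.append(w)
    # One compressed exchange per round instead of one per step.
    deltas = [quantize(w - global_w) for w in site_ws]
    global_w += np.mean(deltas, axis=0)
    print(round_, np.linalg.norm(global_w - true_w))  # error shrinks per round
```

Reducing how often the sites synchronize and compressing what they exchange is the broad idea; the actual methods studied are considerably more sophisticated.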
Key Points:
🌐 The new Cisco 8223 router delivers 51.2 Tbps of throughput, linking data centers into a unified computing cluster.
💡 At scale, the architecture can support more than three exabits per second of aggregate bandwidth, suited to large-scale AI training.
🚀 Major cloud providers including Microsoft and Alibaba are evaluating the technology to improve the stability and reliability of their networks.