The new Jericho chip can connect up to 32,000 GPUs concurrently and promises shorter job completion times for AI workloads.

Credit: Shutterstock

Broadcom's new networking chip, the Jericho3-AI, is designed to connect supercomputers and provides a high-performance fabric for artificial intelligence (AI) environments.

Broadcom has three switch families: the high-bandwidth Tomahawk platform, used primarily within data centers; the lower-bandwidth Trident platform, which offers greater programmability and deeper buffers and is therefore better suited for the edge; and the Jericho line, which sits between the other two and is best suited for low-latency interconnects.

Jericho3-AI is targeted at AI and machine-learning back-end networks, where the switch fabric sprays traffic across all network links and reorders it before delivering it to the endpoints (a simplified sketch of this spraying-and-reordering idea appears at the end of this article). It also has built-in congestion-management capabilities for load balancing and for minimizing network congestion.

The Jericho3-AI has a top throughput of 28.8Tb/s. It has 144 SerDes lanes operating at 106Gbps PAM4 and supports up to 18 ports of 800GbE, 36 ports of 400GbE, or 72 ports of 200GbE on the network-facing side.

Beyond speeds and feeds, Jericho3-AI features improved load balancing over the prior generation to ensure maximum network utilization and congestion-free operation. Zero packet jitter and a high radix (support for a large number of ports on the switch) allow a Jericho3-AI fabric to connect up to 32,000 GPUs.

The Jericho3-AI fabric is designed to cut the time spent moving data around during AI training and inference jobs. AI training can take weeks if not months, and it requires an awful lot of data to be moved across the network. Broadcom claims that these performance improvements reduce the cost of running AI workloads enough that the chip effectively pays for itself.

"The benchmark for AI networking is reducing the time and effort it takes to complete the training and inference of large-scale AI models. Jericho3-AI delivers significant reduction in job completion time compared to any other alternative in the market," Ram Velaga, senior vice president and general manager of the core switching group at Broadcom, said in a statement.

Jericho3-AI is now available to qualified customers.
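For readers unfamiliar with the technique, the minimal Python sketch below illustrates the general idea behind per-packet spraying and endpoint reordering described above. It is a conceptual illustration only, not Broadcom code, and it does not model Jericho3-AI internals; the Packet, spray, deliver, and reorder names are hypothetical, and the real fabric performs these steps in hardware.

# Conceptual sketch only: per-packet spraying across parallel links and
# sequence-number reordering at the receiver. Not Broadcom code; all names
# here are hypothetical and the real fabric does this in hardware.
import itertools
import random
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int        # sequence number assigned at the sender
    payload: bytes  # application data

def spray(packets, num_links):
    """Distribute packets round-robin across all available links."""
    links = [[] for _ in range(num_links)]
    link_cycle = itertools.cycle(range(num_links))
    for pkt in packets:
        links[next(link_cycle)].append(pkt)
    return links

def deliver(links):
    """Simulate links delivering packets with independent timing,
    so they arrive interleaved and out of order."""
    arrived = [pkt for link in links for pkt in link]
    random.shuffle(arrived)   # stand-in for variable per-link latency
    return arrived

def reorder(arrived):
    """Restore the original order before handing data to the endpoint."""
    return sorted(arrived, key=lambda pkt: pkt.seq)

if __name__ == "__main__":
    flow = [Packet(seq=i, payload=f"chunk-{i}".encode()) for i in range(12)]
    sprayed = spray(flow, num_links=4)   # every link carries part of the flow
    received = deliver(sprayed)          # arrival order is scrambled in transit
    in_order = reorder(received)         # endpoint sees the original sequence
    assert [p.seq for p in in_order] == list(range(12))
    print("delivered", len(in_order), "packets in order")

The point of spraying is that no single flow is pinned to one link, so every link stays busy and one congested path cannot stall a large training transfer; the reordering step then hides the out-of-order arrival from the endpoint.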