The A3 supercomputer's scale can provide up to 26 exaFlops of AI performance, Google says.

Google Cloud announced a new supercomputer virtual-machine series aimed at rapidly training large AI models. Unveiled at the Google I/O conference, the new A3 supercomputer VMs are purpose-built to handle the considerable resource demands of a large language model (LLM).

“A3 GPU VMs were purpose-built to deliver the highest-performance training for today’s ML workloads, complete with modern CPU, improved host memory, next-generation Nvidia GPUs and major network upgrades,” the company said in a statement.

The instances are powered by eight of Nvidia’s new H100 GPUs, which just began shipping earlier this month, along with Intel’s 4th Generation Xeon Scalable processors, 2TB of host memory, and 3.6 TB/s of bisectional bandwidth between the eight GPUs via Nvidia’s NVSwitch and NVLink 4.0 interconnects.

Altogether, Google claims these machines can provide up to 26 exaFlops of performance. That figure is the cumulative performance of the entire supercomputer, not of each individual instance. Even so, it dwarfs the figure posted by Frontier, the fastest ranked supercomputer, which came in at just over one exaFlop, though the two numbers are not directly comparable: Google's measures lower-precision AI performance, while Frontier's is measured in double-precision math.

According to Google, A3 is the first production-scale deployment of its custom GPU-to-GPU data interface, which Google calls an infrastructure processing unit (IPU). It shares data at 200 Gbps directly between GPUs without having to go through the CPU. The result is a ten-fold increase in available network bandwidth for A3 virtual machines compared with prior-generation A2 VMs.
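The 3.6 TB/s figure lines up with Nvidia's published per-GPU NVLink bandwidth for the H100. A rough back-of-envelope check (the 900 GB/s per-GPU number is an assumption taken from Nvidia's H100 spec sheet, not from Google's announcement):

```python
# Sanity check on Google's 3.6 TB/s bisectional-bandwidth claim.
num_gpus = 8
per_gpu_nvlink_gb_s = 900  # assumption: H100 SXM NVLink 4.0 aggregate bandwidth

# Bisection bandwidth: split the 8 GPUs into two halves of 4; traffic
# crossing the cut is limited by four GPUs' worth of NVLink capacity.
bisection_tb_s = (num_gpus // 2) * per_gpu_nvlink_gb_s / 1000
print(bisection_tb_s)  # 3.6
```

Four GPUs' worth of 900 GB/s links crossing the cut yields exactly the 3.6 TB/s Google quotes, which suggests the number is the NVSwitch fabric's bisection bandwidth rather than total aggregate bandwidth.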
A3 workloads run on Google’s specialized Jupiter data center networking fabric, which the company says “scales to tens of thousands of highly interconnected GPUs and allows for full-bandwidth reconfigurable optical links that can adjust the topology on demand.”

Google will offer the A3 in two ways: customers can run it themselves, or use it as a managed service where Google handles most of the work. If you opt to do it yourself, the A3 VMs run on Google Kubernetes Engine (GKE) and Google Compute Engine (GCE). If you go with the managed service, the VMs run on Vertex AI, the company’s managed machine-learning platform.

The A3 virtual machines are available in preview, which requires filling out an application to join the Early Access Program. Google makes no promises you will get a spot in the program.
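For the self-managed route on Compute Engine, provisioning would look something like the sketch below. The machine-type name (`a3-highgpu-8g`) follows Google Cloud's naming convention for accelerator-optimized families, where the GPUs are bundled into the machine type rather than attached separately; the zone and image choices are placeholders, so check the Compute Engine docs for what your project is actually granted during the preview.

```shell
# Sketch: creating an A3 VM on GCE once accepted into the Early Access Program.
# Machine-type name, zone, and image below are assumptions for illustration.
gcloud compute instances create my-a3-vm \
    --machine-type=a3-highgpu-8g \
    --zone=us-central1-a \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --maintenance-policy=TERMINATE
```

GPU-backed VMs cannot live-migrate during host maintenance, hence the `TERMINATE` maintenance policy.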