Google Cloud says its new C3 virtual machine instances deliver performance gains of up to 20% over its previous-generation C2 instances.

Google Cloud announced new virtual machines for its cloud platform based on Intel’s newest Xeon Scalable processors. In addition to the new C3 virtual machine series, Google also announced it is deploying Infrastructure Processing Units (IPUs), which are designed to intelligently route network traffic and take the load of network data processing off the CPU. The company made the announcements at its Google Cloud Next ’22 conference, which was held virtually.

The IPU chip, formally known as the E2000, was co-designed by Google and Intel and features 16 Arm Neoverse cores and 200GbE networking.

C3 machine instances deliver performance gains of up to 20% over the previous generation of C2 instances. They also benefit from a recently launched product called Hyperdisk, a block storage system that Google says offers 80% higher IOPS per vCPU for data analytics and DBMS workloads than other hyperscalers. All told, Google says C3 VMs with Hyperdisk deliver four times higher throughput and 10 times greater IOPS than the previous C2 generation. That makes the C3 instances capable of high-performance computing (HPC) levels of performance, something cloud-based VMs are not known for.

Intel launched the IPU last year. The concept is essentially the same as what its silicon competitors (AMD, Nvidia, Marvell) call data processing units (DPUs): lightening the burden on CPUs. IPUs are designed to offload functions such as routing network traffic, traffic analysis, and storage and network virtualization.

The new Xeon CPUs (codenamed Sapphire Rapids) have been subject to considerable delays due to manufacturing problems. They were originally slated to launch in 2021, but Intel is still struggling to bring them to market.

Google Cloud’s C3 family of VMs is now available in private preview. General availability has not been announced.
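
For readers who want to try the new instances once they have access, the sketch below shows one way a C3 VM might be provisioned with the google-cloud-compute Python client. This is not Google's official sample: the project ID, zone, instance name, and the assumption that the c3-standard-4 machine type is available in that zone are all placeholders, and the shape of the API is the standard Compute Engine client library rather than anything C3-specific.

```python
# Minimal sketch (assumptions noted above) of creating a C3 instance with the
# google-cloud-compute client library. Requires credentials and access to the
# C3 private preview in the chosen zone.
from google.cloud import compute_v1


def create_c3_instance(project_id: str, zone: str, name: str) -> compute_v1.Instance:
    """Create a C3 VM (c3-standard-4 assumed) with a small Debian boot disk."""
    # Boot disk: Debian 11 public image, 10 GB, deleted with the instance.
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-11",
            disk_size_gb=10,
        ),
    )

    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/c3-standard-4",
        disks=[boot_disk],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )

    client = compute_v1.InstancesClient()
    operation = client.insert(
        project=project_id, zone=zone, instance_resource=instance
    )
    operation.result()  # Block until the create operation finishes.
    return client.get(project=project_id, zone=zone, instance=name)


if __name__ == "__main__":
    # Placeholder identifiers -- replace with a real project and a zone
    # where C3 machine types are offered.
    vm = create_c3_instance("my-project", "us-central1-a", "c3-demo")
    print(vm.status)
```

Attaching Hyperdisk volumes would follow the same AttachedDisk pattern once the corresponding disk types are exposed in a given project; until general availability, the exact options are subject to change.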