Intel has announced infrastructure processing units aimed at cloud providers that give customers full control of the CPUs they lease.

There was a time when Intel was all-x86, all the time, everywhere. Not anymore. Last week Intel held its annual Architecture Day with previews of multiple major upcoming architectures beyond x86. For once, it's not hyperbole when the company calls these some of the "biggest shifts in a generation." These aren't just new architectures or more and faster cores; they are new designs and whole new ways of doing things.

Instead of simply packing more cores onto a smaller die, Intel is switching to a new hybrid architecture that adds low-energy-draw cores, similar to what some Arm chip makers have been doing for years on mobile devices. Intel's announcements covered both client and server, but we'll stick with the server news here.

Sapphire Rapids is the codename for Intel's next generation of Xeon Scalable processors and the first to feature the company's Performance Core microarchitecture, an upcoming design that emphasizes low latency and single-threaded core performance. A smarter branch predictor improves the flow of code through the instruction pipeline, and eight decoders enable more parallelism in code processing. A wider back end adds ports for more and faster parallel processing.

Sapphire Rapids will also offer larger private and shared caches, higher core counts, and support for DDR5 memory, PCI Express Gen 5, the next generation of Optane memory, Compute Express Link (CXL) 1.1, and on-package High Bandwidth Memory (HBM).
Sapphire Rapids will add several technologies not used in previous generations of the Xeon Scalable processor: Intel Accelerator Interfacing Architecture (AIA), which improves signaling to accelerators and devices; Intel Advanced Matrix Extensions (AMX), a workload-acceleration engine for the tensor processing used in deep-learning algorithms; and Intel Data Streaming Accelerator (DSA), which is meant to offload common data-movement tasks from the CPU.

Introducing the IPU

Intel also announced a trio of new Infrastructure Processing Units (IPUs), designed around data movement specifically for cloud and communications services. The IPUs combine Intel Xeon-D processor cores, Agilex FPGAs, and Intel Ethernet technologies, and all are meant to reduce network overhead and increase throughput. IPUs are also designed to separate the cloud infrastructure from tenant or guest software, so guests can fully control the CPU with their own software while the service provider retains control of the infrastructure and the root of trust.

The first of the three is Oak Springs Canyon, which pairs Intel Xeon-D cores with an Agilex FPGA and dual 100G Ethernet network interfaces. It supports Intel's Open vSwitch technology and can offload network virtualization and storage functions such as NVMe over Fabrics and RoCE v2 to reduce CPU overhead.

Second is the Intel N6000 Acceleration Development Platform, codenamed Arrow Creek, a 100G SmartNIC designed for use with Intel Xeon-based servers. It features an Intel Agilex FPGA and an Intel Ethernet 800 Series controller for high-performance 100G network acceleration, and it is geared toward communications service providers (CoSPs).

Finally, there is a new ASIC IPU, codenamed Mount Evans, a first of its type from Intel. Intel says it designed Mount Evans in cooperation with a top cloud service provider. Mount Evans is based on Intel's packet-processing engine, instantiated in an ASIC.
The Mount Evans ASIC supports use cases such as vSwitch offload, firewalls, and virtual routing, and it emulates NVMe devices at very high IOPS rates by extending the Optane NVMe controller. Mount Evans features up to 16 Arm Neoverse N1 cores with a dedicated compute cache and up to three memory channels, and the ASIC can support up to four host Xeons with 200Gbps of full-duplex bandwidth between them.

This is only the beginning of the news out of Architecture Day. More will come.
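To make the AMX announcement above a little more concrete: the operation AMX accelerates is a tiled matrix multiply-accumulate on small integer (or bfloat16) data, accumulating into wider results. Here is a minimal Python sketch of those semantics; this is an illustration of the math only, not Intel's implementation, since real AMX executes the operation on 2D tile registers in hardware via dedicated instructions.

```python
# Conceptual sketch of the int8 -> int32 tile multiply-accumulate
# that an AMX-style tile instruction performs. Illustration only;
# actual AMX hardware operates on 2D tile registers.

def tile_dot_product_int8(a, b, c):
    """Multiply int8 tile `a` (M x K) by int8 tile `b` (K x N),
    accumulating into the int32 tile `c` (M x N) in place."""
    m, k = len(a), len(a[0])
    n = len(b[0])
    for i in range(m):
        for j in range(n):
            for p in range(k):
                c[i][j] += a[i][p] * b[p][j]
    return c

# Tiny example with 2x2 tiles; the accumulator starts at zero.
a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
c = [[0, 0], [0, 0]]
print(tile_dot_product_int8(a, b, c))  # [[19, 22], [43, 50]]
```

Because the accumulator tile is reused across calls, a large matrix multiply can be built by sweeping small tiles over the inputs, which is exactly the access pattern deep-learning inference kernels rely on.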