Michael Cooney
Senior Editor

Intel flexes AI chops with Gaudi 3 accelerator, new networking for AI fabrics

Analysis
09 Apr 2024 | 4 mins
CPUs and Processors | Generative AI | Networking

At the Intel Vision 2024 event, Intel unveiled the Gaudi 3 accelerator along with AI-optimized Ethernet networking tech, including an AI NIC and AI connectivity chiplets.

Intel CEO Pat Gelsinger unveils the Gaudi 3 accelerator at Intel Vision 2024.
Credit: Intel Corporation

Intel is stepping up its drive to build AI into corporate edge and data-center networks, servers and other devices. At its Intel Vision 2024 event, the chip giant announced plans for Ethernet network interface cards (NICs) based on the forthcoming Ultra Ethernet Consortium (UEC) specifications, new Xeon 6 processors with high-performance capabilities, and the Gaudi 3 AI accelerator, which is aimed at handling AI training and inference workloads in large-scale generative AI environments.

“Ultimately our goal is to deliver AI systems that span from the API to enterprise IoT and edge to the data center and make it easier for enterprises, in general, to adopt and effectively use AI,” said Sachin Katti, senior vice president and general manager of the network and edge group at Intel.

On tap is an array of Ethernet-based Intel AI NIC and AI connectivity chiplets for integration into XPUs, Gaudi-based accelerator systems, and a range of soft and hard reference AI interconnect designs for Intel Foundry, Katti said.

The Ethernet-based systems will adhere to the UEC's open network fabric specification, which is set to be released later this year. Intel is a founding member of the UEC, which now includes more than 50 vendors developing technologies to improve the scale, stability, and reliability of Ethernet networks so they can meet AI's high-performance networking requirements. The UEC specs will cover a range of scalable Ethernet improvements, including better multi-path and packet-delivery options as well as modern congestion-management and telemetry features.

In addition to the UEC support, Intel announced a plan – together with SAP, Red Hat, VMware, and others – to create an open platform to accelerate enterprise deployment of secure generative AI systems. The idea is to ensure that when enterprises build AI systems, they have open, standards-based networking connectivity so they can securely deploy AI models at scale, Katti said.

Intel also introduced Xeon 6 processors that it says will deliver significant energy savings and give enterprises the compute to support AI training and capabilities such as retrieval-augmented generation (RAG), which grounds large language model (LLM) responses in data retrieved from an external knowledge base.
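
For readers unfamiliar with the pattern, the sketch below shows the basic RAG loop in plain Python. It is a minimal, framework-agnostic illustration: the toy knowledge base, the keyword-overlap retriever, and the generate() stub are hypothetical stand-ins, not part of any Intel product or announced API.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern described above.
# The knowledge base, the scoring function, and generate() are illustrative placeholders.

KNOWLEDGE_BASE = [
    "Gaudi 3 integrates 24 ports of 200-gigabit Ethernet per accelerator.",
    "UEC specifications target multi-path delivery, congestion management, and telemetry.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap (a stand-in for a real vector search)."""
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: len(terms & set(d.lower().split())), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would invoke a hosted or local model here."""
    return f"[model output conditioned on]\n{prompt}"

def rag_answer(question: str) -> str:
    # Retrieve relevant context first, then condition the model's answer on it.
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

print(rag_answer("How many Ethernet ports does each Gaudi 3 accelerator have?"))
```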

The Xeon 6 processors offer a 4x performance improvement and nearly 3x better rack density compared with second-generation Intel Xeon processors, Intel stated.

Taking aim at Nvidia and targeting large-scale AI processing needs, Intel announced the Gaudi 3 AI accelerator chip, which it says is, on average, 40% more power efficient than comparable Nvidia H100 chips. (Pictured above is Intel CEO Pat Gelsinger unveiling the Gaudi 3 accelerator at the Intel Vision event in Phoenix, Ariz.)

“The Intel Gaudi 3 AI accelerator will power AI systems with up to tens of thousands of accelerators connected through the common standard of Ethernet,” Intel stated. For example, each Intel Gaudi 3 accelerator integrates 24 ports of 200-gigabit Ethernet, providing flexible, open-standard networking.
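
As a back-of-the-envelope illustration (simple arithmetic on the figures cited above, not an Intel-published topology), those ports add up to a substantial per-accelerator fabric budget:

```python
# Aggregate Ethernet bandwidth per Gaudi 3 accelerator, derived only from the
# port count and port speed cited above.
ports_per_accelerator = 24
port_speed_gbps = 200                                      # 200-gigabit Ethernet

aggregate_gbps = ports_per_accelerator * port_speed_gbps   # 4,800 Gb/s
aggregate_tbps = aggregate_gbps / 1_000                    # 4.8 Tb/s
aggregate_gbytes_per_s = aggregate_gbps / 8                # 600 GB/s

print(f"{aggregate_gbps} Gb/s = {aggregate_tbps} Tb/s ≈ {aggregate_gbytes_per_s:.0f} GB/s per accelerator")
```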

Intel Gaudi 3 promises 4x more AI compute and a 1.5x increase in memory bandwidth over its predecessor, the Gaudi 2, allowing efficient scaling to large compute clusters while avoiding the vendor lock-in of proprietary networking fabrics, Intel stated.

The idea is that the accelerator can deliver a leap in performance for AI training and inference models, giving enterprises a choice in what systems they deploy when taking generative AI to scale, Katti said.

The Intel Gaudi 3 accelerator will be available to original equipment manufacturers in the second quarter of 2024 in industry-standard configurations of Universal Baseboard and open accelerator module (OAM). Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro are among the vendors that will implement Gaudi 3 in servers and other hardware. General availability of Intel Gaudi 3 accelerators is set for the third quarter of 2024.
