Intel is focusing its efforts on the newly acquired Habana product line, moving to a single hardware architecture and software stack for data-center AI acceleration and winding down work on its Nervana AI chips.

Well, that was short. Intel is ending work on its Nervana neural network processors (NNP) in favor of the artificial intelligence line it gained in the recent $2 billion acquisition of Habana Labs.

Intel acquired Nervana for $408 million in 2016 and shipped its first NNP chip a year later. After the acquisition, Nervana co-founder Naveen Rao was put in charge of the AI platforms group, which is part of Intel’s data platforms group. The Nervana chips were meant to compete with Nvidia GPUs in AI training, and Facebook worked with Intel “in close collaboration, sharing its technical insights,” according to former Intel CEO Brian Krzanich.

For now, Intel has ended development of its Nervana NNP-T training chips and will fulfill existing customer commitments for its Nervana NNP-I inference chips; Habana Labs’ Gaudi and Goya processors will take their place.

There are two parts to neural-network workloads: training, in which the computer learns a task, such as image recognition; and inference, in which the system puts what it was trained to do to work. Training is far more compute-intensive than inference, and it is where Nvidia has excelled.

Intel said the decision was made after input from customers and is part of a strategic update to its data-center AI acceleration roadmap. “We will leverage our combined AI talent and technology to build leadership AI products,” the company said in a statement to me. “The Habana product line offers the strong, strategic advantage of a unified, highly programmable architecture for both inference and training.
By moving to a single hardware architecture and software stack for data-center AI acceleration, our engineering teams can join forces and focus on delivering more innovation, faster to our customers,” Intel said.

This outcome of the Habana acquisition wasn’t entirely unexpected. “We had thought that they might keep one for training and one for inference. However, Habana’s execution has been much better, and the architecture scales better. And Intel still gained the IP and expertise of both companies,” said Jim McGregor, president of Tirias Research.

The good news is that whatever developers created for Nervana won’t have to be thrown out. “The frameworks work on either architecture,” McGregor said. “While there will be some loss going from one architecture to another, there is still value in the learning, and I’m sure Intel will work with customers to help them with the migration.”

This is the second AI/machine-learning effort Intel has shut down; the first was Xeon Phi. Xeon Phi was a bit of a problem child, dating back to Intel’s failed Larrabee experiment to build a GPU based on x86 instructions. Larrabee never made it out of the gate, while Xeon Phi lasted a few generations as a co-processor before being axed in August 2018.

Intel still has a lot of products targeting various AI workloads: Mobileye, Movidius, the Agilex FPGA, and its upcoming Xe architecture. Habana Labs has been shipping its Goya Inference Processor since late 2018, and samples of its Gaudi AI Training Processor were sent to select customers in the second half of 2019.