Nvidia's testing of data-processing units (DPUs) in servers found that the chips free up CPUs to do the work they are designed for and also reduce the servers' overall power draw.

The chip maker says tests of its BlueField-2 DPUs in servers showed significant power savings over servers that don't use the specialized chips to offload tasks from the CPU. DPUs, also known as SmartNICs, take on certain workloads, such as packet routing, encryption, and real-time data analysis, leaving the CPU free to process application data. But Nvidia says they can also reduce power consumption.

The four tests involved running the same workloads on servers with and without DPUs, and Nvidia concluded that even accounting for the additional power drawn by the DPUs, overall power consumption by the servers dropped. For example, one test found that when a DPU took over processing IPsec encryption, the server used 21% less power than when the CPU handled the task alone: 525 watts with the DPU versus 665 watts without.

"I can't speak for others," said Ami Badani, vice president of marketing and developer ecosystem strategy at Nvidia. "But for the workloads that we've tested, if you run those same workloads with a DPU in those servers, you would ultimately need fewer servers to run those same workloads."

In addition to Nvidia, competitors Intel, AMD, and Marvell also make DPUs. (Nvidia gained its BlueField DPU line through its acquisition of Mellanox in 2019.)

The tests were run in cooperation with Ericsson, VMware, and an unnamed North American wireless carrier. In the best case, offloading specific networking tasks to a BlueField DPU reduced power consumption by as much as 34%, or up to 247 watts per server. And that could reduce the number of servers needed in certain data centers, Nvidia says. How much that translates into dollar savings depends on the price of electricity and the power usage effectiveness (PUE) of the data center, Nvidia says.
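The savings arithmetic Nvidia describes can be sketched in a few lines. The wattage figures below come from the test cited above (525 W with a DPU vs. 665 W without); the electricity price and PUE values are illustrative assumptions, not figures from Nvidia.

```python
# Sketch of the power- and cost-savings arithmetic described above.
# The 525 W / 665 W figures are from Nvidia's IPsec test; the $/kWh
# price and PUE below are assumed for illustration only.

HOURS_PER_YEAR = 24 * 365


def power_savings_pct(watts_without_dpu: float, watts_with_dpu: float) -> float:
    """Percentage reduction in server power draw from offloading to a DPU."""
    return 100.0 * (watts_without_dpu - watts_with_dpu) / watts_without_dpu


def annual_dollar_savings(watts_saved: float, price_per_kwh: float, pue: float) -> float:
    """Estimated yearly cost savings per server.

    PUE scales the savings: every watt the server no longer draws also
    avoids the facility overhead (cooling, power distribution) that the
    PUE ratio accounts for.
    """
    kwh_saved = watts_saved / 1000.0 * HOURS_PER_YEAR
    return kwh_saved * pue * price_per_kwh


print(f"{power_savings_pct(665, 525):.0f}% less power")  # the 21% figure above
# Assumed: $0.10/kWh electricity and a PUE of 1.5
print(f"${annual_dollar_savings(140, 0.10, 1.5):,.0f} saved per server per year")
```

Running the sketch reproduces the 21% figure from the IPsec test; the dollar estimate shifts linearly with whatever electricity price and PUE a given data center actually has.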
PUE is the ratio between the total power drawn by a data center and the amount used to power the IT equipment within it.

However, data centers are unlikely to cash in by getting rid of servers, Badani said. "In reality, what will happen is instead of most enterprises saying, 'I'm just going to return five servers that I didn't need,' most folks will repurpose those servers for other workloads," she said.

Still, the power savings could help enterprises with their environmental, social, and governance (ESG) initiatives, Badani said. "Saving cores ultimately means saving servers, so you don't need the capacity that you originally needed for those same workloads," she said.