Project Helix will see Dell and Nvidia combine their hardware and software infrastructure to help enterprises build and manage generative AI models on-premises.

Dell Technologies and Nvidia are jointly launching an initiative called Project Helix to help enterprises build and manage generative AI models on-premises, the companies said Tuesday. The companies will combine their hardware and software infrastructure to support the complete generative AI lifecycle, from infrastructure provisioning through modeling, training, fine-tuning, application development, and deployment, to running inference and streamlining results, they said in a joint statement.

Dell will contribute its PowerEdge servers, such as the PowerEdge XE9680 and PowerEdge R760xa, which are optimized to deliver performance for generative AI training and inferencing. Nvidia's contribution to Project Helix will be its H100 Tensor Core GPUs and Nvidia networking, which will form the infrastructure backbone for generative AI workloads. Enterprises can pair this infrastructure with unstructured data storage, including Dell PowerScale and Dell ECS Enterprise Object Storage, the companies said.

On the software front, Project Helix will offer Nvidia's AI Enterprise software suite, which includes the NeMo large language model framework and NeMo Guardrails software for building secure generative AI chatbots.

Enterprises will be able to take advantage of Project Helix via Dell's Validated Designs offering, which ships proven, tested configurations for particular use cases. The Validated Design based on Project Helix will be made available through traditional channels beginning in July 2023, the companies said, adding that the offering will follow an on-demand, pay-per-use flexible consumption model.
In the last few months, Nvidia has partnered with several technology companies, including Oracle, Google Cloud, and ServiceNow, to provide services for developing AI and generative AI applications. And in March, the chip maker said it would make its DGX Pods, the computing modules that power ChatGPT, available in the cloud.