AI cloud service provider Lambda has scored a $320 million cash infusion to build out its GPU-based services, which provide AI training clusters made up of thousands of Nvidia accelerators.

Lambda is the latest cloud company to offer GPU processing – instead of the standard CPU processing – dedicated to all things AI, particularly training and inference. Vultr, CoreWeave, and Voltage Park all offer similar cloud GPU services.

Lambda is preparing to deploy “tens of thousands” of Nvidia GPUs, including the current top-of-the-line H100 Hopper accelerators as well as Nvidia’s forthcoming H200 GPU accelerators, which are set to double the performance of the H100. Lambda is also looking to deploy Nvidia’s hybrid GH200 CPU/GPU superchips.

Lambda’s stated mission is to build “the #1 AI compute platform in the world,” and to accomplish this, “we’ll need lots of Nvidia GPUs, ultra-fast networking, lots of data center space, and lots of great new software to delight you and your AI engineering team,” the company said in a statement announcing the funding.

The $320 million Series C round was led by a group of investors including B Capital, SK Telecom, and T. Rowe Price Associates, Inc., joined by existing investors Crescent Cove, Mercato Partners, 1517 Fund, Bloomberg Beta, and Gradient Ventures, among others.

“With this new financing, Lambda will accelerate the growth of our GPU cloud, ensuring AI engineering teams have access to thousands of Nvidia GPUs with high-speed Nvidia Quantum-2 InfiniBand networking,” the company said.

This is undoubtedly music to Nvidia CEO Jensen Huang’s ears. He has been pushing the notion of dedicated AI data centers, called AI factories, that are populated entirely with GPUs rather than the x86 CPUs found in traditional data centers. On the earnings call following Nvidia’s most recent blowout quarter, Huang also talked at length about expanding GPU processing into fields beyond AI, a move to muscle in on x86 territory.

Founded in 2012, Lambda has been working with GPU systems since 2017, when it first started to experiment with transformer models. The company offers co-location services designed for dense GPU deployments and also resells access to Nvidia’s DGX SuperPODs. The latter is likely to be Lambda’s bread and butter, since it is much cheaper to rent AI hardware than to purchase and maintain it. That gap is fueling the rise of AI as a service, which lets customers rent time on AI-ready equipment rather than buy their own.

The real challenge for Lambda may be getting the hardware at all. TSMC is making chips as fast as it can, but demand is enormous, and backlogs stretching from weeks to months remain.
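The rent-versus-buy math behind that trend is easy to sketch. The Python snippet below is a minimal back-of-envelope comparison of owning an eight-GPU H100 node versus renting equivalent capacity; every price and parameter in it is an illustrative assumption, not actual Lambda or Nvidia pricing.

```python
# Back-of-envelope rent-vs-buy comparison for an 8x H100 node.
# All figures below are illustrative assumptions, not quotes from
# Lambda or Nvidia; substitute real pricing before drawing conclusions.

PURCHASE_PRICE = 300_000.0      # assumed cost of one 8x H100 server, USD
ANNUAL_OPEX = 60_000.0          # assumed power, cooling, space, staff per year
RENTAL_RATE_PER_GPU_HR = 2.50   # assumed cloud price per GPU-hour, USD
GPUS_PER_NODE = 8
HOURS_PER_YEAR = 24 * 365

def cost_to_buy(years: float) -> float:
    """Total cost of owning the node for `years`."""
    return PURCHASE_PRICE + ANNUAL_OPEX * years

def cost_to_rent(years: float, utilization: float = 1.0) -> float:
    """Total cost of renting the same capacity at a given utilization."""
    gpu_hours = GPUS_PER_NODE * HOURS_PER_YEAR * years * utilization
    return gpu_hours * RENTAL_RATE_PER_GPU_HR

for years in (0.5, 1, 2, 3):
    print(f"{years:>4} yr: buy ${cost_to_buy(years):>10,.0f}   "
          f"rent ${cost_to_rent(years):>10,.0f}")
```

Under these assumed numbers, renting stays cheaper for years of continuous use, and the case only improves for the bursty, weeks-long training runs most teams actually do – which is the bet Lambda and its rivals are making.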