Helping the startup build an independent system to create foundation models may help solidify Dell's spot alongside cloud computing giants in the race to power AI.

Dell Technologies has agreed to invest $150 million to build a new high-performance computing cluster for Imbue, an independent AI research company that is one of only a few building its own foundation models on its own computing cluster.

Imbue is already using the cluster, which is powered by Dell PowerEdge XE9680 servers with Nvidia H100 Tensor Core GPUs, to train AI models and to develop early prototype agents that can correct bugs in code and analyze lengthy documents. The company is a standout among independent AI labs in that it not only develops its own foundation models but also trains them to have more advanced reasoning capabilities, according to Dell.

The AI race among tech giants

Since the launch of the generative AI chatbot ChatGPT about a year ago, the top tech giants have been clamoring to establish themselves as power players in the rapidly growing AI space. Microsoft, Google, and Amazon have all partnered with AI startups through investments and deals to provide the cloud computing infrastructure that powers AI models.

While Dell has neither the star power nor the market capitalization of those rivals, it does have a solid hardware business that is now helping to build what could be some of the most advanced AI models to date, putting the hardware company squarely at the forefront of AI research.

"The purpose of technology is to drive human progress, and this often begins at the research level," Jeff Boudreau, chief AI officer at Dell, said in a press statement. "Dell technology will provide Imbue with the powerful engine to help unearth the next generation of impactful AI innovation."

Maintaining independence

The decision to go with Dell rather than a cloud computing platform from Microsoft, Google, or Amazon to develop its AI models sets Imbue apart from other AI research startups. Developing AI models consumes a significant amount of computing power, and building and managing server clusters that can do this kind of heavy lifting is no small task. Imbue's system is managed by Voltage Park, a cloud computing provider that builds solutions for machine learning.

Josh Albrecht, Imbue's CTO, said in comments published in an online report that the company didn't want to be locked into a technology provider, which would have happened had it gone with a cloud service provider such as Google or Amazon. "This allows us to remain independent," he said. Dell also helped the company deploy a custom cluster "much more quickly than other providers could have," Albrecht said in a press statement.

Imbue and Dell designed the system as a set of smaller clusters that support rapid experimentation on novel model architectures and can be quickly networked together into one large cluster, which will let the startup efficiently train large-scale foundation models, the companies said.

Imbue has big plans for the high-level reasoning it envisions for its AI, including capabilities such as knowing when to ask for more information, analyzing and critiquing its own outputs, and breaking a difficult goal down into a plan and then executing it. In the long term, the company expects this to yield more capable, trustworthy AI agents that don't require constant supervision from their users.
This opens the door to a not-so-distant future where, for example, AI agents can plan a vacation on their users' behalf rather than simply generate travel ideas, according to Dell. The idea is to let technology handle more of people's basic tasks for them, freeing them to spend their downtime however they'd like, the company said.
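To make the plan-then-execute idea concrete, here is a minimal, hypothetical Python sketch of the kind of agent loop the article describes: the agent breaks a goal into steps, asks for more information when a step needs it, and critiques its own output before accepting it. None of the function names or logic below come from Imbue or Dell; the functions are stand-ins for model calls and are shown only as an illustration of the concept, not as anyone's actual implementation.

```python
# Hypothetical sketch of a plan-then-execute agent loop.
# All names here are illustrative stand-ins, not Imbue's code;
# a real agent would replace plan/execute/critique with model calls.

from dataclasses import dataclass


@dataclass
class Step:
    description: str
    needs_user_input: bool = False


def plan(goal: str) -> list[Step]:
    # Stand-in for a model call that breaks a difficult goal into smaller steps.
    return [
        Step(f"outline an approach for: {goal}"),
        Step("gather missing details", needs_user_input=True),
        Step("produce and self-check the result"),
    ]


def execute(step: Step) -> str:
    # Stand-in for a model call that carries out one step.
    return f"completed: {step.description}"


def critique(output: str) -> bool:
    # Stand-in for the agent analyzing and critiquing its own output.
    return output.startswith("completed")


def run_agent(goal: str) -> list[str]:
    results = []
    for step in plan(goal):
        if step.needs_user_input:
            # "Knowing when to ask for more information" rather than guessing.
            results.append(f"asking user about: {step.description}")
            continue
        output = execute(step)
        if not critique(output):
            output = execute(step)  # retry once after failed self-critique
        results.append(output)
    return results


if __name__ == "__main__":
    for line in run_agent("plan a week-long vacation"):
        print(line)
```

Running the script prints one line per step, showing where the hypothetical agent would execute work on its own and where it would pause to ask its user for input instead of proceeding unsupervised.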