The demands of AI are too much for current IT infrastructure, according to 42% of respondents to an Equinix survey.

As adoption of artificial intelligence (AI) technology accelerates, IT organizations are concerned that their existing infrastructure isn’t powerful enough to keep up.

AI hardware – especially training hardware – is becoming increasingly power-hungry, according to Equinix, which just released its 2023 Global Tech Trends Survey. The power draw from traditional racks in a data center is between 5 kW and 10 kW per rack. But, increasingly, newer generations of GPU-based racks are pushing power draws north of 30 kW per rack, and in some cases as high as 72 kW per rack, according to Kaladhar Voruganti, senior technologist at Equinix.

“So, definitely, it’s very hard to host this type of infrastructure in private data centers,” he said.

In its research, Equinix found that 85% of the 2,900 IT decision-makers polled are already using AI or planning to use it across multiple key functions. Organizations are most likely to be using AI, or planning to do so, in IT operations (85%), cybersecurity (81%), and customer experience (79%). However, 42% of IT leaders surveyed believe their IT infrastructure is not fully prepared to meet the demands of AI technology.

Power capacity isn’t the only concern. Cooling, too, is affected by AI’s climbing power demands. Traditional air cooling with heatsinks and fans works effectively only up to roughly 30 kW per rack. Beyond that, the fans simply can’t keep up, and liquid cooling is required. Most data centers are not equipped for liquid cooling and would have to be retrofitted at considerable expense.

Along with hardware shortcomings, many companies are dealing with a shortage of skilled IT professionals. There has always been a shortage of people with analytics skills, for example, and that has only been exacerbated by the explosion in popularity and use of AI.

AI as a service

Companies lacking the proper hardware to do AI training have two options: make a massive investment in hardware or turn to cloud service providers for AI-as-a-service, which most of the top cloud service providers now offer. Rather than make the million-dollar investment in hardware, an enterprise could upload the data to be processed and let the cloud service provider do the heavy lifting, then take the trained models back when the processing is done (a simple sketch of that workflow appears below).

Customers often will opt for end-to-end solutions from AI vendors in the cloud, especially initially, “because they make it easy for the customers with a simple button,” Voruganti said. But variable cloud costs – which enterprises incur with each read or write to cloud-based data, or with every data extraction, for example – may cause IT teams to reconsider that approach.

Voruganti said he’s seeing companies choose to place foundation models with different cloud service providers based on their areas of expertise. One provider might offer strong vision-based models, for example, while another might have better general language or large language models.

“There is increasingly desire for people to leverage these models from different clouds,” Voruganti said.
“And [if they can maintain their own control over their data], like a neutral location, then they can bring the model to where the data is, and then customize it.”
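The AI-as-a-service workflow described above (upload training data, let the provider run the training job on its GPU fleet, then pull the trained model back in-house) can be pictured with a short sketch. The Python below is a minimal illustration under stated assumptions: the aiservice module, its Client, and every method and parameter shown are hypothetical placeholders standing in for whatever SDK a given cloud provider actually offers, not any vendor’s real API.

# Minimal sketch of an AI-as-a-service training workflow.
# NOTE: "aiservice" and all of its classes/methods are hypothetical
# placeholders, not a real vendor SDK. Real providers expose the same
# three steps (upload data, run a managed training job, fetch the model)
# under their own names.
from pathlib import Path

import aiservice  # hypothetical provider SDK


def train_in_cloud(dataset_path: Path, base_model: str) -> Path:
    """Upload data, run a managed training job, and download the result."""
    client = aiservice.Client(region="us-east-1")  # placeholder region

    # 1. Upload the training data to provider-managed storage,
    #    rather than buying GPU hardware to process it locally.
    dataset_ref = client.upload_dataset(dataset_path)

    # 2. Let the provider do the heavy lifting on its accelerator fleet.
    job = client.create_training_job(
        dataset=dataset_ref,
        base_model=base_model,        # e.g. a provider-hosted foundation model
        instance_type="gpu.large",    # placeholder instance class
    )
    job.wait_until_complete()

    # 3. Take the trained model artifacts back when processing is done.
    output = Path("artifacts") / f"{base_model}-tuned.tar.gz"
    job.download_model(output)
    return output


if __name__ == "__main__":
    train_in_cloud(Path("data/train.csv"), "example-vision-model")

In practice, the per-operation charges incurred in steps 1 and 3 (every upload, read, and extraction) are exactly the kind of variable cloud costs that, per Voruganti, push some teams to instead bring the model to a neutral location where their data already lives.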