AI, machine learning, and high-performance computing are creating cooling challenges for data center owners and operators. As rack densities increase and temperatures rise, more data centers are finding ways to add liquid cooling to their facilities.

Growing adoption of artificial intelligence and other power-intensive workloads, along with regulatory pressure to reduce energy consumption, is driving a slow but steady transition to liquid cooling in data centers. Today, 22% of data centers are using liquid cooling, according to IDC analyst Sean Graham. A decade of growth is anticipated: The global data center liquid cooling market was estimated at $2 billion in 2022 and is expected to grow at a compound annual growth rate of 15% between 2023 and 2032, according to Global Market Insights.

Reasons for the spike in interest include:

- Data centers are energy hogs: According to the International Energy Agency (IEA), data center electricity consumption in the U.S. is expected to increase from around 200 TWh in 2022, approximately 4% of the country’s electricity demand, to almost 260 TWh in 2026, when it will account for 6% of total electricity demand.
- Cooling accounts for a sizeable portion of energy use: Cooling requirements account for 40% of a data center’s electricity demand, according to the IEA.
- IT is more on the hook for consumption: Uptime Institute found that 33% of data center owners and operators are very concerned about improving energy efficiency for facilities equipment – higher than any other issue. As sustainability becomes an increasingly important concern, the percentage of companies implementing a data center infrastructure sustainability program will rise from about 5% in 2022 to 75% by 2027, Gartner predicts.
- Server rack density has been increasing: Although a typical server rack draws between 4 and 10 kilowatts, densities in high-performance environments can reach 20 to 30 kilowatts, according to McKinsey. Density is also a consideration for edge computing, where space is at a premium.

Air cooling limits

Air cooling starts to reach its limits once you get over 20 kilowatts per rack. Typical air-cooling systems top out at 15 to 20 kilowatts, and row-based cooling with containment goes up to 30 kilowatts, while liquid cooling supports rack densities of 20 kilowatts and up, according to IDC’s Graham. Around 25% of all data centers had racks of 20 kilowatts or above in 2022 – and 5% had racks with densities over 50 kilowatts, according to the Uptime Institute. And the larger the data center, the more likely it is to see an increase in rack power.

Some data center operators can’t use dense racks because they can’t cool them with air, says Zachary Smith, board member of the Sustainable and Scalable Infrastructure Alliance and former global head of edge infrastructure services at Equinix. Instead, they distribute that same workload across multiple racks, he says. As AI workloads proliferate, this is creating real problems.

Generative AI, in particular, has been moving extremely fast, bringing rapid changes to the industry. “And data centers are built as infrastructure that lasts for decades,” Smith says. “So, you’ve got a mismatch. The cycles of the data center industry versus the computing side are not in sync.”

AI workloads also require moving a lot of data quickly, which leads to efforts to consolidate computing power. “So, data centers are pushed to be more dense,” Smith says.
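To see why air runs out of headroom around those densities, a rough back-of-the-envelope estimate helps. The sketch below is illustrative only and not from the article: it assumes textbook properties for air and a 15°C server inlet-to-outlet temperature rise, and applies heat load = mass flow × specific heat × temperature rise to estimate how much airflow a rack needs.

```python
# Back-of-the-envelope estimate (illustrative, with assumed values):
# how much air does it take to carry away a given rack heat load?
# Uses Q = m_dot * c_p * delta_T, i.e. mass flow * specific heat * temp rise.

AIR_DENSITY = 1.2   # kg/m^3, air at roughly room temperature and sea level
AIR_CP = 1005.0     # J/(kg*K), specific heat of air at constant pressure
DELTA_T = 15.0      # K, assumed server inlet-to-outlet temperature rise

def airflow_for_rack(load_kw: float) -> float:
    """Volumetric airflow in m^3/s needed to remove load_kw of heat."""
    mass_flow = (load_kw * 1000.0) / (AIR_CP * DELTA_T)  # kg/s
    return mass_flow / AIR_DENSITY                       # m^3/s

for kw in (10, 20, 30):
    m3s = airflow_for_rack(kw)
    cfm = m3s * 2118.88  # 1 m^3/s is about 2,119 cubic feet per minute
    print(f"{kw:>2} kW rack -> ~{m3s:.1f} m^3/s (~{cfm:,.0f} CFM) of air")
```

At 30 kilowatts that works out to roughly 3,500 cubic feet of air per minute through a single rack, which helps explain why conventional air-cooled rooms top out around the 20 to 30 kilowatt figures cited above.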
Hybrid cooling likely to dominate

Of course, switching over an entire data center is very costly – and usually unnecessary. There are still plenty of workloads that do just fine with air cooling. As a result, most data centers that deploy liquid cooling in the near future will probably do so in a hybrid way.

This is particularly true for multi-tenant facilities, says Smith. “Even if you were to build a brand-new data center today, if you’re in a multi-tenant environment like Equinix and Digital Realty, you don’t get to dictate what your customers are bringing in,” he says. “So, you really have to prepare for a diversity of densities and technologies.”

Single-tenant data centers, such as those run by hyperscalers or large enterprises, can dictate their own infrastructure, he says. But data centers typically have long lifespans, and companies don’t want to tear out expensive equipment before the end of its lifecycle. That means most data center operators will be adopting a practical approach to liquid cooling, rather than a purist one, Smith says.

Today, most data centers are cooled with air, using a hierarchy of fans that moves air across individual chips, through servers, and, eventually, out of the building. It’s a loud, costly, and inefficient system. Liquids can transfer heat more efficiently than air, says Smith. “That’s why in your car you have a radiator with some sort of liquid to move heat,” he says. “It’s more efficient.”

Rear door liquid cooling

There are several ways to add liquid cooling to an existing air-cooled data center. One of the simplest is to install rear door heat exchangers on individual server racks. A heat exchanger at the back of the rack delivers the cold liquid and removes the hot liquid. “It replaces the door at the back,” says Smith. “You don’t touch the servers.”

A more advanced version can replace the cooling unit for an entire cage or row of server racks. “That’s where you can pipe liquid to all the racks,” he says. “This doesn’t require upgrading the whole building, just a rack or a cage. One tenant can be liquid cooled with rear door heat exchangers, and the next tenant can be completely air cooled.” This approach creates the least disruption for the IT environment, he says.

According to IDC’s Graham, rear door heat exchangers can handle rack densities of 20 to 80 kilowatts per rack. “Rear door heat exchangers are great products for retrofitting existing data centers for high density workloads,” he says. This is the liquid cooling technology Graham sees most often, with the biggest challenge being the additional plumbing and piping that’s required. “It hits the sweet spot of cooling data center racks without doing any modifications to the servers,” he says. “That’s what makes it very appealing to folks implementing high density racks. It does require new piping, but in terms of relative effort, it’s the path of least resistance.”

Direct-to-chip liquid cooling

Direct-to-chip cooling systems are even more efficient, according to Graham, able to handle rack densities from 50 to 100 kilowatts. This approach brings the liquid all the way inside the individual server, but it requires changes to the computing hardware and is complex to set up and maintain. “You take out the heat sink and put a plate on it, and the plate is fed by a liquid mechanism,” says Smith.

The liquid can stay a liquid, or it can change into vapor; the latter is known as a dual-phase, or two-phase, system. Different liquids have different boiling points, Smith says.
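Smith’s radiator analogy can be put in rough numbers. The comparison below is a minimal illustration, not from the article; it uses standard textbook properties for water and air, whereas the coolants actually used in direct-to-chip and immersion systems are different fluids, but the order of magnitude is the point.

```python
# Illustrative comparison (assumed textbook values, not from the article):
# how much heat can equal volumes of water and air carry per degree of
# temperature rise? Volumetric heat capacity = density * specific heat.

WATER = {"density": 998.0, "cp": 4186.0}  # kg/m^3, J/(kg*K)
AIR   = {"density": 1.2,   "cp": 1005.0}  # kg/m^3, J/(kg*K)

def volumetric_heat_capacity(fluid: dict) -> float:
    """Joules of heat carried per cubic meter of fluid per 1 K of warming."""
    return fluid["density"] * fluid["cp"]

ratio = volumetric_heat_capacity(WATER) / volumetric_heat_capacity(AIR)
print(f"Per unit volume and degree of warming, water carries ~{ratio:,.0f}x "
      "more heat than air.")
```

A ratio in the thousands is why a modest coolant loop can do the work of a wall of fans for the same temperature rise, whether the liquid is piped to a rear door coil or all the way to a cold plate on the chip.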
In a single-phase system, the liquid is pumped through a loop to remove the heat from the servers. In a dual-phase system, it moves by itself. “Gas travels faster than liquid, creating a self-regulating pressure,” says Smith. “If you have a leak, it won’t flow the other way – and it uses less energy because you’re not running pumps.” And the hot vapor can do double duty for generating energy, for heating, or for other uses in the building, he says.

Immersion cooling

Another approach to liquid cooling is immersion cooling. “That’s where you take the computer and submerse it in a vat of liquid, a non-conductive liquid like oil,” says Smith. “It looks like a deep fryer. You’re putting a server into a deep fryer.” The specific liquids used are ones that conduct heat but aren’t flammable, he adds.

“This got popularized with cryptominers,” he says. “But it’s very heavy – a lot of data centers aren’t meant to handle this much weight.” There’s also the problem of what to do if there’s a leak, he says. “And how would you plug in the network cable? And where does the switch go? All kinds of other things make it a niche application right now.”

There’s a lot of opportunity for manufacturers to rethink the fully immersed form factor. “Maybe they could look like Nintendo cartridges that are encased in liquid, but you never see the liquid,” Smith says. “There are new form factor designs that are fully immersed without vats of oil sitting around.”

According to IDC’s Graham, immersion cooling is the most efficient of the three approaches, able to handle densities from 50 to 250 kilowatts per rack.

For brand-new, fully liquid-cooled data centers, all the air cooling infrastructure can be eliminated, saving space and money, says Joe Capes, CEO at LiquidStack, an immersion cooling vendor. In a hybrid approach, the immersion cooling systems can be used in a new high-density zone or in a modular deployment, he says. The benefit is three times the compute density in the same amount of space.

Maxim Serezhin, founder and CEO at Standard Power, a colocation company, says that liquid cooling is now a key differentiator in the market for his firm. Standard Power uses technology from LiquidStack, he says, for both immersion and direct-to-chip cooling. Customers using liquid cooling include those with AI, high-performance computing, crypto, and other data-intensive workloads, he says. “We help them adopt the liquid cooling technology that best suits their needs.”

Obstacles to adoption

There are still some substantial obstacles to the adoption of liquid cooling. The lack of standards is one of the primary ones, along with safety worries and a lack of training. “There’s a lot of opportunity here for standards, and that is what we’re working on at SSIA [the Sustainable and Scalable Infrastructure Alliance],” says Smith.

For example, a particular chemical might be permitted in data center cooling in the United States, but not in Germany. And different vendors might use couplers of different sizes with different mechanisms. “Everybody can put anything they want in data centers,” says Smith. “There are no standards about shape, size, form factor, where you put cables. There’s no standard that server manufacturers can plug into in an efficient way.” The same is true for liquid cooling, he says. “Is it on the left side or the right side? Does it snap in or does it screw in?” Instead, there are multiple incompatible vendor solutions, he says.
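Pulling the quoted figures together: conventional air cooling tops out around 20 to 30 kilowatts per rack, rear door heat exchangers cover roughly 20 to 80, direct-to-chip roughly 50 to 100, and immersion roughly 50 to 250. The sketch below is purely illustrative and hypothetical, not a sizing tool; the helper function and its thresholds simply encode the ranges quoted in this article.

```python
# Hypothetical helper: map a rack's power density to the cooling approaches
# whose quoted range covers it. The thresholds are the figures cited in this
# article (IDC's Sean Graham and others); illustrative only, not a sizing tool.

RANGES_KW = {
    "air (typical)":               (0, 20),
    "air (row-based containment)": (0, 30),
    "rear door heat exchanger":    (20, 80),
    "direct-to-chip":              (50, 100),
    "immersion":                   (50, 250),
}

def candidate_cooling_approaches(rack_kw: float) -> list[str]:
    """Return every approach whose quoted density range covers rack_kw."""
    return [name for name, (lo, hi) in RANGES_KW.items() if lo <= rack_kw <= hi]

for kw in (8, 25, 60, 120):
    print(f"{kw:>3} kW rack: " + "; ".join(candidate_cooling_approaches(kw)))
```

For loads in the 50 to 100 kilowatt range, more than one approach fits, which is exactly where questions of standards, plumbing, and day-to-day operations end up deciding the choice.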
A common set of standards would accelerate the deployment of liquid cooling, but that’s not the only obstacle to adoption. Another is a lack of trained data center staffers who can handle the technology. “From a data center operator perspective, the big question isn’t ‘Do we support liquid?’ but ‘Are we comfortable with the operational side?'” Smith says. “What is the regulatory side where you are? How do you communicate with customers about what types of liquids you allow in? What happens if it breaks – is it your SLA or their SLA?”

Another issue is that of physical space, says Holger Mueller, analyst at Constellation Research. “Traditionally, you have rows of servers open in the front, and all the important stuff is in the back, where you can put in some infrastructure and do some cooling,” he says. “But the space there wasn’t set up to put in another cooling system.”

Under the sea

Some companies have experimented with full immersion. Literal full immersion – in the ocean. Since more than half of the world’s population lives within 120 miles of a coast, putting data centers underwater could, potentially, revolutionize the industry.

In 2018, Microsoft sank an entire data center 117 feet deep into the sea off the coast of Scotland. Two years later, Microsoft pulled the data center back up and discovered that its underwater servers were eight times more reliable than those on land, possibly because the atmosphere inside was nitrogen rather than air, and there weren’t any people around to bump into things and jostle components. Since then, though, there’s been no further news from Microsoft on the subject, possibly because of the logistical challenges of putting a data center under water.

Other companies, including Subsea Cloud and Chinese company Highlander, are also getting into the game. Subsea planned to have its first commercial data centers running by the end of 2022 but, as of this October, it still hadn’t sunk any racks. Highlander, however, opened its first commercial undersea data center in Hainan in late 2022, with China Telecom one of its first customers.

The problem with underwater data centers, says Smith, is that people have to be able to get into them to switch out equipment. “So it hasn’t been too popular. Whether the data center is under the ocean or up in space – or even just in Wisconsin – we don’t have good mechanisms as an industry for efficiently installing and removing equipment.”