The nature of their design makes CPUs run hotter than ever, and one AMD executive says heat density is unlikely to decrease with future chips.

CPU temperatures are a sore spot for enterprises and enthusiasts alike, and new releases from the big three chipmakers (Intel, AMD, and Nvidia) continue the trend of soaring temperatures and high power draws. And it looks like it’s going to stay that way.

At an event held in Korea for AMD’s client CPU products, David McAfee, vice president and general manager of client channel business, was asked about the high temperatures of the current Ryzen chips, which run very hot even though they consume less power than their Intel counterparts. McAfee said AMD and its manufacturing partner TSMC are putting a lot of effort into process technology.

“As more advanced processes are used in the future, we believe that the current phenomenon of high heat density will be maintained or further intensified. Therefore, it will be important to find a way to effectively eliminate the high heat density generated by such high-density chiplets in the future,” he said (translated from Korean).

The discussion centered on client products, but it is relevant to the server side as well. AMD uses a chiplet design, with multiple smaller dies joined by a high-speed interconnect, rather than one large piece of silicon. Because they are separate physical chips packed onto a single package, they inherently generate more concentrated heat.

Heat is becoming an increasingly problematic issue for data center operators. Heat in general, and increased density in particular, are forcing data center operators toward different types of cooling, such as liquid cooling, because traditional air cooling is simply not sufficient anymore.

Design is also a factor. As transistors get smaller and more of them are packed into the same space as before, heat is a natural byproduct. It’s just physics, as one AMD contact told me.
This problem will inevitably show up on the server side. High-end desktop products from both Intel and AMD all but demand liquid cooling; air cooling is simply not viable anymore. And those are desktop processors, which have six, eight, or maybe 12 cores. AMD is making server processors with 192 cores. That means a whole lot more transistors packed into a small space that need to be cooled.

Instinct MI300 now shipping

AMD’s answer to Nvidia’s data center GPUs is the Instinct line of GPUs repurposed for high-performance computing and artificial intelligence. The world’s fastest supercomputer, Frontier, is powered by AMD Epyc CPUs and Instinct MI250 GPU cards, and it delivers more than one exaflop of performance.

Now AMD informs us that the successor to the MI250, the MI300A, has begun shipping to help build the Department of Energy’s El Capitan exascale supercomputer, located at Lawrence Livermore National Laboratory in Northern California. The announcement was made by CEO Lisa Su on a recent call with Wall Street analysts to discuss AMD’s quarterly earnings. El Capitan is expected to hit 2 exaflops when completed next year.

The MI300 was announced in January at CES, of all places. There are two products in the family, the MI300A and the MI300X. The MI300A is a combined CPU+GPU, much like Nvidia’s Grace Hopper superchip. All told, it consists of nine 5nm chiplets stacked on top of four 6nm I/O chiplets, for a total of 146 billion transistors, making it one of the biggest chips ever made.

The MI300X is strictly a GPU product and is meant for cloud acceleration, similar to Nvidia’s Hopper H100 GPU accelerator. It has a power draw of 750W and includes eight GPU chiplets, 192GB of HBM3 memory, and 5.2TB/s of memory bandwidth.

Su said AMD expects its data center GPU revenue to be $400 million in the fourth quarter of 2023, with that number topping $2 billion in 2024.
Contrast that with Nvidia, which did $10 billion in data center sales in the last quarter alone, and it’s pretty obvious that this is a one-horse race. For now, AMD still has bragging rights: it has the fastest supercomputer in the world, and it will likely have the fastest when El Capitan goes live, too.