Benchmarks aren't the same as real-world use, but they can give a good idea of what's to come, and Nvidia's Hopper GPU performance is impressive.

Nvidia has released performance data for its forthcoming Hopper generation of GPUs, and the initial benchmarks are tremendous. The metrics are based on MLPerf Inference v2.1, an industry-standard benchmark suite that measures how fast a trained machine-learning model can run inference against new data. Nvidia claims its Hopper-based H100 Tensor Core GPUs delivered up to 4.5x greater performance than its previous-generation A100 Ampere GPUs.

(Read more about Hopper: Nvidia unveils a new GPU architecture designed for AI data centers)

It's a remarkable jump in just one generation. For comparison, CPU benchmark scores often grow only 5% to 10% from one generation to the next.

Nvidia's performance leap comes with a caveat, however. The 4.5x boost came on a single benchmark; six benchmarks were run in total, and the others yielded improvements of 2x or less. Still, a doubling of performance in one generation is impressive.

The top gain came on the BERT-Large benchmark, which measures natural-language processing using the BERT model developed by Google and used in, among other things, Google's search engine. Nvidia attributes the BERT result to Hopper's Transformer Engine, hardware built specifically to accelerate the training of transformer models.

Ampere isn't the only older Nvidia technology getting trounced. The company also benchmarked Jetson AGX Orin, its Ampere-based SoC for robotics and edge systems and the replacement for the Jetson AGX Xavier processor. In those tests, Orin ran up to 5x faster than Xavier while delivering an average of 2x better energy efficiency.

But I'm not writing the Ampere A100's obituary just yet. Thanks to improvements in Nvidia's AI software, the company says MLPerf figures for the A100 have improved 6x since the chip was first benchmarked two years ago.

Orin is available now.
Hopper, which was first introduced in March, is due later this year.
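For readers curious how headline figures like "up to 4.5x" get distilled from a suite of results: each benchmark yields a throughput (or latency) number for each chip, the per-benchmark speedup is the ratio of the two, and a suite-wide summary is conventionally a geometric mean of those ratios. The sketch below illustrates the arithmetic only; the benchmark names and throughput numbers are invented for illustration, not actual MLPerf Inference v2.1 data.

```python
# Illustrative only: the throughput figures below are made up, not MLPerf data.

# Hypothetical throughput (samples/sec) per benchmark for an older and a
# newer accelerator.
old_chip = {"bert-large": 1000, "resnet-50": 2000, "rnn-t": 1500}
new_chip = {"bert-large": 4500, "resnet-50": 3800, "rnn-t": 3000}

def speedups(old, new):
    """Per-benchmark speedup: new throughput divided by old throughput."""
    return {name: new[name] / old[name] for name in old}

def geometric_mean(ratios):
    """Geometric mean, the usual way to summarize a set of speedup ratios
    (an arithmetic mean would overweight the biggest single win)."""
    product = 1.0
    for r in ratios:
        product *= r
    return product ** (1.0 / len(ratios))

ratios = speedups(old_chip, new_chip)
print(ratios)            # per-benchmark ratios, e.g. 4.5x on bert-large
print(geometric_mean(list(ratios.values())))
```

Note how the geometric mean lands well under the 4.5x headline number: one standout benchmark lifts the peak figure, while the suite-wide average stays closer to the 2x range, which mirrors the pattern in Nvidia's results.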