Intel is looking to offer a unified CPU, GPU, and platform architecture that Nvidia doesn’t have but AMD does. Credit: Martyn Williams/IDG

Third time’s the charm? Intel is hoping so. It released details of its Xe Graphics Architecture, with which it plans to span use cases from mobility to high-performance computing (HPC) servers – and which it hopes will succeed where its Larrabee GPU and Xeon Phi manycore processors failed.

It’s no secret Intel wants a piece of the HPC action, given that it introduced the chip and other products at its Intel HPC Developer Conference in Denver, Colo., this week just ahead of the Supercomputing ’19 trade show.

Intel bills the Xe Graphics Architecture card as its first “exascale graphics card,” based on a new 7nm design called “Ponte Vecchio.” Intel is splitting the Xe Architecture into three designs for different segments: cards for data-center, consumer graphics, and AI use cases; integrated graphics for processors; and the high-tier Xe HPC for high-performance computing.

Rather than use one large die for its graphics chips the way Nvidia and AMD do, Intel is using a Multi-Chip Module (MCM) design that breaks one big chip into smaller “chiplets” connected via a high-speed fabric. This is how AMD designed its Ryzen and EPYC CPUs, an approach Intel initially pooh-poohed but has since adopted for its Xeons.

These modules also use other packaging advancements: Foveros 3D chip-packaging technology, which allows for 3D stacking of dies and mixing of CPU, AI, and GPU processors; High Bandwidth Memory (HBM); and Embedded Multi-die Interconnect Bridge (EMIB) technology to tie the HBM packages to the compute die. Xe Architecture cards will also come with a new scalable fabric called Xe Memory Fabric (XEMF), which ties compute and memory together with a coherent memory interface that Intel claims will allow Xe to scale to thousands of nodes.
Kevin Krewell, principal analyst with Tirias Research, noted that this is not a brand-new graphics architecture; it’s an evolution of the integrated GPU technology that has been part of Intel’s consumer Core CPUs for several years and has been steadily maturing.

“This is a design that is more like traditional graphics. [Intel is starting] with their traditional integrated graphics cores and building on top of that. Larrabee tried a GPU built on a CPU. In this case they are building a ground-up GPU with GPU-like features and not trying to do anything too weird. And now they’ve got a real GPU guy running the group,” he said.

Krewell is referring to Raja Koduri, senior vice president of the company’s Core and Visual Computing Group. Koduri has one hell of a resume. He was the brains behind AMD integrating CPU and GPU cores on one die (yet another example of AMD leading) and later worked at Apple. So if Intel can’t get graphics right with this guy, there is no hope.

Taking Aim At CUDA

One thing that has been a huge boon to Nvidia is its CUDA language for programming GPUs. Intel is taking aim at CUDA – and then some – with its new OneAPI programming model, which Intel designed to simplify programming across not only its GPUs but its CPUs, FPGAs, and AI accelerators as well. This means applications can move seamlessly between Intel’s different types of compute architectures: if an application is best processed on an FPGA, it will be processed there, and the same goes for CPUs, GPUs, and AI accelerators. If that’s not enough, Intel has the Data Parallel C++ Conversion Tool to take CUDA code and port it to OneAPI. If Intel pulls this off, it would be a huge advantage over Nvidia, since CUDA runs only on Nvidia’s GPUs. Interestingly, Intel said OneAPI will be open source and will also work with other vendors’ hardware, although it didn’t say whose. If it ends up ported to AMD’s platform, well, that would be entertaining.
“It’s more than a shot at CUDA because they want to replace CUDA,” said Krewell. “OneAPI is a very ambitious program, trying to combine all of the different processor elements under one umbrella API. So it’s a very aggressive program and they are building it out piece by piece. But right now it’s at version 0.5. CUDA is at version ten. So they’ve got a ways to catch up.”