GPUs have overshadowed FPGAs in terms of AI use potential, but that may change with technology like DeepSeek. Credit: Timofeev Vladimir / Shutterstock

In a world gone mad over GPUs for AI acceleration, FPGA processors are seeing limited adoption in the data center for genAI workloads; their primary use remains in embedded markets and edge AI workloads.

There have been some extremely expensive FPGA-related acquisitions: Intel paid $16.7 billion for FPGA maker Altera in 2016, while AMD spent upwards of $35 billion in 2020 to acquire Xilinx. Whether either company is getting a return on its investment is debatable.

For the fourth quarter, revenue for AMD's embedded segment (where the FPGA business is slotted) came in at $923 million, down 13% from the year-earlier quarter. For the full year, segment revenue was $3.6 billion, down 33% from the prior year.

Intel didn't fare much better. For the fourth quarter, Altera delivered revenue of $429 million, up 4% sequentially. For Q1, Intel expects Altera revenue to be down sequentially.

FPGAs, notable because they can be reprogrammed for new processing tasks, seem to have lost their luster in the mania around generative AI. GPUs are all the rage, or in some cases, custom silicon designed specifically for inferencing.

FPGAs certainly have their uses. Both Intel and AMD use their FPGAs in high-end networking cards. Other uses include industrial robotics and factory automation; healthcare, for surgical robots and medical diagnostic equipment; and automotive, for ADAS safety systems.

But AI, by and large, is not happening on FPGAs. "I think AI and genAI helped kind of push away focus from leveraging [FPGA]. And I think there were already moves away from it prior to [the genAI revolution] that put the pedal to the metal in terms of not looking at the FPGAs at the high end.
I think now it's [all about] DeepSeek and is kind of a nice reset moment," said Alvin Nguyen, senior analyst with Forrester Research.

The DeepSeek effect

One reason DeepSeek AI rattled Wall Street so hard is that the Chinese company achieved performance comparable to ChatGPT and Google Gemini without billions of dollars' worth of Nvidia chips. It did so using commercial, consumer-grade cards that are considerably cheaper than their data center counterparts. That means all might not be lost when it comes to FPGAs.

"After DeepSeek showing that you could use lower power devices that are more commonly available, [FPGA] might be valuable again," said Nguyen. But, he adds, "it's not going to be valuable for all AI workloads like the LLMs, where you need as much memory, as much network bandwidth, as much compute, in terms of GPU as possible."

Nguyen suggests that DeepSeek shows you don't necessarily need billions of dollars of cutting-edge Nvidia GPUs; you can get away with an FPGA or a CPU, or use consumer-grade GPUs. "I think that's kind of a nice 'aha' moment from an AI perspective, to show there's a new low bar that's being set. If you can throw CPUs with a bunch of memory, or, in this case, if you can look at FPGAs and get something very purpose-built, you can get a cluster of them at lower cost."

But Bob O'Donnell, president and chief analyst with TECHnalysis Research, disagrees with the comparison. "FPGAs are used in a whole bunch of different applications, and they're not really a one-to-one compare against GPUs. They're kind of a different animal," he said.

The problem with FPGAs has always been that they're extraordinarily hard to program and they're extremely specialized, so very few people really know how to leverage them. But for the people who do, there's no replacement, and they're not typically used for the same kinds of tasks that GPUs are, he said.

The jury is still out on whether Intel got its money's worth.
But O'Donnell feels that AMD did, because the neural processing unit (NPU) for AI acceleration in AMD's CPUs comes from Xilinx technology. "That was the idea, to take some of the IP that Xilinx had and help integrate it into a PC. In fact, AMD was the first to have any kind of NPU. They were way ahead of the game," O'Donnell said.

"We've said countless times that we're in the beginning stages of this genAI movement and that AI isn't a one-size-fits-all concept for compute," said an AMD spokesperson. "Our FPGA IP powers our Ryzen AI NPUs, which are important for genAI from a consumer perspective. As well, AMD FPGAs and adaptive SoCs are widely deployed for edge AI workloads in embedded markets."

O'Donnell said it's still up for debate whether DeepSeek's claims of using low-end hardware are actually true. "But it's fair to say that it raised the conversation of, yes, you can run powerful models on much less hardware than we were told we needed," he said.

Is that an opportunity for FPGAs? "I don't know that it is, though, because the bigger problem with all of this stuff is the software that runs on those GPUs," he said. "It's all about the software, not the actual chips. And there's no equivalent that I'm aware of at all that lets you take these big models and run them on FPGAs."

So can FPGAs fit into this brave new world of generative AI? The jury is still out.