GPUs: Designed for gaming, now crucial to HPC and AI

Feature
May 15, 2018 | 8 mins
CPUs and Processors, Data Center, Servers

How did a chip meant for gaming become so vital in enterprise computing? Some people thought outside the box.


It’s rare to see a processor find great success outside of the area it was intended for, but that’s exactly what has happened to the graphics processing unit (GPU). A chip originally intended to speed up gaming graphics and nothing more now powers everything from Adobe Premiere and databases to high-performance computing (HPC) and artificial intelligence (AI).

GPUs are now offered in servers from every major OEM plus off-brand vendors, but they aren’t doing graphics acceleration. That’s because the GPU is in essence a giant math co-processor, now being used to perform computation-intensive work ranging from 3D simulations to medical imaging to financial modeling.

GPUs vs. CPUs

Because of their single-purpose design, GPU cores are much smaller than CPU cores, so GPUs have thousands of cores whereas CPUs max out at around 32. With up to 5,000 cores available for a single task, the design lends itself to massively parallel processing.

Wherever an application was begging for parallel processing, that’s where GPU computing took off, said Jon Peddie, president of Jon Peddie Research, which follows the graphics market.

“In the past, parallel processing was done with huge numbers of processors like an x86, so they were very expensive and difficult to program. The GPU as a dedicated single-purpose processor offered much greater compute density, and it’s been exploited in many math acceleration tasks,” he said.

Applications that support GPUs

GPU use in the data center started with homegrown apps, thanks to a programming language Nvidia developed called CUDA. CUDA uses a C-like syntax to launch functions on the GPU instead of the CPU, and instead of executing a call once, the GPU can run it thousands of times in parallel.
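
To make that concrete, here is a minimal sketch of the pattern, with invented names and sizes (illustrative only, not Nvidia sample code): a function marked __global__ is written once, then a single launch statement fans it out across thousands of GPU threads, one array element per thread.

#include <cstdio>

// Kernel: the body runs once per GPU thread, thousands at a time.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n)                                      // guard the tail
        data[i] *= factor;
}

int main() {
    const int n = 1 << 20;                        // about a million floats
    float *data;
    cudaMallocManaged(&data, n * sizeof(float));  // memory visible to CPU and GPU
    for (int i = 0; i < n; i++) data[i] = 1.0f;

    // One "call," executed in parallel by 4,096 blocks of 256 threads.
    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);
    cudaDeviceSynchronize();                      // wait for the GPU to finish

    printf("data[0] = %f\n", data[0]);            // prints 2.000000
    cudaFree(data);
    return 0;
}

The triple-angle-bracket launch is the C-like extension in question: the same function, written once, executes a million times in parallel.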

As GPU performance improved and the processors proved viable for non-gaming tasks, packaged applications began adding support for them. Desktop apps like Adobe Premiere jumped on board, but so did server-side apps, including SQL databases. The GPU is ideally suited to accelerating SQL queries because SQL performs the same operation – usually a search – on every row in a set, and the GPU can parallelize that work by assigning each row of data to its own core.
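
To illustrate that row-per-core mapping, here is a hedged sketch in CUDA; the Row layout, names and threshold are invented for this example and do not come from any of the products named below. Each GPU thread evaluates a WHERE-style predicate against exactly one row.

// Hypothetical row layout, for illustration only.
struct Row { int customer_id; float amount; };

// Rough equivalent of: SELECT ... WHERE amount > threshold
// Each thread checks one row and records whether it matched.
__global__ void filterRows(const Row *rows, int n,
                           float threshold, int *matches) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one row per thread
    if (i < n)
        matches[i] = (rows[i].amount > threshold) ? 1 : 0;
}

// Launched with enough threads to cover every row:
//   filterRows<<<(n + 255) / 256, 256>>>(d_rows, n, 100.0f, d_matches);

A real GPU database layers compaction, joins and aggregation on top of this, but the core trick is the same: every row gets its own thread.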

Brytlyt, SQream Technologies, MapD, Kinetica, PG-Strom and Blazegraph all offer GPU-accelerated analytics in their databases. Oracle has said it is working with Nvidia, but nothing appears firm yet. Microsoft does not support GPU acceleration on SQL Server.

GPUs and high-performance computing (HPC)

GPUs have also found a home in HPC, where many tasks, such as simulations, financial modeling and 3D rendering, also run well in a parallel environment. According to Intersect360 Research, which follows the HPC market, 34 of the 50 most popular HPC application packages offer GPU support, including all of the top 15 HPC apps.

These include the chemistry apps GROMACS, Gaussian and VASP; ANSYS and OpenFOAM for fluid dynamics; Simulia Abaqus for structural analysis; and WRF for weather and environment modeling.

“We believe GPU computing has reached a tipping point in the HPC market that will encourage continued increases in application optimization,” the analysts said in their report.

GPU computing examples

The most rapidly emerging market for GPUs is AI and machine learning, which pose massively parallel problems. “Lots of enterprises and CIOs are seeing how they can use deep learning for solving their problems. Some are dabbling; others are more into it. But it’s now across-the-board – people are seeing how deep learning can help them, and it’s across-the-board – they need a GPU server,” said Sarosh Irani, director of product management for the GPU server group at hardware vendor Supermicro.

AI works when you have enough samples of whatever you are trying to get smarter about. The AI system learns to recognize something – like what cancer cells look like – but to do so it requires a lot of data, which must be processed to separate good from bad. As correlations are discovered, algorithms can be created that lead to an analysis.

For example, Italian energy company Eni and U.S.-based Stone Ridge Technology were able to process oil reservoir models in less than a day instead of ten. Using 3,200 Nvidia Tesla GPUs and Stone Ridge’s ECHELON software for GPU-based petroleum reservoir simulation, they processed 100,000 reservoir models in about 15 and a half hours, a task that would have taken ten days with legacy hardware and software. Each individual model simulated 15 years of production on its reservoir in an average of 28 minutes.

Modeling oil reservoirs is no small task. Reservoirs are found by sending sound waves into the Earth and recording the echoes reflected by subsurface rock layers. The reflected-wave data is then turned into images that geoscientists use to determine whether a prospect contains hydrocarbons and where within the image the hydrocarbons are located. That, in turn, determines whether it is worth drilling for oil there. All of this requires heavy mathematical processing, which GPUs specialize in.

GPU makers: Nvidia and AMD

Like the CPU market, the GPU market is down to two players, Nvidia and AMD. The consumer gaming space is fairly competitive, with roughly a 60/40 split between the two and Nvidia in the lead, according to Jon Peddie Research.

In the data center, though, it’s not even close. Peddie reports Nvidia has a 90 percent share to AMD’s 10 percent. This is due to Nvidia spending the better part of two decades seeding and supporting data-center and other non-gaming use of GPUs.

GPUs and CUDA programming

In the early 2000s, some Stanford University researchers began to delve into the programmable, parallel nature of the GPU. Nvidia hired them to create the CUDA programming language, which would allow developers to write applications in C/C++ that used the GPU for acceleration.

“I give Nvidia a lot of credit. They set up [programs] at universities all over the world, hundreds of them, to teach CUDA. So when a student graduated, they were pre-trained CUDA developers, and [that] set the foundation for getting CUDA into industries as we know it today,” said Peddie.

One of the Stanford researchers on the CUDA team was Ian Buck, now vice president of Nvidia’s Accelerated Computing business unit. He said CUDA was intended to be easy to learn and use. “Anyone who knew C or Fortran, I could teach CUDA in a day. We realized early on we did not want to create a whole new programming language that required you to learn something new,” he said.

So applications already running on CPUs could be parallelized relatively quickly. The main change with CUDA was that instead of calling a function once – a sort routine, say – you called it thousands of times, and each core executed its own piece. But CUDA is for Nvidia GPUs only. To program an AMD GPU, you must use an open standard called OpenCL, which has nowhere near the support that CUDA has.
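
As a sketch of that porting step (again with invented names, not code from any particular application), compare a conventional CPU loop with its CUDA counterpart: the loop disappears, and each thread handles only the element whose index it computes.

// CPU version: one call, one core walks every element in turn.
void saxpy_cpu(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

// CUDA version: the same arithmetic, but the for loop is gone.
// Each of thousands of threads computes a single element.
__global__ void saxpy_gpu(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// One launch replaces the single CPU call:
//   saxpy_gpu<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);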

GPUs and power

GPUs are a strong alternative to CPUs in raw performance, but performance and power use are directly linked. GPUs have a maximum power draw of 300 watts, while a CPU averages around 100 watts, although the new Skylake generation of Xeons can draw over 200 watts.

In the end, GPUs make up for it because of their scale. Because one GPU can do the work of dozens of CPUs, you need far fewer of them to do the same amount of work. Nvidia says its new DGX-2 GPU server system draws 1/18th the power of a traditional CPU-based HPC cluster doing the same work.

For Supermicro, that means designing products around the GPUs from the ground up. “If I put a 300-watt GPU in a system, there may not be enough power capacity or thermal capacity. With eight GPUs, definitely not. So I need a custom box around it,” said Irani.

And if you have a power-constrained data center, that can be a problem. Not everyone has the luxury of building data centers the size of a football stadium next to a river for hydroelectric power and cooling. David Rosenberg, a data scientist in the Office of the CTO at Bloomberg, loves how GPUs can reduce compute jobs that would take a year on CPUs down to a weekend.

But he also often runs into situations where a whole cabinet holds just one or two GPU servers because they consume all the power the cabinet can provide.

“We’re constantly looking at power,” he said. “If we put 500 GPUs in a data center, that’s a lot of other computers that can’t be there. GPUs are more power efficient for the compute they provide than CPUs. It’s just that they are doing so much more compute than CPUs that they end up using a ton of power.”

Andy Patrizio is a freelance journalist based in Southern California who has covered the computer industry for 20 years and has built every x86 PC he’s ever owned, laptops not included.

The opinions expressed in this blog are those of the author and do not necessarily represent those of ITworld, Network World, its parent, subsidiary or affiliated companies.
