Joab Jackson
U.S. Correspondent

Epic Interconnect Clash! InfiniBand vs. Gigabit Ethernet

News
Dec 03, 2012 | 5 mins
Networking, Telecommunications Industry

InfiniBand makes inroads in the enterprise, while Ethernet makes waves in high-performance computing

A few years back, picking the protocol to link your computers together into a network was a no-brainer. The servers in a mid-sized data center were wired together using Ethernet. And if you wanted to combine many nodes into a single high-performance computing (HPC) system, you went with InfiniBand.

Read Network World’s other tech arguments.

BACKGROUND: High-speed Ethernet planning guide 

These days, the choice is blurrier. The two protocols are encroaching on each other’s turf, engaging in showdowns for the honor of networking the larger data centers. The latest incarnations of Gigabit Ethernet are perfectly capable of supporting large HPC systems, while InfiniBand is increasingly being used in performance-sensitive enterprise data centers.

 
• Apple iOS vs. Google Android: It comes down to security
• Tablet smackdown: iPad vs. Surface RT in the enterprise
• Cisco, VMware and OpenFlow fragment SDNs
• Cloud computing showdown: Amazon vs. Rackspace (OpenStack) vs. Microsoft vs. Google
• Cisco Catalyst 6500 vs. Cisco Nexus 7000

One rumble to watch is the Top500, the twice-yearly ranking of the world’s fastest supercomputers. In the latest list, released in November, InfiniBand served as the primary interconnect for 226 of the top 500 systems, while Gigabit Ethernet was used on 188.

A grounding in performance stats always helps in appreciating an epic battle: Today, for network aggregation points, there is 100 Gigabit Ethernet, in which each port of a 100 Gigabit Ethernet card can transfer data at 100Gbps. Less expensive 1, 10 and 40 Gigabit Ethernet network cards are also available for servers and switches. Answering our insatiable need for ever more bandwidth, the Ethernet Alliance has begun work on 400 Gigabit Ethernet.

The current version of InfiniBand, FDR (Fourteen Data Rate), offers 56Gbps over a standard four-lane link (14Gbps per lane, hence the name). The next generation, EDR (Enhanced Data Rate), arriving next year, will offer 100Gbps.
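To put those numbers side by side, here is a back-of-the-envelope sketch in Python. The 1TB payload and the roughly-25Gbps-per-lane breakdown for EDR are illustrative assumptions, and real-world throughput falls below these raw rates once encoding and protocol overhead are counted.

# Rough comparison of nominal link rates cited above. The 1TB payload is
# hypothetical, and actual throughput is lower than these raw figures.
LINKS_GBPS = {
    "10 Gigabit Ethernet": 10,
    "40 Gigabit Ethernet": 40,
    "100 Gigabit Ethernet": 100,
    "InfiniBand FDR (4 lanes x 14Gbps)": 4 * 14,  # 56Gbps
    "InfiniBand EDR (4 lanes x ~25Gbps)": 100,    # per-lane figure approximate
}

PAYLOAD_TB = 1  # hypothetical 1TB dataset

for name, gbps in LINKS_GBPS.items():
    seconds = PAYLOAD_TB * 8000 / gbps  # 1TB = 8,000 gigabits (decimal)
    print(f"{name:36s} {gbps:3d}Gbps  ~{seconds:5.0f}s to move {PAYLOAD_TB}TB")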

But the numbers tell only part of the story. InfiniBand offers advantages such as a flatter topology, less intrusion on the server processor and lower latency. And Ethernet offers near ubiquity across the market for networking gear.

The power of Ethernet is that it is everywhere, from laptops to the largest data center switches, says Ethernet Alliance Chairman John D’Ambrosia. “There are a multitude of [companies] providing Ethernet solutions. You have a common IP that goes across multiple applications,” he says.

Such ubiquity ensures interoperability as well as the lowest costs possible from a large pack of fiercely competing vendors. “You buy something, plug it in and, guess what? It just works. People expect that with Ethernet,” D’Ambrosia says. “You can start putting things together. You can mix and match. You get competition and cost-reductions.”

InfiniBand was introduced in 2000 as a way to tie memory and processors of multiple servers together so tightly that communications among them would be as if they were on the same printed circuit board. To do this, InfiniBand is architecturally sacrilegious, combining the bottom four layers of the OSI (Open Systems Interconnection) networking stack — the physical, data link, network and transport layers — into a single architecture.

“InfiniBand’s goal was to improve communication between applications,” says Bill Lee, co-chair of the InfiniBand Trade Association Marketing Working Group, subtly deriding Ethernet’s “store-and-forward” approach.

Unlike Gigabit Ethernet’s hierarchical topology, InfiniBand is a flat fabric, topologically speaking, meaning each node has a direct connection to all the others. InfiniBand’s special sauce is RDMA (Remote Direct Memory Access), which lets the network card read and write data in a server’s memory directly, eliminating the need for that server’s processor to conduct the work itself.
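For contrast, here is a minimal sketch of the conventional, CPU-mediated path that RDMA sidesteps, written with plain TCP sockets in Python (the port number and chunk size are arbitrary choices for illustration). The receiving host’s processor loops over the socket, copying every chunk out of the network stack and into the application’s buffer.

# Conventional receive path: the host CPU copies each chunk out of the
# kernel's network stack and into the application buffer. With RDMA, the
# network card places data into registered memory directly, skipping this
# loop. Port and chunk size below are illustrative.
import socket

HOST, PORT = "0.0.0.0", 5201
CHUNK = 64 * 1024  # 64KB reads

with socket.create_server((HOST, PORT)) as srv:
    conn, addr = srv.accept()
    received = bytearray()
    with conn:
        while True:
            chunk = conn.recv(CHUNK)   # CPU pulls data off the stack...
            if not chunk:
                break
            received.extend(chunk)     # ...and copies it into the app buffer
    print(f"received {len(received)} bytes from {addr[0]}")

An RDMA transfer over InfiniBand, or over Ethernet via iWARP or RoCE (discussed below), hands that copy work to the adapter instead.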

InfiniBand quickly gained favor in HPC systems, and as mentioned above, the technology is now creeping into the enterprise. Oracle, for instance, uses InfiniBand as a performance edge for its Exadata and Exalogic data analysis appliances. Microsoft added direct support for RDMA to its newly released Windows Server 2012.

One enterprise user of InfiniBand is the Department of Veterans Affairs. The U.S. federal agency’s informatics operation runs on about 200 servers, which communicate via InfiniBand. “We do a lot of data transfer,” says Augie Turano, a solutions architect at the VA. Databases are moved around quite a bit among the servers so they can be analyzed by different applications. “Being able to move the data at InfiniBand speeds from server to server has been a big boost for us,” Turano says.

The Ethernet Alliance’s D’Ambrosia is undaunted by InfiniBand’s performance perks, however. He figures Ethernet will catch up. “We like competition from other technologies, because it makes us realize we can always keep improving,” he says.

While Ethernet was first used to connect small numbers of computers, successive versions of the specification were tailored for larger jobs, such as serving as the backbone for entire data centers, a role in which it quickly became dominant. In the same way, a number of technologies, such as iWARP and RoCE (RDMA over Converged Ethernet), have been developed so Gigabit Ethernet can compete directly with InfiniBand by reducing latency and processor usage.

“Ethernet evolves. That’s what it does,” D’Ambrosia says. Watch out, InfiniBand! A formidable competitor lurks in the data center!