The Ethernet protocol connects LANs, WANs, the internet, cloud services, IoT devices and Wi-Fi systems into one seamless global communications network.
Ethernet is one of the original networking technologies, invented 50 years ago. And yet, because of the simplicity with which the protocol can be deployed and its ability to incorporate modern advancements without losing backwards compatibility, Ethernet continues to reign as the de facto standard for computer networking. As artificial intelligence (AI) workloads increase, network industry giants are teaming up to ensure Ethernet networks can keep pace and satisfy AI’s high-performance networking requirements.
At its core, Ethernet is a protocol that allows computers (from servers to laptops) to talk to each other over wired networks that use devices like routers, switches and hubs to direct traffic. Ethernet works seamlessly with wireless protocols, too.
Its ability to work within almost any environment has led to its universal adoption around the world. This is especially true because it allows organizations to use the same Ethernet protocol in their local area network (LAN) and their wide-area network (WAN). That means that it works well in data centers, in private or internal company networks, for internet applications and almost anything in between. It can even support the most complex forms of networking, like virtual private networks (VPNs) and software-defined networking deployments.
Ethernet has no problem handling bandwidth-intensive applications such as video streaming or voice over IP applications. And on the other end, its simplicity also enables it to work with very tiny, relatively unsophisticated devices such as those that make up the Internet of Things (IoT), without any special configuration required.
How does Ethernet work?
Ethernet works by breaking up information being sent to or from devices, like a personal computer, into variable-length pieces of information called frames. Each frame contains standardized fields, such as the source and destination address, that help it route its way through a network.
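The layout of those standardized fields can be sketched in a few lines of Python. This is a simplified illustration of an Ethernet II frame header (real frames also pad payloads to a 46-byte minimum and append a checksum), and the hardware addresses below are invented for the example:

```python
import struct

def build_ethernet_frame(dst_mac: bytes, src_mac: bytes,
                         ethertype: int, payload: bytes) -> bytes:
    """Assemble a simplified Ethernet II frame (checksum and padding omitted).

    dst_mac / src_mac: 6-byte hardware addresses.
    ethertype: identifies the payload protocol, e.g. 0x0800 for IPv4.
    """
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    return header + payload

# Example with made-up addresses: destination first, then source.
dst = bytes.fromhex("aabbccddeeff")
src = bytes.fromhex("112233445566")
frame = build_ethernet_frame(dst, src, 0x0800, b"hello")
print(len(frame))  # 14-byte header + 5-byte payload = 19
```

Switches along the way read only that 14-byte header to decide where to forward the frame; the payload is opaque to them.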
And because computers on a LAN originally shared a single connection, Ethernet was built around the principle of CSMA/CD, or carrier-sense multiple access with collision detection. Basically, the protocol makes sure the line is not in use before sending any frames out. Today, that is far less important than it was in the early days of networking, because devices generally have their own private connection to a network through a switch port. And because Ethernet now operates in full duplex, the sending and receiving channels are completely separate, so collisions can’t actually occur over that leg of the journey.
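The classic CSMA/CD loop can be sketched as follows. This is a toy simulation, not adapter firmware: `channel_busy` and `transmit` stand in for the hardware's carrier-sense and collision-detect signals, and the backoff is the binary exponential scheme the standard describes:

```python
import random

def csma_cd_send(channel_busy, transmit, max_attempts=16):
    """Illustrative CSMA/CD: sense the carrier, send, and back off a random
    number of slot times after each collision (binary exponential backoff).
    Returns the attempt number on which the frame got through."""
    for attempt in range(1, max_attempts + 1):
        while channel_busy():      # carrier sense: wait for an idle line
            pass
        if transmit():             # True means no collision was detected
            return attempt
        # Collision: pick a random wait in [0, 2^k - 1] slot times.
        k = min(attempt, 10)
        slots = random.randint(0, 2 ** k - 1)
        # (A real 10 Mb/sec adapter would wait slots * 51.2 microseconds.)
    raise RuntimeError("excessive collisions, frame dropped")

# Simulated medium: the line is idle, and the first send collides.
results = iter([False, True])
print(csma_cd_send(lambda: False, lambda: next(results)))  # → 2
```

After 16 failed attempts a real adapter gives up and discards the frame, which is why recovery is left to higher-level protocols.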
Collisions aside, Ethernet performs no error correction of its own: a frame’s checksum lets a receiver detect and discard damaged frames, but retransmission is left to higher-level protocols. However, Ethernet integrates easily with most of those protocols, and still provides the basis for most internet and digital communications, so that is almost never an issue these days.
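That detect-and-discard behavior is easy to demonstrate. Ethernet's frame check sequence uses the same CRC-32 polynomial as Python's `zlib.crc32`, so the sketch below captures the idea, though real hardware computes and serializes the checksum slightly differently than shown here:

```python
import zlib

def append_fcs(frame: bytes) -> bytes:
    """Append a CRC-32 checksum (same polynomial as Ethernet's FCS)."""
    return frame + zlib.crc32(frame).to_bytes(4, "little")

def fcs_ok(frame_with_fcs: bytes) -> bool:
    """Recompute the checksum and compare: detection only, no correction."""
    body, fcs = frame_with_fcs[:-4], frame_with_fcs[-4:]
    return zlib.crc32(body).to_bytes(4, "little") == fcs

wire = append_fcs(b"some frame bytes")
print(fcs_ok(wire))                 # True: frame arrived intact
corrupted = b"\x00" + wire[1:]      # flip the first byte "in transit"
print(fcs_ok(corrupted))            # False: receiver silently drops it
```

Note that the receiver learns only that the frame is bad, not which bits flipped; getting the data resent is the job of something like TCP further up the stack.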
Colorful cables are a key component of Ethernet networks
You can’t talk about Ethernet without also talking about cables, since Ethernet is designed for wired networking. And as such, simplifying the cables the protocol uses also helped to push the standard into widespread adoption.
The original standard had Ethernet working with coaxial cable, the kind that is used to support cable television. Coaxial cable is robust and capable of carrying a lot of bandwidth over its thick internal copper wire. But there are tradeoffs. Coax is heavy, difficult to work with, not very flexible and also expensive. That is one reason why most home cable TV installations require a special technician to set them up, while home networking usually doesn’t.
Ditching coaxial, Ethernet switched over to the twisted-pair cables that still drive wired networking today. Twisted-pair cables used with Ethernet are inexpensive to deploy and quite flexible, meaning they can be snaked around corners, inside walls, over ceiling tiles or almost anywhere else to connect servers, routers, hubs, devices and endpoints.
And in a rather ingenious move, most companies that manufacture Ethernet cables decided long ago to forgo the standard gray color scheme and instead release them in a rainbow of colors. Besides sprucing up server rooms and data centers, the color coding lets IT technicians visually sort their network connections for quick troubleshooting.
The standard plug on both ends of twisted-pair cables, very similar to the connector used by wired telephone systems, also makes it easy to click the cables into any device that supports Ethernet connectivity. Most of the time, simply plugging a device into a network with one of those cables is the only step required to immediately gain connectivity. All of the backend routing of packets and data is then handled by Ethernet and other protocols like Spanning Tree.
The longtime standard for Ethernet cables is called Category 5, commonly referred to as Cat 5. The Cat 5 specification dates back to the mid-1990s. A standard Cat 5 cable supports speeds up to 100 Mb/sec. While the primary function of the cables is to support Ethernet networking, they also work with many telephone and video applications.
A slightly more advanced cable called Category 5e is also used today for faster Ethernet applications. Category 5e cables target the same 100 Mb/sec applications, but their tighter design also lets them support higher speeds, such as Gigabit Ethernet, while still using the same connectors for universal connectivity.
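The two categories can be summarized in a small lookup table. The figures below are nominal ratings for standard cable runs, included as a rough sketch rather than a cabling reference:

```python
# Nominal ratings for the twisted-pair categories discussed above
# (approximate values for standard-length runs).
CABLE_RATINGS = {
    "Cat 5":  {"max_speed_mbps": 100},     # Fast Ethernet
    "Cat 5e": {"max_speed_mbps": 1_000},   # Gigabit Ethernet
}

def can_carry(category: str, speed_mbps: int) -> bool:
    """Check whether a cable category is rated for a given link speed."""
    return CABLE_RATINGS[category]["max_speed_mbps"] >= speed_mbps

print(can_carry("Cat 5e", 1_000))   # True: Cat 5e handles Gigabit Ethernet
print(can_carry("Cat 5", 1_000))    # False: Cat 5 tops out at 100 Mb/sec
```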
Who invented Ethernet?
The original Ethernet standard was created in 1973 by Xerox PARC engineers Robert Metcalfe and David Boggs, and was inspired by a project being conducted at the University of Hawaii, called ALOHAnet. Primitive by today’s standards, it could only achieve 2.94 Mb/sec in raw speed, but it was one of the first times that computers were actually linked into a network.
Outside of a university setting, the public would not see Ethernet until 1980, when Xerox made it available to everyone. By then there were other competing standards such as Token Ring, ARCNET and others. But Metcalfe, who had since left the company to found 3Com, convinced many of the major industry players, including Digital Equipment Corporation (DEC), Intel and Xerox, to work with 3Com to push Ethernet as a unified standard.
As part of that agreement, Xerox dropped its trademark of the Ethernet name, allowing any company to use Ethernet with its products. Bandwidth and throughput were also increased to 10 Mb/sec, which was more than enough to handle most networking tasks at the time, with room to spare. All of that helped Ethernet to become the dominant standard worldwide.
Ethernet turns 50; Turing award for Metcalfe
Ethernet turned 50 years old on May 22, 2023. A few months prior to the big anniversary, on March 22, the Association for Computing Machinery awarded Bob Metcalfe the prestigious A. M. Turing Award for inventing and commercializing Ethernet. The Turing award carries a $1 million prize and was presented at a ceremony June 10 in San Francisco.
Metcalfe said he remembers that day in 1973 very clearly. “I was sitting in Building 34 (at Xerox PARC), at a Selectric typewriter, typing a summary of my thoughts on how networks should work, and then I hard-drew the diagrams. I wrote the memo on the Orator ball on the Selectric, which was sans serif because I liked that font.”
Boggs, the co-creator of Ethernet, died in 2022, and Metcalfe has fond memories of their partnership. “He and I were the Bobbsey Twins,” he told Network World. “We were wonderfully complementary; I being the more articulate of the two and he being the more detail-oriented. Together we built this thing, and I miss him. He was a good friend.”
Ethernet’s bright (and fast) future
The simplicity of the Ethernet standard, as well as its ability to support faster speeds while remaining backwards compatible, has allowed the protocol to grow alongside many technical advancements. Today, almost any computer or computing device can support speeds up to one gigabit per second, commonly called Gigabit Ethernet. Compare its raw speed of 1 Gb/sec (1,000 Mb/sec) with the 10 Mb/sec of the original commercial standard, and it’s easy to see how far the protocol has come.
Gigabit Ethernet probably provides more than enough bandwidth for home networks and most offices. At that speed, even intensive bandwidth applications like video streaming or playing online video games operate flawlessly, even with multiple users on the same network. But Ethernet can do more.
The IEEE Ethernet Working Group approved specifications for 200 Gb/sec and 400 Gb/sec Ethernet several years ago. It’s mostly data centers, internet service providers (ISPs) and specialized organizations like Network Operations Centers that would be most interested in those kinds of speeds. A few cloud service providers and others say they are working with 400 Gb/sec speeds in some capacity, although full adoption of the standard seems to be hung up on certain elements like new cabling requirements (the current Cat 5 and Cat 5e cables don’t support those speeds), backwards compatibility issues for devices and increased power consumption requirements within data centers.
Technically, the specification for 800 Gb/sec Ethernet also exists, but nobody is currently using it outside of a test environment. And the interesting thing about Ethernet is that because it is such an open protocol, there is no reason to think that even the 800 Gb/sec speeds are anywhere near the theoretical maximum.
In fact, research is being done to lay the groundwork for a 1.6 terabit-per-second standard. Speeds like that will probably only be useful in highly specific applications. For example, a corporation or government entity could back up its network data to an offsite location very quickly: sending 500 terabytes of data at 1.6 Tb/sec would take roughly 42 minutes.
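The arithmetic behind figures like that is simple: convert the data size from bytes to bits, then divide by the link rate. A quick sketch, which ignores protocol overhead and assumes the link runs at its full nominal rate:

```python
def transfer_time_seconds(data_terabytes: float, link_tbps: float) -> float:
    """Time to move a given amount of data over a link at its nominal rate,
    ignoring protocol overhead. link_tbps is in terabits per second."""
    data_terabits = data_terabytes * 8   # bytes → bits
    return data_terabits / link_tbps

secs = transfer_time_seconds(500, 1.6)   # 500 TB over a 1.6 Tb/sec link
print(round(secs / 60, 1))               # → 41.7 (minutes)
```

Real transfers would take somewhat longer once framing, protocol overhead and storage throughput are factored in, but the back-of-the-envelope number shows the scale.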
Next up: Consortium looks to supersize Ethernet for AI infrastructures
AI workloads are expected to put unprecedented performance and capacity demands on networks, and a handful of networking vendors have teamed up to enhance Ethernet technology in order to handle the scale and speed required by AI.
In July 2023, AMD, Arista, Broadcom, Cisco, Eviden, HPE, Intel, Meta and Microsoft announced the Ultra Ethernet Consortium (UEC), a group hosted by the Linux Foundation that’s working to develop physical, link, transport and software layer Ethernet advances. There are concerns that today’s traditional network interconnects cannot provide the required performance, scale and bandwidth to keep up with AI demands, and the consortium aims to address those concerns.
The UEC wrote in a white paper that it will further an Ethernet specification to feature a number of core technologies and capabilities including:
- Multi-pathing and packet spraying to ensure AI flows can use every available path to a destination simultaneously.
- Flexible delivery order to make sure Ethernet links are optimally balanced; ordering is only enforced when the AI workload requires it in bandwidth-intensive operations.
- Modern congestion-control mechanisms to ensure AI workloads avoid hotspots and evenly spread the load across multipaths. They can be designed to work in conjunction with multipath packet spraying, enabling a reliable transport of AI traffic.
- End-to-end telemetry to manage congestion. Information originating from the network can advise the participants of the location and cause of the congestion. Shortening the congestion signaling path and providing more information to the endpoints allows more responsive congestion control.
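The contrast behind the first two bullets can be illustrated with a toy model. Traditional ECMP hashing pins an entire flow to a single path, so one large AI "elephant flow" can saturate one link while others sit idle; packet spraying instead spreads successive packets of the same flow across every path. The path names and hashing scheme below are invented for illustration, not taken from the UEC specification:

```python
import itertools

PATHS = ["path-0", "path-1", "path-2", "path-3"]  # hypothetical fabric links

def per_flow_path(flow_id: str) -> str:
    """Traditional ECMP-style hashing: one flow is pinned to one path."""
    return PATHS[hash(flow_id) % len(PATHS)]

def packet_sprayer():
    """Spray successive packets of a flow round-robin over all paths;
    the receiver restores ordering only when the workload requires it."""
    return itertools.cycle(PATHS)

sprayer = packet_sprayer()
print([next(sprayer) for _ in range(6)])
# → ['path-0', 'path-1', 'path-2', 'path-3', 'path-0', 'path-1']
```

In the sprayed case every link carries a share of the flow, which is the even load-spreading the consortium's congestion-control work is designed to pair with.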