What’s remarkable is that Ethernet technology even has “new directions” after four decades! Ethernet’s longevity is unprecedented in the tech industry. And in spite of the standards-based and proprietary interconnects developed specifically to address the requirements of high-performance computing, 148 of the TOP500 supercomputers (roughly 30 percent) were, as of June 2015, using Ethernet as their primary network interconnect.
Far from moribund, Ethernet today is arguably undergoing the most dynamic period in its history, gauged by the number of major new specifications under discussion and development. The IEEE has completed specs for both 40GbE and 100GbE, the former now in production and the latter shipping today for service providers and sampling for data centers. The IEEE is also working on a 400GbE specification, and it recently formed the IEEE P802.3 25Gb/s Ethernet Task Force. Specific to Ethernet over twisted pairs terminated in the ubiquitous RJ-45 jack, the IEEE has initiated study groups for 25GBASE-T and a “Next Generation Enterprise Access” BASE-T effort to address the requirement for 2.5GbE and 5GbE rates that can run on the CAT 5e or CAT 6 cabling already in office ceilings connecting wireless access points.
For four decades, Ethernet advanced on a “powers-of-ten” model, from an initial 10 Mbps to 100 Mbps to 1GbE to 10GbE. Part of why that worked was that the ratified IEEE Ethernet speeds kept well ahead of most market requirements. Bob Metcalfe himself started this trend by defining the first Ethernet speed of 2.94 Mbps in 1973 (a rate derived from the Xerox Alto system clock, and overkill in 1973 for everything except Xerox laser printers). To put that in perspective, 1973 was four years before the first Apple II computer became available and eight years before the launch of the first IBM PC. The “DIX” group (Digital Equipment, Intel, and Xerox) increased the rate to 10 Mbps in its 1980 proposal to the IEEE, which retained that rate in the first IEEE Ethernet standard (1983). But moving an entire Ethernet ecosystem (adapters, switches, cables, connectors, and so on) to a new speed is expensive for everyone, and the “powers-of-ten” model helped control those costs.
What changed? Well, I believe that Ethernet simply got too successful for the powers-of-ten model. By that, I mean that the volumes got large enough for some specific requirements at more fine-grained speeds to warrant infrastructure upgrades to support those speeds. And the volumes are large. Intel, for example, just celebrated shipping its one-billionth Ethernet controller. That’s a lot of Ethernet! And it continues to grow fast.
So, which market segments are large enough to put the advancement of Ethernet into hyper-drive, to speeds well beyond 10GbE? First, let’s say which it isn’t: enterprise data centers. Though 10GbE controllers have been in production since 2003, enterprises are, 12 years later, only about midway through the transition from 1GbE to 10GbE, as multiple market researchers report. Rather, it is primarily the cloud service providers who have led the charge to higher speeds.
The Ethernet Alliance has published a 2015 Ethernet Roadmap that charts these speeds, both ratified IEEE standards and future directions. The first speeds beyond 10GbE came as a pair, 40GbE and 100GbE, ratified in June 2010 as IEEE 802.3ba. The introduction of 40GbE was, in fact, the IEEE’s first break with its decades-old “powers-of-ten” advancement model. But 40GbE as a downstream switch port made a lot of sense in the context of broadly deployed 10GbE servers. First, of course, a 40GbE downstream switch port could connect directly to a 40GbE adapter for high-volume traffic, e.g., a connection to a storage area network (SAN). But a 40GbE downstream port could also connect via a 1:4 “breakout cable” to four 10GbE servers.
The significance of breakout cables is that they effectively quadruple the number of servers that can be connected to the 40GbE downstream ports on a switch. The benefits go beyond the obvious economics: quadrupling the number of servers connected to a single switch also increases the efficiency of “East-West” traffic. Historically, data center switch architectures have been hierarchical, aka “North-South”; adjacent servers may have to transmit their packets up and down the hierarchy to communicate even when they are physically near each other. Applications typical of cloud service providers generate extensive amounts of this East-West traffic. The result is to “flatten” the data center and make it look a bit more like an HPC cluster than a traditional hierarchical network.
The next stop on the “powers-of-ten” roadmap for downstream switch ports was 100GbE, just coming to market now in late 2015. But customers for high-speed Ethernet solutions recognized that a single 100GbE downstream port could connect only two 40GbE servers, wasting switch bandwidth and breaking the compelling model of getting up to four connections from a single downstream switch port. What the market demanded was an Ethernet server connection at 25GbE, and the “25 Gigabit Ethernet Consortium” was born to drive consensus on market requirements for Ethernet at that speed, with the IEEE P802.3 25Gb/s Ethernet Task Force mentioned above driving the specification. The Ethernet Alliance is also actively involved, for example by staging a plugfest at the UNH InterOperability Lab in June 2015 to explore interoperability of early silicon and connectors and to provide the IEEE with real data to help inform the final specification. 50GbE is also under exploration by the same community at the present time.
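The fan-out arithmetic behind these pairings is simple enough to sketch. Here is a toy Python illustration; the `breakout` helper and the four-connection cap are my own illustrative assumptions (reflecting the 1:4 breakout cables described above), not anything defined by a standard:

```python
def breakout(port_gbps: int, server_gbps: int, max_fanout: int = 4):
    """Return (connections, stranded_gbps) when one downstream switch
    port of port_gbps is broken out to servers running at server_gbps.

    A breakout cable supports at most max_fanout connections (assumed 4
    here), and any bandwidth that doesn't divide evenly is stranded.
    """
    connections = min(port_gbps // server_gbps, max_fanout)
    stranded = port_gbps - connections * server_gbps
    return connections, stranded

# The pairings discussed in the text: 40/10 and 100/25 use the full
# port, while 100/40 fits only two servers and strands 20 Gbps.
for port, server in [(40, 10), (100, 40), (100, 25), (200, 50)]:
    n, waste = breakout(port, server)
    print(f"{port}GbE port -> {n} x {server}GbE servers, {waste}G stranded")
```

Running this makes the market logic visible at a glance: 25GbE (and prospectively 50GbE) restores the clean four-way split that 40GbE servers break on a 100GbE port.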
At the Ethernet Alliance’s Technology Exploration Forum entitled “The Rate Debate” in October 2014, it was suggested that the industry consider that the same arguments for 100/25GbE apply equally well to 200/50GbE. That is, the same logic suggests the next speed bump for downstream switch ports should be 200GbE, connecting to up to four 50GbE server connections. Much industry discussion of this suggestion is underway now. To all appearances, high-volume usage models for ever-faster Ethernet are driving a transformation away from Ethernet’s long-standing “powers-of-ten” roadmap toward a more incremental “powers-of-two.” The economics are in place today to drive accelerated adoption of ever-increasing Ethernet speeds even under a powers-of-two model. The beneficiaries will be all the users of Ethernet, who will find solutions from 100 Mbps to 400GbE and beyond that best match their performance and cost requirements. And the Ethernet Alliance will continue its role in educating the marketplace about Ethernet options and helping to drive interoperable solutions.
David Fair serves on the board of directors of the Ethernet Alliance. At Intel, he is responsible for driving demand for Intel’s storage-over-Ethernet (NAS, iSCSI, and FCoE) and RDMA-over-Ethernet (iWARP) technologies. He also serves on the board of directors of SNIA’s Ethernet Storage Forum.