Last week, at the ISC High Performance 2015 conference in Frankfurt, Germany, there was one announcement that might have escaped your attention. For the first time, EDR 100Gb/s solutions appeared on the TOP500 list of supercomputers. This is significant because it marks the transition from networks built on 40 and 56Gb/s fabrics to networks built on 100Gb/s.
Likewise, in the Ethernet market, hyper-scale data centers are also making the transition from 40Gb/s to 100Gb/s networks. Accton announced their first switch with 100Gb/s ports, the AST712-32X, at Facebook’s OCP Summit in March; Mellanox announced the industry’s first 32-port non-blocking 100GbE Open Ethernet Switch in June; and, just last week, Cisco joined the party with the announcement of a 6.4Tb/s switch, the Nexus 3232C.
Contributing to the 100Gb/s transition is the advent of silicon photonics transceivers. At lower speeds, VCSEL-based transceivers were widely used for HPC and data center interconnects. VCSEL solutions were low-cost; they supported reaches of 300m; and they integrated well into small package form factors, like the QSFP. But with every increase in VCSEL speed, reach decreases, and at 100Gb/s this has become a major problem. On OM3, the most widely deployed grade of multimode fiber, the reach of 100Gb/s VCSEL transceivers is limited to 70m, not nearly enough for larger HPC clusters and hyper-scale data centers. OM3 fiber could be replaced with optimized OM4 fiber, but this is expensive and the reach only improves modestly, to 100m.
Silicon photonics to the rescue
Silicon photonics-based transceivers use low-cost single-mode fiber rather than multimode fiber. With fiber losses of only 0.3dB per km, “out of the box” transceivers easily support links of 2km. Nor is 2km a hard limit, but it is more than enough for medium-size to hyper-scale data centers. Larger, more complex networks can be built; even campus arrangements are easily within reach.
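To see why 0.3dB/km leaves so much headroom, a rough link-budget calculation helps. The sketch below uses the article's fiber-loss figure; the power budget and connector-loss numbers are illustrative assumptions, not values from the article or any particular transceiver datasheet.

```python
# Rough single-mode optical link-budget sketch.
# Only the 0.3 dB/km fiber loss comes from the article; the power
# budget and connector losses below are assumed, illustrative values.

FIBER_LOSS_DB_PER_KM = 0.3   # single-mode fiber loss (from the article)
POWER_BUDGET_DB = 4.0        # assumed: tx power minus rx sensitivity
CONNECTOR_LOSS_DB = 0.5      # assumed: loss per mated connector pair

def max_reach_km(power_budget_db,
                 n_connectors=2,
                 connector_loss_db=CONNECTOR_LOSS_DB,
                 fiber_loss_db_per_km=FIBER_LOSS_DB_PER_KM):
    """Distance at which accumulated fiber loss exhausts what is left
    of the power budget after connector losses."""
    remaining_db = power_budget_db - n_connectors * connector_loss_db
    return max(remaining_db, 0.0) / fiber_loss_db_per_km

# A 2 km link consumes only 0.6 dB of fiber loss, so even a modest
# assumed budget supports it with ample margin.
print(max_reach_km(POWER_BUDGET_DB))  # 10.0 km under these assumptions
```

With these assumed numbers the fiber itself would permit roughly 10km, which is why 2km links are comfortable and the practical limit tends to be set by dispersion and transceiver specifications rather than by fiber attenuation.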
There are other, more traditional long-reach solutions, but their lasers are expensive and their packages are hand-assembled, which makes them costly.
Silicon photonics offer quite significant advantages over other traditional single mode solutions:
- Low power: 100Gb/s fits the same 3.5W QSFP footprint as 40Gb/s transceivers
- Low cost: elimination of hundreds of subcomponents reduces assembly cost
- High volume: uses electronics style assembly process
When hyper-scale data centers like Amazon, Facebook, Google and Microsoft convert to 100Gb/s networks, they will need hundreds of thousands of transceivers at once. Traditional technologies either have the reach problem (VCSELs) or they simply do not scale (traditional long reach).
Silicon photonics products easily scale, because they are built with high-volume automated assembly processes. Scaling volume depends upon adding equipment rather than hiring and training people. The automation saves costs, improves yields and reduces waste. Hundreds of thousands or even millions of transceivers are possible.
Faster, more powerful networks enable more packets, more video, more chats and, of course, more data center revenue. Scaling to 100Gb/s is just the first step in the transition to the networks of the future, where silicon photonics will power exascale data centers.
Arlon Martin is Senior Director of Marketing at Mellanox.