In a demonstration organized by the International Center for Advanced Internet Research (iCAIR) at Northwestern University and the Advanced Internet Research group of the University of Amsterdam, researchers used computing clusters in San Diego and Amsterdam to show that computation doesn't need to be localized within a single data center: it can be migrated across geographical distances while continuing to serve applications and external clients with minimal downtime.
You might be asking yourself—wait, isn’t that just “the cloud”? The cloud has been omnipresent for several years now, so why are research institutions just now looking into it?
Well, they’re not. At least, not anymore: that investigation took place back in 2005, well before the cloud became the fixture of infrastructure conversations it is today. In 2005, there was no significant smartphone penetration, the iPhone was still two years away, and there were no major application platforms. In other words, research and education (R&E) organizations anticipated the need for this kind of seamless virtual machine migration before anyone else did.
We often don’t take time to think about how long it takes for major technology advances to be developed and then reach the average person. Whether you are reading about the latest technology announcement, attending a presentation or seeing a demo at a trade show, the innovation involved has undergone numerous tests and simulations before it gets to that point. While we all assume that R&E organizations are working on the latest and greatest innovations (that’s what they do in a nutshell, right?), what many don’t realize is how far ahead of the curve some of the technologies they’re developing are.
One such pioneering project was the National Science Foundation Network (NSFNET), which was designed to promote advanced research and education networking in the United States. It got off to a relatively modest start in 1986 with connections among five NSF university-based supercomputer centers, mostly for the purposes of load-balancing and file transfers. It quickly grew in size and importance, though, as the industry shifted from solely high-performance computing to high-performance networking. NSFNET was a precursor to many of the education networks that would follow; most notably ESnet, the Energy Sciences Network, which was created as a result of what was learned from NSFNET’s innovations.
In fact, in direct correlation with this series of government-funded computer networking efforts, we’ve seen the modern internet take shape through NSFNET, including the development of the first modern Web browser and the advancement of supercomputing capabilities. As researchers began performing more data-intensive, collaborative academic research, it became clear that commodity internet connections were not adequate. Some of the planned research required greater capacity and flexibility in the network to be possible at all. Thus, research organizations set about improving their networks (and in turn, ours, as those innovations have since made their way into the commercial sector).
A number of technology advancements we rely on today started in R&E circles like the ones seen around NSFNET. One example, the Global Environment for Network Innovations (GENI), was a five-year project that began to look beyond R&E networking and helped bring into focus trends and initiatives that are staples of networking conversations today, like OpenFlow (an open standard for deploying innovative protocols in production networks), NFV (network functions virtualization, an initiative to virtualize the network services currently carried out by physical hardware), and more. GENI advanced virtual distributed computing and storage over high-performance networks, and it made these services available to thousands of students and researchers, who developed a huge array of new applications and services.
But what about the big network innovations in the next few years—what will they be focused on? Let’s look to the R&E organizations and see what they’re working on, and what discussions have taken place recently.
- Data Analytics – Once the challenges of bandwidth capacity are addressed, the question becomes: how do you get the maximum benefits from that bandwidth? There’s a lot of interest in the ability to automatically provision network operating systems and analytics with this new capacity, as the potential value of data analytics is significant, especially in a research setting that requires access to large databases for scientific investigations and research. This potential was demonstrated, for example, in the work Ciena has done with iCAIR at Northwestern University, the Electronic Visualization Laboratory at the University of Illinois at Chicago and the University of Amsterdam, showcasing the advanced measurement and analytics capabilities that could be enabled in a research network by a platform such as Blue Planet Analytics.
- Edge Computing – Edge computing refers to the trend of moving compute processes, applications, data and services away from centralized nodes to the extremities of a network so that these functions (particularly those that are time-sensitive) take place closer to the source of the data, rather than sending the data to a central server (often a cloud server). With edge computing promising reduced response times and lower network resource usage, many of today’s research proposals focus on this topic; there’s a lot of excitement about what moving compute resources to the absolute edge (or to a smaller intermediary node, often called a cloudlet) will enable that was previously impossible.
- Network Architectures – If the optical backhaul systems supporting wireless networking were to change fundamentally, such as in a significant upgrade, network architectures would have to change correspondingly to support them. There is a lot of discussion taking place now about the higher transmission speeds and higher densities such a change would require, and what they would look like in practice. The emergence of 5G wireless, with bandwidth approaching 1,000 times what mobile devices have today, will force much more flexibility into optical backbone networks. Laboratory instruments currently hardwired to local computers will be connected wirelessly to cloud-based virtual resources, and network infrastructures aren’t ready for this yet. Instrumentation will be part of the network, and the network will be the computer; each instrument is an example of one of the “things” in the Internet of Things (IoT).
- IoT Networks – Speaking of IoT, it’s probably not a surprise that IoT is a huge area of research. Technology vendors, national government agencies and countless projects around the world are focused on the impact of IoT. That focus is not so much on the fact that many objects will be connected to the internet (we’ve known this since the term was coined), but rather on how they will connect.
One of the big questions that has popped up for those looking to invest heavily in IoT is: how many IP addresses do you have? The addressing of every “thing” on a network is going to be a significant challenge. Each device will require a power source and a computer just to do the addressing, as well as communications equipment to connect it to the internet. This creates numerous complexities that researchers are trying to iron out ahead of time. For example, what happens when you want to network every stop sign, with a sensor counting every car that goes by? What’s the addressing scheme? How do the wireless sensors work? These questions have no easy answers, and researchers are hard at work figuring out how a “thing-heavy” network should be designed. They are driving a wide spectrum of activities, including advancements in computer science and the development of computers that cost pennies rather than dollars. But even today, computers without networks are like cars without wheels: all souped-up and no place to go.
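To get a feel for the scale of the addressing challenge, here is a small sketch using only Python's standard-library ipaddress module. It compares the IPv4 and IPv6 address spaces; the stop-sign scenario and the choice of a /64 subnet are illustrative assumptions, not a statement about any particular deployment.

```python
# Illustrative sketch: why addressing every "thing" pushes networks
# toward IPv6. The IPv4 space is far too small for trillions of sensors.
import ipaddress

ipv4_total = 2 ** 32    # about 4.3 billion addresses, many of them reserved
ipv6_total = 2 ** 128   # about 3.4e38 addresses

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:.3e}")

# A single IPv6 /64 subnet (the size commonly delegated to one site)
# holds 2^64 addresses on its own, billions of times the entire IPv4
# internet. Every stop-sign sensor in a city would fit comfortably.
# 2001:db8::/64 is the documentation prefix reserved for examples.
city_subnet = ipaddress.ip_network("2001:db8::/64")
print(f"One /64 subnet: {city_subnet.num_addresses:,} addresses")
assert city_subnet.num_addresses == 2 ** 64
assert city_subnet.num_addresses > ipv4_total
```

Of course, raw address count is only part of the problem the researchers above are working on; power budgets, wireless protocols and management of billions of endpoints remain open questions.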
At the core of each of these innovations you can see the same need that R&E organizations addressed with NSFNET: the ability to share and move information. High-performance networking has met high-performance computing in a way that has, without many people realizing it, shaped much of the communication networks we rely on today. As a leading network and strategy company, Ciena has been working with R&E organizations for almost 20 years, and we are excited to continue collaborating with them to help shape new possibilities for the future.