
Outstanding Sessions at 2013 Hot Interconnects Conference

Introduction:

In its 21st year in Silicon Valley, the Hot Interconnects Conference addresses the state of the art and future technologies used in data center networking. The high-performance communications aspects are also of interest to the supercomputing community. The 2013 Hot Interconnects Conference program can be found here.

We regret that we missed the first keynote on August 21st: Scale and Programmability in Google’s Software Defined Data Center WAN by Amin Vahdat of UCSD/Google. Attendees I spoke with were quite impressed. This article summarizes two excellent Hot Interconnects sessions this author did attend:

  1. Overview and Next Steps for the Open Compute Project by John Kenevey, Facebook and the Open Compute Project
  2. Networking as a Service (NaaS) by Tom Anderson, Professor at the University of Washington (Tom received the prestigious IEEE 2013 Kobayashi Award after his talk)

1. Open Compute Project (OCP):

Mr. Kenevey provided an overview of the Open Compute Project (OCP), a thriving consumer-led community dedicated to promoting more openness and a greater focus on scale, efficiency, and sustainability in the development of data center infrastructure technologies. A brief history of the project was presented as well as its vision for the future.

The OCP’s goal is to develop technologies for servers and data centers that are referred to as “open hardware,” because they adhere to the model traditionally associated with open source software projects. Read more here.

The most intriguing new OCP project mentioned was one to develop an open network switch, using silicon photonics from Intel and data center switching silicon from Broadcom. The open network switch was described as “disaggregated,” in that its functionality is distributed among “off the shelf” modules and/or silicon – in this case, from Intel and Broadcom. The Networking project will focus on developing a specification and a reference box for an open, OS-neutral, top-of-rack switch for use in the data center.

Intel was said to have been working on silicon photonics for 10 years. (After the conference, Victor Krutul, Director of External Marketing for Intel’s Silicon Photonics Operation, confirmed this via email: Intel submitted a paper to Nature magazine in Dec 2003 titled “A high-speed silicon optical modulator based on a metal–oxide–semiconductor capacitor.”) Facebook, which builds its own data center switching gear, has been working with Intel for 9 months on silicon photonics, which is NOT yet an announced product.

Intel’s Research web site states: “Intel and Facebook are collaborating on a new disaggregated, rack-scale server architecture that enables independent upgrading of compute, network and storage subsystems that will define the future of mega-datacenter designs for the next decade. The disaggregated rack architecture includes Intel’s new photonic architecture, based on high-bandwidth, 100Gbps Intel® Silicon Photonics Technology, that enables fewer cables, increased bandwidth, farther reach and extreme power efficiency compared to today’s copper based interconnects.” Read the news release here. Intel announced an Open Networking Platform this year, which may also be of interest to readers. It’s described here.


SiPh Addendum from Victor Krutul of Intel:

“Silicon Photonics is a whole new science and many technical challenges had to be solved, many of which other scientists called impossible. For example, the 1 GHz modulator referenced in the Nature paper was deemed impossible because the world record at the time was 20 MHz. BTW, we announced 10G a year later and 40G two years after that. Our esteemed research team has been granted over 200 patents, with 50-10 more in process. As you can see, a lot of heavy lifting.”


The OCP Networking group is also working with Broadcom to get them to contribute their chip design spec as “open source hardware.” The Networking group hopes to have something implementable by year-end 2013, Kenevey said. During the Q&A, John stated that the open network switch technology could be used for optical interconnects as well as storage area networking, in addition to traditional data center switch-to-switch and switch-to-compute-server connectivity.

In response to a question from this author, John said the OCP Networking group’s focus is connectivity within the data center, not interconnecting multiple data centers (which would be a traditional “greenfield” deployment). Currently, the OCP Networking group has no schedule for taking its specifications in this area to an official standards body (like the IEEE or IETF). They think that would be very “resource draining” and hence slow their forward progress.


2. Networking as a Service (NaaS):

a] Overview

Professor Anderson’s main thesis was that there are many current Internet problems that ISPs can’t solve, because they only control a small piece of the Internet. “Today, an ISP offers a service that depends on trustworthiness of every other ISP on the planet,” he said.  A large majority of Internet traffic terminates on a different ISP network and transits several carrier networks on the way there.

Quality of service, resilience against denial-of-service attacks, route control, very high end-to-end reliability, etc. are just a few of the long list of features that customers want from the Internet but can’t have, except at enormous expense. Many Internet problems result in outages, during which the customer has no Internet access (even if the problem is not with their ISP).

The figure below shows that 10% of Internet outages account for 40% of the downtime experienced on the Internet.

A slide that characterizes Internet outages.
Slide courtesy of Tom Anderson, University of Washington

Taking advantage of high-performance network edge packet processing software, ISPs will be able to offer advanced services to both local and remote customers, according to Tom. This is similar to how data center processing and storage are sold to remote users today – as a service. The benefit will be to untie the knot that limits the availability of advanced end-to-end network services. In this new model, each ISP need only promise what it can reliably provide over its own network resources. End-to-end data properties would be achieved by the end-point customer (or end ISP) stitching together services from a sequence of networks along the end-to-end path.
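
To make that model concrete, here is a minimal sketch (hypothetical Python of our own; the structure, field names, and numbers are illustrative, not from Tom’s talk) of the kind of advertisement each ISP might publish, limited strictly to promises it can keep on its own network:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceOffer:
    """One ISP's advertised promise, covering only its own network."""
    isp: str                # ISP making the promise
    ingress_pop: str        # PoP where traffic enters this ISP
    egress_pop: str         # PoP where traffic exits this ISP
    max_latency_ms: float   # worst-case PoP-to-PoP latency it guarantees
    price_per_gb: float     # what the ISP charges for the added service

# Two ISPs, each advertising only its own PoP-to-PoP segment:
offers = [
    ServiceOffer("ISP-A", "SEA", "CHI", max_latency_ms=25.0, price_per_gb=0.020),
    ServiceOffer("ISP-B", "CHI", "NYC", max_latency_ms=18.0, price_per_gb=0.015),
]
```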

b] Drilling down into NaaS motivation, architecture and functionality:

1. Motivation: ISPs are dependent on other ISPs to deliver the majority of Internet traffic to remote destinations:

  • To coordinate application of route updates
  • To not misconfigure routers
  • To not hijack prefixes
  • To squelch DDoS attacks

2. Several problems with the Internet often result in outages and poor performance. Diagnosis is complicated by ISPs’ lack of visibility into the entire WAN (i.e., the end-to-end path). Internet problems are mainly due to:

  • Pathological routing policies
  • Route convergence delays
  • Misconfigured ISPs
  • Prefix hijacking
  • Malicious route injection
  • Router software and firmware bugs
  • Distributed denial of service

Yet, there are known technical solutions to all of these issues! A trustworthy network requires fixes to all of the above, according to Professor Anderson.

3. NaaS as a Solution:

NaaS (Network as a Service) is a way of constructing a network in which ISPs promise only what they can directly deliver through their own networks. This has the potential to provide much better security, reliability, and worst-case performance, and (unlike today’s Internet) to be adopted incrementally.

Value added services (e.g. multicast, content-centric networking) might also be offered at lower costs under NaaS.

4. In the NaaS scenario, either the destination enterprise customer or the end ISP would (see the sketch after this list):

  • Stitch together end-to-end paths from the hops along the way
  • Based on resources advertised by each ISP
  • Portions of the path may use the plain old Internet
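
As a rough illustration of the stitching step (again a hypothetical sketch of our own; the talk did not present code), the end customer or end ISP could run a simple graph search over the advertised PoP-to-PoP segments, using a plain-old-Internet hop as a best-effort fallback where no advertised offer exists:

```python
from collections import deque

# (ingress_pop, egress_pop, carrier) segments advertised by individual ISPs
OFFERS = [
    ("SEA", "CHI", "ISP-A"),
    ("CHI", "NYC", "ISP-B"),
    ("CHI", "ATL", "ISP-C"),
]
FALLBACK = [("ATL", "MIA", "internet")]  # plain old Internet, best effort only

def stitch_path(src: str, dst: str) -> list[tuple[str, str, str]] | None:
    """Breadth-first search for a chain of segments from src to dst."""
    segments = OFFERS + FALLBACK
    queue = deque([(src, [])])
    visited = {src}
    while queue:
        pop, path = queue.popleft()
        if pop == dst:
            return path
        for ingress, egress, carrier in segments:
            if ingress == pop and egress not in visited:
                visited.add(egress)
                queue.append((egress, path + [(ingress, egress, carrier)]))
    return None  # no end-to-end path, even with the best-effort fallback

print(stitch_path("SEA", "MIA"))
# [('SEA', 'CHI', 'ISP-A'), ('CHI', 'ATL', 'ISP-C'), ('ATL', 'MIA', 'internet')]
```

In practice the search would weigh the advertised latency, price, and other properties of each segment rather than just connectivity.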

5. Why now for NaaS?

  • Distributing topology updates on a global scale is now practical. There is no longer an engineering need to do localized topology management.
  • In addition, high-performance packet processing at the network edge (10 Gbps per core with minimum-sized packets) is now possible, which makes the NaaS scheme realizable today (a quick sanity check follows this list).
  • Finally, “ISPs have made considerable progress at improving the reliability of their own internal operations, which is often two orders of magnitude more reliable than the global Internet,” according to Tom.
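
The 10 Gbps-per-core figure is easy to sanity-check with back-of-the-envelope arithmetic (our calculation, not from the talk): at the 64-byte Ethernet minimum frame size, plus preamble and inter-frame gap, a 10 Gbps link carries roughly 14.88 million packets per second, i.e. about 67 ns of processing budget per packet on a single core:

```python
# Back-of-the-envelope check of the "10 Gbps per core" edge-processing claim.
LINE_RATE_BPS = 10e9       # 10 Gbps link
MIN_FRAME_BYTES = 64       # minimum Ethernet frame
OVERHEAD_BYTES = 8 + 12    # preamble/SFD + inter-frame gap

bits_on_wire = (MIN_FRAME_BYTES + OVERHEAD_BYTES) * 8   # 672 bits per frame
pps = LINE_RATE_BPS / bits_on_wire
print(f"{pps / 1e6:.2f} Mpps")          # ~14.88 Mpps
print(f"{1e9 / pps:.1f} ns per packet")  # ~67 ns budget per packet per core
```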

6. NaaS Design Principles:

a) Agile and reliable ISPs

  • Flexible deployment of new functionality at the edge (key NaaS principle)

b) Each ISP promises only what it can guarantee through its own network

  • Packet delivery, QoS from PoP (Point of Presence) to PoP

c) Incentives for incremental adoption (please refer to the figure below)

Image depicts Incremental Adoption of Network as a Service.
Image Courtesy of Tom Anderson, University of Washington

  • Each ISP charges for its added services, without waiting for its neighbors to adopt NaaS principles

d) Security through minimal information exposure

  • Simpler protocols result in a smaller attack surface and hence better security threat mitigation

7. Proposed ISP network architecture and functionality:

  • Software processing at the edge, hardware switching in the core
  • Software packet processing: 10 Gbps per core on modern servers (min-sized packets) could be extended to ISP network edge processing
  • Fault-tolerant control plane layer to set up/tear down circuits, install filters, etc. (an illustrative sketch follows this list)
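
As an illustration only (a hypothetical API of our own; the talk did not specify an interface), the edge-facing operations of such a control plane might look something like this, shown here as a toy in-memory stand-in rather than a replicated, fault-tolerant implementation:

```python
class EdgeControlPlane:
    """Toy stand-in for a fault-tolerant ISP control plane layer."""

    def __init__(self) -> None:
        self.circuits: dict[int, tuple[str, str]] = {}
        self.filters: list[str] = []
        self._next_id = 0

    def setup_circuit(self, ingress_pop: str, egress_pop: str) -> int:
        """Reserve a PoP-to-PoP circuit through the hardware-switched core."""
        self._next_id += 1
        self.circuits[self._next_id] = (ingress_pop, egress_pop)
        return self._next_id

    def teardown_circuit(self, circuit_id: int) -> None:
        """Release a previously reserved circuit."""
        self.circuits.pop(circuit_id, None)

    def install_filter(self, match_rule: str) -> None:
        """Install a packet filter (e.g. to drop DDoS traffic) at the software edge."""
        self.filters.append(match_rule)

cp = EdgeControlPlane()
cid = cp.setup_circuit("SEA", "NYC")
cp.install_filter("drop udp any -> 203.0.113.0/24:53")
cp.teardown_circuit(cid)
```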

Closing Comment and Follow Up Interview:

We found it quite refreshing that Professor Anderson didn’t invoke SDN, NFV, NV, or other open networking buzzwords in describing NaaS! Instead, the software networking aspect is simply to do much more transit packet processing at the ISP edge, with hardware switching in the core. That seems like a better model for delivering Internet and WAN services to both business and residential customers.

In a follow-up telephone interview after the conference, Tom disclosed that the NSF is “largely funding NaaS as a research project under its Future Internet Architecture (FIA) Program.” Cisco and Google are also financing this research (as per the title page of Tom’s superb presentation). The FIA program is called NEBULA and described here.

Four faculty members (from the University of Washington and UC Berkeley) along with several students are involved in the NaaS research project. The NaaS team has received “generally positive feedback from other researchers and ISPs,” Tom said during our phone call.

The NaaS research group plans to build and test a working prototype that would look like a small ISP network that encompasses the NaaS functionality described above. That work is expected to be completed and evaluated within two years. After that, the NaaS concept could be extended to inter-domain ISP traffic. We wish him luck and hope NaaS succeeds!



By Alan Weissberger

Alan Weissberger is a renowned researcher in the telecommunications field. Having consulted for telcos, equipment manufacturers, semiconductor companies, large end users, venture capitalists and market research firms, we are fortunate to have his critical eye examining new technologies.

4 replies on “Outstanding Sessions at 2013 Hot Interconnects Conference”

Professor Anderson’s work is fascinating. Having looked through his slide deck, I would have understood only about 1%, if not for your good write-up, Alan. This is a complex topic and one that I am sure the Viodi View readers will find of interest.

A couple of thoughts were triggered as I read through:

The issue of the trustworthiness of the ISPs reminds me of the Call Completion issue, which has continued for a number of years now and seems to be a networking issue with regard to how various actors perform in the middle of a call or call set-up (which sometimes never happens):

http://www.viodi.tv/2012/08/09/call-me-you-might-have-to-call-again-if-i-live-in-rural-america/

I also wonder if some of the independent operators, particularly those that are bound together in statewide and regional networks might be able to participate in Professor Anderson’s research. Given their size and flexibility, they might be able to implement his work sooner than larger players.

NaaS seems to be only one aspect of the NSF FIA NEBULA project. What are the others?
NEBULA’s Principal Investigator is Jonathan Smith of the University of Pennsylvania.
Collaborating institutions are: Cornell University, Massachusetts Institute of Technology, Princeton University, Purdue University, Stanford University, Stevens Institute of Technology, University of California/Berkeley, University of Delaware, University of Illinois/Urbana-Champaign, University of Texas, and the University of Washington. Press release: http://www.upenn.edu/pennnews/news/university-pennsylvania-awarded-75-million-nsf-contribute-nebula-next-internet-architecture

I’ve been intrigued by SiPh for quite some time, but wonder why big companies haven’t announced commercial products yet. Here’s an excellent assessment by Stephen Hardy of Lightwave:
The promise & perils of silicon photonics
http://www.lightwaveonline.com/articles/print/volume-30/issue-2/cover-story/the-promise-and-perils-of-silicon-photonics.html


In another article, Finisar’s Chairman said, “I think there’s been a lot of gross exaggeration about the ‘threat’ of silicon photonics.”
http://optics.org/news/4/3/15


Victor Krutul of Intel: “In 10+ years, Intel has driven many, if not most, of the major research breakthroughs in silicon photonics. You can see evidence of this in the numerous articles about Intel’s work, including a few in Nature, and the awards (Mario Paniccia being named Scientist of the Year by R&D magazine in 2011, for example). It’s only been in the last two years that we’ve turned our attention to productizing the technology. And make no mistake: this is hard work and takes time.”

