
Illustrations of Network Virtualization from VMware, ETSI NFV, and Intel Reference Designs

Introduction

Our last article described VMware’s view of network virtualization, defined the term, and summarized its advantages, noting the differing opinions on which advantage matters most. That article promised a follow-on piece illustrating various network virtualization configurations. We attempt to do that here by describing figures from VMware, the ETSI NFV ISG, and Intel’s SDN/NFV reference designs.

Network Virtualization Illustrations

1. VMware

The figure below (from VMware) shows a Leaf/Spine L3 fabric deployment in support of network virtualization. The fabric connects compute cabinets housing the Hypervisors, infrastructure cabinets with controller nodes, and edge cabinets that interface to the outside world (the Internet, private lines, IP VPNs, Carrier Ethernet, etc.).

[Figure: Leaf/Spine L3 fabric architecture for network virtualization. Image courtesy of VMware.]

Because the virtual network is decoupled from the Data Center switch fabric, the fabric can be built without the virtual network complicating or restricting its design. VMware believes that the most scalable, robust, and cost-effective architecture (to date) for such a Data Center switch fabric is the Layer 3 (L3, IP network layer) Leaf/Spine design shown above.

Such an L3 Leaf/Spine fabric is constructed using standard IP routing protocols (e.g. OSPF, IS-IS, BGP) between the Leaf and Spine switches. The fabric can be built from commonly available IP networking equipment, such as L3 switches that support IP forwarding and 1/10/40G Ethernet MAC framing.
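
To make this concrete, here is a minimal sketch in Python of how such a fabric’s point-to-point links might be enumerated and addressed: one /31 subnet per leaf-spine link, with eBGP peering across each. The device names, private ASNs, and the 10.0.0.0/24 link pool are all hypothetical, for illustration only.

```python
# Minimal sketch: enumerate the leaf-to-spine links of a small L3 fabric
# and emit an eBGP neighbor statement for each end of every /31 link.
# Device names, ASNs, and addresses are invented for illustration.
import ipaddress

SPINES = ["spine1", "spine2"]
LEAVES = ["leaf1", "leaf2", "leaf3", "leaf4"]
p2p_pool = ipaddress.ip_network("10.0.0.0/24").subnets(new_prefix=31)

for leaf_idx, leaf in enumerate(LEAVES):
    for spine_idx, spine in enumerate(SPINES):
        link = next(p2p_pool)                 # one /31 per leaf-spine link
        leaf_ip, spine_ip = link[0], link[1]  # the two usable /31 addresses
        print(f"{leaf}:  neighbor {spine_ip} remote-as {65000 + spine_idx}")
        print(f"{spine}: neighbor {leaf_ip} remote-as {65100 + leaf_idx}")
```

Every leaf peers with every spine, so losing one spine (or one link) simply leaves the remaining equal-cost paths carrying the traffic.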

A Leaf switch is connected to all Spine switches to provide multiple high-bandwidth paths to any other rack. The Leaf switch selects a path for each new flow between any pair of Virtual Machines (VMs); this is done in hardware at line rate (e.g. 1/10/40 Gb/sec). This path selection is referred to as Equal Cost Multi-Path (ECMP) and is supported by any standard, commonly available L3 switch. The selected Spine switch receives the traffic from the Leaf and forwards it to the destination Leaf based on IP routing (looking at the destination IP address in the tunnel headers).
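
A toy sketch of the idea (not any vendor’s actual hash function): hash each flow’s 5-tuple and use the result to pick one of the equal-cost spine uplinks, so all packets of a given flow follow one path (avoiding reordering) while different flows spread across the fabric.

```python
# Illustrative ECMP next-hop selection, as a leaf switch does in hardware.
# Real switches use vendor-specific hash functions; CRC32 is a stand-in.
import zlib

UPLINKS = ["spine1", "spine2", "spine3", "spine4"]  # equal-cost paths

def ecmp_select(src_ip, dst_ip, proto, src_port, dst_port):
    """Return the uplink carrying every packet of this 5-tuple flow."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return UPLINKS[zlib.crc32(key) % len(UPLINKS)]

# Deterministic: the same flow always maps to the same uplink.
print(ecmp_select("10.1.1.5", "10.2.2.9", "tcp", 49152, 443))
```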

The Hypervisor nodes, each running a programmable vswitch, attach to the Leaf switch like any standard server, via a Network Interface Card (NIC) that has an IP address. That NIC IP address is used to dynamically build tunnels to other Hypervisors and Gateway nodes, and the NSX Controller reprograms these tunnels as the environment changes.
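
A rough sketch of the state involved: a table, programmed by the controller, that maps (virtual segment, destination VM MAC) to the remote hypervisor’s NIC IP; the vswitch consults it for each forwarded frame and encapsulates accordingly. The VNIs, MAC addresses, and IPs below are invented, and the VXLAN-style notation is shorthand rather than NSX’s actual wire format.

```python
# Hypothetical controller-programmed tunnel table on one hypervisor:
# (segment VNI, destination VM MAC) -> remote hypervisor NIC (VTEP) IP.
tunnel_table = {
    (5001, "00:50:56:aa:01:02"): "10.0.1.11",
    (5001, "00:50:56:aa:03:04"): "10.0.2.12",
}

def forward(vni, dst_mac, inner_frame):
    remote = tunnel_table.get((vni, dst_mac))
    if remote is None:
        return "unknown destination: flood or ask the controller"
    # The fabric routes only on the outer IP header; it never sees
    # the inner VM addresses.
    return f"IP(dst={remote}) / UDP / VXLAN(vni={vni}) / {inner_frame}"

print(forward(5001, "00:50:56:aa:01:02", "inner Ethernet frame"))
```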

Note that there is no special protocol between the Hypervisor and the Leaf switch, just IP/Ethernet frames. Only if NIC bonding (multiple Ethernet PHYs) were used on the Hypervisors would the Ethernet Link Aggregation Control Protocol (LACP) be required.


2. ETSI Network Function Virtualization (NFV) Industry Specification Group (ISG)

The ETSI NFV ISG’s charter is to issue recommendations that serve as input to existing Standards Development Organizations (SDOs), like the ITU-T, along with industry forums like the ONF.

The figure below illustrates the NFV reference architecture. The diagram serves as a starting point for the NFV Architecture Working Group, but has not yet been finalized. NFV is broken into broad functional domains, including the Applications Domain (where Network Functions reside) and the underlying framework, consisting of the Hypervisor, Compute, Infrastructure Network, and Management and Orchestration domains. The NFV architecture is explicitly defined to be complementary to SDN. However, recognizing the early stage of the SDN life-cycle, it is desirable to realize the benefits of NFV on existing network architectures.

[Figure: NFV reference architecture. Image courtesy of the Open Networking Foundation.]

3. Intel Reference Designs

Intel recently introduced three reference designs targeted at SDN/NFV implementations for both the control and data planes.

The Intel® Open Network Platform Switch Reference Design
Codenamed “Seacliff Trail,” the Intel® Open Network Platform (ONP) Switch Reference Design is based on scalable Intel processors, the Intel® Ethernet Switch 6700 series, and the Intel® Communications Chipset 89xx series, and is available now. The ONP Switch Reference Design includes Wind River Open Network Software (ONS), an open, fully customizable network switching software stack built on Wind River Linux. Wind River ONS provides key networking capabilities such as advanced tunneling, along with a modular, open control plane and a management interface supporting SDN standards such as OpenFlow and Open vSwitch. Common, open programming interfaces allow automated network management, and coordination between the server switching elements and network switches enables more cost-effective, secure, efficient, and extensible services.
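
As a small taste of that programmability, the sketch below drives the standard Open vSwitch command-line tools from Python to create a bridge and install an OpenFlow rule. The ovs-vsctl and ovs-ofctl commands are the stock utilities (they require Open vSwitch to be installed and root privileges); the bridge name and port numbers are made up.

```python
# Sketch: program an Open vSwitch bridge via its standard CLI tools.
# Assumes Open vSwitch is installed and this runs with root privileges.
import subprocess

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

sh("ovs-vsctl add-br br0")                               # create bridge br0
sh("ovs-ofctl add-flow br0 in_port=1,actions=output:2")  # port 1 -> port 2
sh("ovs-ofctl dump-flows br0")                           # verify the flow
```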

The Intel® Data Plane Development Kit (Intel® DPDK) Accelerated Open vSwitch
Network architectures have traditionally been optimized for large-packet throughput to meet the needs of enterprise end-point applications. Intel is executing a project aimed at improving the small-packet throughput and workload performance achievable on Open vSwitch using the Intel DPDK. Specifically, Intel is re-creating the kernel forwarding module (data plane) to take advantage of the Intel® DPDK library. The Intel® DPDK Accelerated Open vSwitch is planned for initial release with the Intel® ONP Server Reference Design in the third quarter of this year.
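
Why small packets are the hard case: at a fixed link speed, minimum-size frames maximize the packet rate a vswitch must sustain. A back-of-the-envelope calculation for 10 Gb/s Ethernet, counting the 8-byte preamble and 12-byte inter-frame gap on every frame:

```python
# Packets per second at 10 Gb/s line rate for several frame sizes.
LINK_BPS = 10e9
WIRE_OVERHEAD = 20  # bytes per frame: 8 preamble + 12 inter-frame gap

for frame_bytes in (64, 512, 1500):
    pps = LINK_BPS / ((frame_bytes + WIRE_OVERHEAD) * 8)
    print(f"{frame_bytes:>5}-byte frames: {pps / 1e6:6.2f} Mpps")
# 64-byte frames -> ~14.88 Mpps, roughly 67 ns per packet, which is why
# DPDK bypasses the kernel with polling, hugepages, and batched I/O.
```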

The Intel® Open Network Platform Server Reference Design

Image courtesy of Intel

This server reference platform, codenamed “Sunrise Trail,” is based on the Intel® Xeon® processor, the Intel 82599 Ethernet Controller, and the Intel Communications Chipset 89xx series. The ONP Server Reference Design enables virtual appliance workloads on standard Intel architecture servers using SDN and NFV open standards for the data center and telecom. Wind River Open Network Software includes an Intel DPDK Accelerated Open vSwitch, fast packet acceleration, and deep packet inspection capabilities, as well as support for open SDN standards such as OpenFlow, Open vSwitch, and OpenStack. The project is in development now; the first alpha is slated to be available in the second half of 2013.

Intel does NOT distinguish between SDN and NFV in its whitepaper, which describes how to use the three reference designs. Intel’s SDN/NFV architecture consists of four layers: orchestration, network applications, network controller, and node, as shown in the accompanying figure.

These layers have been proposed by Intel and have not yet been accepted (or even submitted as a contribution) by the ETSI NFV ISG or the ONF, which is standardizing OpenFlow-based SDN. Further details are in the Intel whitepaper referenced above.


Conclusions

It should be quite clear that VMware’s network virtualization is an actual implementation, while the ETSI NFV and Intel illustrations of network virtualization are high-level architectural diagrams, i.e. still at the concept stage. That is what one would expect at this very early point in the process of standardizing network virtualization, with no solid specifications likely for at least one or two years.

Please read the comments underneath my last article for different opinions, perspectives and links to relevant ETSI NFV specification work.



By Alan Weissberger

Alan Weissberger is a renowned researcher in the telecommunications field. Having consulted for telcos, equipment manufacturers, semiconductor companies, large end users, venture capitalists and market research firms, we are fortunate to have his critical eye examining new technologies.

8 replies on “Illustrations of Network Virtualization from VMware, ETSI NFV, and Intel Reference Designs”

Thanks for this post, which clarifies your previous article on VMware’s Network Virtualization software. Are there any other alternatives to the L3 Leaf-Spine fabric for the Data Center physical network? How about IEEE 802.1ad (Ethernet) Provider Bridging, which operates at L2-MAC sublayer? Will the VMware NV software work with that and/or other DC Physical network fabrics?

I’ve requested VMware to read & reply to your comment. I don’t know the answer. However, at VMUG-SV on May 1st, VMware’s Milin Desai said that their NV software modules communicate with the Physical Network (switch fabrics) below via L3-ECMP, which implies the Physical Network must be L3 (IP network layer) aware.

In addition to IEEE 802.1ad Provider Bridging, there is also the IETF’s TRILL, various flavors of MPLS-Ethernet, as well as proprietary implementations of DC switching/bridging.

VMware Network Virtualization will work with any DC fabric design. The only requirement is that hypervisors can communicate with IP, which is possible on an L2 or L3 fabric.

Thanks Alan for this continuing series. As I wrote in a comment on one of your other articles, you should write the book on this topic.

This article reinforces your earlier comment that it is still early. It will be interesting to see which of the approaches described above become the “standard”.

To reiterate what I wrote in an earlier article: “The ETSI NFV ISG will NOT produce any standards. Instead, they will focus on whitepapers and contributions to ITU-T SG13 work on Future Networks.” http://viodi.com/2013/04/23/service-provider-sdn-network-virtualization-and-the-etsi-nfv-wg/
ETSI NFV ISG has these WGs: Architecture of Virtualized Infrastructure, Management & Orchestration, Performance & Portability, Reliability & Availability, Security, Software Architecture, Steering Committee. Each of these WGs may be accessed from:
http://portal.etsi.org/nfv/Table_ToRs_WGs.asp and more info at:
http://portal.etsi.org/portal/server.pt/community/NFV/367

Other than the ONF’s OpenFlow, there are no specifications for SDN. And there will be no standards coming from the ETSI NFV ISG, just white papers and contributions to the ITU-T SG13 Future Networks activity.
The article you referenced correctly states: “Discovery, configuration, orchestration, provisioning, performance monitoring and fault isolation in an SDN architecture requires standards, but standards bodies are behind the curve in defining requirements for many of the OSS functions.” It also states: “Because of the issues identified by the ETSI NFV ISG in its NFV whitepaper, the success of open-source solutions in CSPs’ networks is questionable.” Thanks for your comment & the URL for the CSP article!

Light Reading: NFV Picks Up Speed

Network Functions Virtualization (NFV) may become a reality in commercial telecom operator networks sooner than many expected, with the pace of adoption and acceptance taking many from the vendor community by surprise, according to people here in Nice.

http://www.lightreading.com/software-defined-networking/mw13-nfv-picks-up-speed/240154917
