
Highlights of Open Server Summit: Nov 11-13, 2014 in Santa Clara, CA

Executive Summary:

The big box buzz today is “Software Defined Everything” – from compute servers to networks, storage, and data centers. That’s according to the speakers at the 2014 Open Server Summit held last month in Santa Clara, CA. The implication is that much of today’s hardware-based functionality will be implemented as software running on compute servers and on network switch/routers built from commodity hardware.

If indeed that’s the case, the big losers will be the traditional server vendors: HP, Dell, Lenovo (which now owns IBM’s x86 server business), IBM (which still sells POWER8 based servers), Oracle (Sun Micro/SPARC based servers), and Cisco (UCS C-Series Rack Servers). Traditional switch/router vendors like Cisco and Juniper will also sell a lot less of their high margin networking gear.

The winners will be the Chinese/Taiwanese ODMs that make servers and “bare metal switches.” Semiconductor companies making processors and SoCs that will be used for commodity servers and bare metal switches will also fare well. That’s largely Intel (high-end processors for servers), but other semiconductor companies are entering the market – mostly with SoCs based on ARM cores (see section below).  Broadcom seems to be very well positioned with its switch/router silicon for all sorts of network equipment.

There have also been notable advances in enabling hardware technologies: optical connectors, multicore processors, and denser modules. Today, these are built into equipment used in (premises and cloud based) data centers that handle high-performance and mission-critical workloads.

Highlights and Takeaways:

  • Challenges in scaling the Data Center include: dynamic provisioning, migration to Virtual Machines (VMs), dynamic assignment of workloads to VMs, network management, software defined storage & networking (via overlays/virtualization or SDN centralized controller/data forwarding engines).
  • In Microsoft’s Azure public cloud offering, cloud services support hyper-scale workloads alongside enterprise workloads (e.g., Microsoft SQL Server, Microsoft Exchange, etc.). Many of the key concepts driving this infrastructure and the technology within it were developed by Microsoft’s Research group. Variable workloads are inevitable in this kind of environment, which is why scalable infrastructure is so important to providing elastic computing capabilities.
  • Cloud workloads exhibit increasing diversity and scale, according to Microsoft. Software redundancy, with multiple copies of data stored in different machines, is a requirement for such cloud workloads. If a server fails, it’s essential to move the workload to a different server via load balancing. That’s all about software scheduling and data replication in the cloud.
  • Cloud workloads are different from those running on premises based servers/data centers, says Microsoft.  Data is mostly read and processed with only the results stored in main memory or disk. Data retention is only for a few days to one month. A distributed file system is needed for cloud storage.
  • Some of the promising new technologies mentioned were said to meet the demands of cloud resident, software defined data centers:
    • Dis-aggregated building-blocks, tied together by high-speed fabrics and high-speed switches. A great example of that is the work being done by the Open Compute project.
    • Optical (light-driven) connectors linking high-speed processing with high-speed storage. It remains to be seen if silicon photonics will be used to replace optical interconnects.
    • Hyper-converged systems, combining servers, storage and networking for faster performance. [This is a trend, but hasn’t occurred on a large-scale yet.]
    • “Flat networks” linking sections of the compute and storage fabrics within the data center. [That has yet to happen. There are still two networks in the data center- Ethernet for compute servers/routers and Fibre Channel for Storage equipment.]
    • More comprehensive and better software management – for policy enforcement, orchestration and automation.
  • Raejeanne Skillern, General Manager of Intel’s Cloud Service Provider business, spoke about Software Defined Infrastructure and the way it is working to provide virtualized pools of compute, storage and networking resources. This is being done to ensure that data services will scale, as needed, based on user demand for those services. Dynamic orchestration of processing and a high degree of automation are both key enablers of this kind of software-defined infrastructure in next-generation data centers.
  • Intel makes ~$14 billion in annual revenue from high-end processors used in servers and is predicting 15% compounded annual growth for several years to come, according to Barron’s magazine.
  • While Intel holds a dominant share of processors used in servers, ARM Ltd is starting to gain market share. ARM’s “partners” that have licensed its cores for server and networking applications include AMD, Broadcom, Qualcomm, Cavium and Applied Micro.
  • In addition to Intel and ARM based silicon, there is also an effort around IBM’s POWER CPU via the OpenPOWER Foundation. IBM has opened up technology surrounding its Power Architecture, such as processor specifications, firmware and software. It is offering this technology under a liberal license and is using a collaborative development model with its partners in the Foundation. The goal is to enable the server vendor ecosystem to build customized server, networking and storage hardware for future data centers and cloud computing. Processors based on IBM’s IP can now be fabricated at any foundry and mixed with other hardware products of the integrator’s choice.
  • Jian Li, Data Center Architect at Huawei, talked about “Developing the High Throughput Data Center.” He described a High Throughput Computing Data Center (HTC-DC) that supports higher throughput, better resource utilization, greater manageability and more efficient use of power. “Big Data” workloads demand this kind of infrastructure, and that is why software-defined networks (SDNs) are so important in leveraging the resources already inside the data center to achieve workload scalability. The problem is that there are “N” versions of SDN to choose from.
  • Steve Garrison, VP of Marketing for PICA8, talked about “White Box Switches and Integrating Open Flow (the protocol/API used between the Control plane and Data plane in classical SDN).” Steve said that SDN delivers a policy driven framework which drives down operational costs and enables “business logic” to be included in the network. The concept is to tailor the packet header and select what the business needs for each networking application.
  • “SDN should be about driving business logic into the network, so that it doesn’t constrain the business,” Garrison said. “Business logic requires rethinking the (protocol) stack,” he added. Steve believes that SDN use cases that deliver real business benefits will drive revenue starting in 2015.
  • The Data Center Battleground, according to PICA8, is depicted in the figure below. The company provides a network operating system (PicOS) that is loaded onto bare metal switches (often referred to as “white boxes,” though established server vendors like Dell now make them too). Three types of ports are supported by PicOS: conventional L2/L3, Open Flow (policy based traffic flows), and CrossFlow (Open Flow policy rules combined with L2/L3 frame/packet transport). A toy sketch of that policy-plus-L2/L3 pipeline appears after the note below.
[Figure: The Data Center Battleground, from Steve Garrison’s Pica8 presentation at the Open Server Summit. Image courtesy of Pica8.]

Note: PICA8 has headquarters in Palo Alto, CA, but its R&D effort is in Beijing, China. It appears to compete with Cumulus Networks, which also makes a network OS for bare metal switches.
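To make the CrossFlow idea concrete, here is a minimal Python sketch of a switch pipeline that consults prioritized, Open Flow style match/action policy rules first and falls back to conventional L2 forwarding on a table miss. It is a toy illustration only; the rule format, field names and functions are invented for this example and are not PicOS or Open Flow APIs.

    # Toy CrossFlow-style pipeline: policy rules first, ordinary L2 forwarding on a miss.
    # All names are invented for illustration; this is not PicOS or Open Flow code.

    POLICY_RULES = [
        # (priority, match on header fields, action)
        (200, {"ip_dst": "10.1.1.10", "tcp_dst": 80}, "send_to_port_5"),  # steer web traffic
        (100, {"vlan": 999},                          "drop"),            # quarantine a VLAN
    ]

    def classify(packet):
        """Return the action of the highest-priority matching rule, or None on a table miss."""
        for _prio, match, action in sorted(POLICY_RULES, key=lambda r: r[0], reverse=True):
            if all(packet.get(field) == value for field, value in match.items()):
                return action
        return None

    def forward(packet, mac_table):
        """Policy (business logic) path first; ordinary L2 lookup when no rule matches."""
        action = classify(packet)
        if action is not None:
            return action
        return mac_table.get(packet["eth_dst"], "flood")   # flood if the destination MAC is unknown

    if __name__ == "__main__":
        mac_table = {"aa:bb:cc:dd:ee:01": "port_2"}
        pkt = {"eth_dst": "aa:bb:cc:dd:ee:01", "ip_dst": "10.1.1.10", "tcp_dst": 80}
        print(forward(pkt, mac_table))   # matches the priority-200 rule -> send_to_port_5

In a real deployment the policy rules would be pushed down from a centralized controller over Open Flow, while the L2/L3 fallback keeps ordinary traffic flowing when no policy applies.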

  • Ron DiGiuseppe, Sr. Marketing Manager at Synopsys, suggested that network overlays are the best approach to SDN (rather than classical SDN with Open Flow). VxLAN was said to be the most promising L2 tunneling mechanism to move VM traffic to any server within the data center, cloud or large campus network. Such an “overlay network” carries data (MAC frames) from individual VMs in an encapsulated format over a logical tunnel. It effectively extends L2 subnetworks across L3 networks while overcoming the limits of conventional (IEEE 802.1d spanning tree) MAC bridging. The target applications are intra data center communications and cloud networking, where there can be 4M or more VMs. (A minimal sketch of the VxLAN encapsulation appears after the note below.)

Note: Synopsys has developed a 10 Gigabit Ethernet (GE) Controller that incorporates VxLAN tunneling. That and related technologies (40GE MAC, 40GE Physical Coding Sublayer (PCS), and 12GE PHY) are generally licensed as off-the-shelf products. However, Synopsys is willing to work with large volume customers to make custom modifications to those IP cores. Ron said that 10GE implementations with VxLAN are becoming quite popular. Synopsys IP was said to respond to the needs of Data Center SoCs by providing low latency, low power consumption, and advanced protocols/features along with Reliability-Availability-Serviceability (RAS).
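To make the overlay concept concrete, the short Python sketch below builds the 8-byte VxLAN header defined in RFC 7348 and prepends it to an already serialized inner Ethernet frame; in practice the result is carried as the payload of a UDP datagram (destination port 4789) between VxLAN tunnel endpoints (VTEPs). This is only an illustration of the encapsulation format, not Synopsys controller IP or a production VTEP implementation.

    import struct

    VXLAN_UDP_PORT = 4789   # IANA-assigned VxLAN destination port (RFC 7348)

    def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
        """Prepend the 8-byte VxLAN header (RFC 7348) to an inner Ethernet frame.

        Header layout: flags byte (0x08 = 'VNI present'), 3 reserved bytes,
        24-bit VxLAN Network Identifier (VNI), 1 reserved byte.
        """
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI must fit in 24 bits")
        header = struct.pack("!B3xI", 0x08, vni << 8)   # VNI occupies the upper 24 bits of the last word
        return header + inner_frame

    if __name__ == "__main__":
        # Toy inner frame: broadcast destination MAC, made-up source MAC, ARP EtherType, zero payload
        inner = bytes.fromhex("ffffffffffff" "aabbccddee01" "0806") + b"\x00" * 28
        packet = vxlan_encapsulate(inner, vni=5000)
        # A real VTEP would now send this as UDP payload to port 4789 on the remote tunnel endpoint.
        print(packet[:8].hex())   # 0800000000138800 -> flags 0x08, VNI 0x001388 (= 5000)

The 24-bit VNI is what lets an overlay carry roughly 16 million isolated L2 segments over one shared L3 underlay, far beyond the 4,094 VLAN limit of conventional bridging.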

Main Messages:

Another important take-away is that processors built with licensed ARM cores are making inroads in the server market. AMD has until now been strictly an x86 processor shop. Yet in the Server Roadmaps session at this conference, the AMD representative said:

“ARM 64-bit processors will disrupt the server market, primarily through cost and power. The only way to differentiate ARM based SoCs for servers is through accelerators. ARM’s open SoC ecosystem makes it easy to integrate accelerators onto the chip.”

Business models are changing, and ARM based SoCs may take a larger share of the server market, which would diminish Intel’s dominance of that business. That shift, together with the relentless push for Software Defined Everything running on commodity hardware, was the key message of the 2014 Open Server Summit.

By Alan Weissberger

Alan Weissberger is a renowned researcher in the telecommunications field. He has consulted for telcos, equipment manufacturers, semiconductor companies, large end users, venture capitalists and market research firms, and we are fortunate to have his critical eye examining new technologies.

2 replies on “Highlights of Open Server Summit: Nov 11-13, 2014 in Santa Clara, CA”

Excellent job of distilling the conference. There is a lot to take in, but, at a high level, it seems to follow the theme of many of your articles regarding the commoditization of hardware, with the value-add being in the software that can magically morph the hardware into what is needed at a particular time.

Thanks Ken. What I continue to be amazed at is how SDN has bifurcated into two camps, each one thinking that its definition of SDN is the ONLY viable one! PICA8 favors the Centralized Controller/Open Flow approach espoused by the ONF. Synopsys favors the overlay/virtual networking approach, using VxLAN to tunnel a L2 network over a L3 network. Obviously, the two approaches don’t interoperate, and there’s no inter-networking effort I’m aware of between the two disparate SDN schemes.
