
Assessment of Open Networking, Bare Metal Switches, White Boxes, and NFVi

The Vision and Reality of Open Networking:

From 2012 to at least 2016, there was tremendous industry buzz about disaggregation of the network switch, transport equipment, and network appliances. Many industry “experts” and stock market analysts said that purpose-built hardware would be replaced by “open network” software running on commodity “white boxes” and “bare metal switches.”

Please see the sidebar below for definitions.

A new networking software industry, including open network operating systems (NOSs) and open source software of all types, was expected to emerge, creating options and choices in the types of network infrastructure that service providers and enterprise IT customers could assemble.

That really didn’t happen. At least not for enterprises, co-location data centers, 2nd/3rd tier cloud service providers, or most network providers/telcos. According to a recent Dell’Oro Group report, white box switch vendors lost market share in the 100G data center switching market in 2018.

“During 2018, sales of 100 GE Ethernet Switching has underpinned the entire strength of the Data Center market. As users other than the Top 4 US Cloud Providers deploy 100 Gbps, such as Enterprises and smaller Cloud Providers, they are sticking with branded vendors,” said Sameh Boujelbene, Senior Director at Dell’Oro Group. “We expect ongoing strength in 100 GE through 2019. The vendors have certainly priced it attractively compared to 40 GE. We are seeing price levels in 2018 that we expected a year from now,” added Boujelbene.

That’s the opposite of what was expected by the pundits who claimed open networking would usher in a new epoch for data center networking.

Open Networking Buzz Words, courtesy of Aptira

Disaggregation and open networking did occur, but almost all that activity was confined to the hyper-scale cloud providers (e.g. Facebook, Microsoft, Google, etc.) and large network service providers/ telcos (e.g. AT&T, Deutsche Telekom, Telefonica, etc.).

[Note that Amazon AWS cloud data center equipment and interconnects are a deep secret.  All Amazon IT equipment, software high-level design, and functional requirements are done by Amazon engineers.  We don’t know if the world’s #1 cloud computing company uses any open source hardware or software].

Two IHS Markit surveys (2017 and 2018) each revealed that the biggest challenges for Open Compute Project (OCP) hardware and software adoption were the lack of OCP vendor technical support and the difficulty of managing equipment from multiple OCP vendors.


Mission of Linux Foundation and Open Networking Foundation (ONF):

The Linux Foundation has become the undisputed hub of open source networking software, hosting nearly two dozen industry-leading open source networking projects.

There’s also the operator led Open Networking Foundation (ONF), which has greatly expanded its scope in the past few years.  ONF now serves as “the umbrella for a number of projects building solutions by leveraging network disaggregation, white box economics, open source software and software defined standards to revolutionize the carrier industry.”  ONF specifies open hardware in reference designs as well as open source software standards and open source code.

The NFV Conundrum and a new NFVi Consortium:

Around the same time, there was tremendous hype focused on Network Functions Virtualization (NFV), which would replace individual network equipment with software-based network appliances (e.g. application delivery controller, session border controller, firewall, gateways, etc.) running on generic compute servers. Those network appliances were to be managed, scheduled, and orchestrated by an NFV Management and Network Orchestration (MANO) entity that would likely be supplied by a 3rd party.

NFV is still struggling to attract critical mass among service providers after many false starts. We’ve said from the start that NFV would fail because there were no interoperability standards or APIs, and no compatibility with the installed base of hardware network appliances.

Last week at the Open Networking Summit, ten carriers said they’ll work together to simplify network functions virtualization infrastructure (NFVi).  The group, which is called Common NFVi Telco Task Force, is comprised of AT&T, Bell Canada, China Mobile, Deutsche Telekom, Reliance Jio, Orange, SK Telecom, Telstra, Verizon, and Vodafone.

Currently, there are too many types of NFVi floating around, which means virtual network function (VNF) vendors need to create multiple versions of their VNFs to work with the different flavors of NFVi. Again, that’s because there are no implementable standards. The Common NFVi Telco Task Force will endeavor to reduce the number of NFVi implementations down to three or four versions, according to AT&T Vice President of Network Cloud Amy Wheelus.

Why hasn’t Open Networking become more pervasive?

The biggest obstacles in deploying open source and specialized software for open networking are systems integration and lack of adequate vendor tech support.

The idea of buying switch hardware from one company, a NOS from another, and then software features – such as monitoring, management, security, etc. – from other software vendors frightens many IT and telco business leaders and operational groups. The main concern is whether they have the skills to build such a network infrastructure and, most importantly, to operationalize it at scale. Large enterprise IT departments have not invested in the skill sets to build and manage this type of infrastructure.

Think about it for a moment.  The IT department/ network engineers responsible for data centers and/or central offices must work with bare metal switches, white box compute and storage servers, open transport equipment, network operating system (for white box or bare metal switches), and management/control software, which are very likely all from different vendors.  There may also be applications and other software purchased from independent software vendors (ISVs).

This is a HUGE disruptive change for the enterprise IT department that is normally working with only one vendor (e.g. Cisco, HP, Dell, etc.). The same is true for telco network engineers who maintain purpose-built, proprietary network equipment in their central offices (COs) and remote terminal locations.

The crucial problem with open networking (open source software running on open hardware) is this:

How do you integrate disaggregated functions from multiple vendors, and resolve finger-pointing when a failure or problem is detected?

Unless better support becomes available, the whole concept of open networking with open source and proprietary software running on white boxes/bare metal switches will fail.  Currently, it appears that ONLY the hyper-scale cloud companies and large telcos (participating in the OCP and TIP) have the newly trained IT departments and “new age” network engineers to do the required systems integration and tech support.

One Network Software Company’s Solution:

Alex Saroyan, CEO of XCLOUD Networks, believes his company can provide a solution to the systems integration and tech support challenge noted above.

Alex wrote in an email to this author:

“Our customers build a network from various building blocks including: OCP white-box switches (various ODM vendors), either Cumulus Linux or Ubuntu Linux with SwitchDev driver, FRR (an open-source routing protocol suite), DPDK-based virtualized network functions running on a commodity servers and XCloud on top for automation, management and virtualization.

When it comes to support – XCloud Technical Assistance Center (TAC) is responsible for support throughout the whole stack. We believe that such comprehensive support is the key to provide sustainable product to our customers. Of course, it takes special validation and intensive testing of provided functionality so we at XCloud can be confident that whatever our customer configured from provided dashboard will be fully operational throughout the supported stack of open-source software and open-networking hardware.”
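To make the stack Alex describes more concrete: FRR, the open-source routing suite he mentions, is driven by a Cisco-style text configuration. The sketch below is purely illustrative (the ASN, interface names, and file path are hypothetical, not taken from XCloud's actual deployments); it shows a minimal BGP-unnumbered leaf-switch config of the kind commonly used on white box switches running Cumulus Linux or SwitchDev.

```
! /etc/frr/frr.conf -- hypothetical leaf switch config (illustrative only)
router bgp 65101
 ! BGP unnumbered: peer over the interface itself, no explicit neighbor IPs
 neighbor swp1 interface remote-as external
 neighbor swp2 interface remote-as external
 address-family ipv4 unicast
  ! advertise locally attached server subnets to the fabric
  redistribute connected
```

The appeal of this style is that the same few lines work on any conforming hardware, which is exactly the hardware/software disaggregation the article describes.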

More about XCloud in the second part of this article to be published soon.  Stay tuned……

Sidebar on Definitions:

White boxes have come to mean any type of IT or network equipment built by Original Design Manufacturers (ODMs) such as Accton, Delta Networks, EdgeCore Networks, Foxconn, and Quanta Cloud Technology. Those ODMs build the white boxes from open hardware specifications (e.g. from OCP, TIP, O-RAN, ONF, and other consortia). The ODMs that supply hardware to AWS do so using Amazon-generated specs.

Most “white box switches” employ an “open” Linux-based Network Operating System (NOS) that is disaggregated, or abstracted, from the underlying network hardware. Hardware-software disaggregation lets the user swap out either the hardware or the NOS at any time, since the two are not tied to one another as they are in legacy switches from major vendors (Cisco, Arista, Juniper, Huawei, Nokia/Alcatel, etc.) that install their own proprietary NOSs.

Bare Metal Switches, on the other hand, don’t come with a NOS. It’s up to the customer or a systems integrator to pick and install a NOS as well as management software (these are usually two separate pieces of code). For example, a customer could use Cumulus’ NOS or open source SONiC (contributed by Microsoft to the OCP) along with proprietary or open source software for network management, configuration, and network virtualization functions (more about that in a follow-up article). Edge-Core Networks, Delta, and Stordis are a few bare metal switch makers, all of which conform to OCP hardware specifications.

Most bare metal switches come with a boot loader called Open Network Install Environment (ONIE). Using ONIE, customers can load a network operating system onto the switch.
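As a rough illustration of how that works in practice (the image URL and filename below are hypothetical), an operator boots the switch into ONIE and points it at a NOS installer image; ONIE also supports automatic discovery of the installer via DHCP, so the manual step can be skipped in large deployments:

```
# From the ONIE console on a bare metal switch (URL/path are hypothetical)
ONIE:/ # onie-nos-install http://deploy.example.com/images/sonic-broadcom.bin
# ONIE fetches the installer over the network, runs it to write the NOS
# to the switch's storage, then reboots into the newly installed NOS.
```

After the reboot, the box behaves like any other switch running that NOS; reinstalling a different NOS later is the same procedure again.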



By Alan Weissberger

Alan Weissberger is a renowned researcher in the telecommunications field. Having consulted for telcos, equipment manufacturers, semiconductor companies, large end users, venture capitalists and market research firms, we are fortunate to have his critical eye examining new technologies.

6 replies on “Assessment of Open Networking, Bare Metal Switches, White Boxes, and NFVi”

Thanks for publishing this great insight in the Viodi View, Alan. You make a good point that just because something is possible (e.g. homebrew network components), that doesn’t mean it makes sense. I suppose it’s like the average Joe taking an Arduino and some other off-the-shelf parts and software to create a cool gizmo. Except for the hobbyist or someone who wants to create something that hasn’t been created, 99% of people would rather focus on other parts of their life. As you point out, it is similar for operators. Unless they are big enough to absorb the overhead of integration, it still makes sense to engage entities that can integrate the various blocks.

Looking forward to part 2.

Great explanation of the insight, Alan. We had these points in mind when XCloud was at the idea stage, and now we are happy to see our customers operating live networks with the described disaggregated stack, with a simplicity never available before, even with well-known conventional solutions.

Looking forward to part 2.

Thanks for your kind words Ken and Alex. Several Twitter followers asked me how XCLOUD will solve the systems integration and tech support problems and I’m waiting for a detailed explanation of that along with an equipment block diagram before starting on part 2.

NOTE that I didn’t mention SDN, as it has morphed into a thousand (or more) different versions of user software control of network equipment. Each hyper-scale cloud provider and each large network operator has its own specs for SDN, which are incompatible with anyone else’s. Also, SD-WANs have replaced SDN WANs (the original concept of SDN, with a centralized SDN Controller for source routing, an OpenFlow API between the control and data planes, and packet forwarding engines in the data plane network). SD-WANs provide user control at the Application layer; they are not standardized (no standard UNI, NNI, or even APIs), with not even an agreed-upon definition (despite MEF efforts). Hence we don’t include SDN or SD-WANs as open networking, since they are all proprietary (cloud/network operator specified and/or vendor specific), which is the OPPOSITE of OPEN NETWORKING, which implies NO VENDOR LOCK-IN!


The Linux Foundation hosts 9 of the 10 largest open source networking projects, and it recently launched LF Edge to address the growing momentum for edge deployment and the need for an open, interoperable development framework. This was also communicated at ONS by representatives of organizations such as 3GPP, ETSI, OCP, TIP, MEF, TM Forum, and the Linux Foundation, who came together to discuss collaboration efforts to accelerate networking innovation.
