Ethernet Tech Summit Reveals Many Paths to "Open SDN"

 

Introduction:

“SDN” and “open networking” were very hot topics at last week’s Ethernet Technology Summit in Santa Clara, CA. You might be wondering what SDN has to do with Ethernet, as Ethernet is not specified in any SDN or open networking standards. The answer is that Ethernet provides the underlying Layers 1 and 2 (PHY and MAC) for SDN in data center and campus/enterprise networks, the two most targeted areas for early SDN deployment. Carrier Ethernet (metro and WAN), along with OTN, will be the underlying transport network infrastructure for “carrier SDN.”

SDN Session Highlights:

Here are key messages from selected SDN-related sessions at this excellent conference:

1. Open-Source Switching:

Roy Chua of SDN Central shared observations and lessons learned from the rollout of open-source switching. The move toward open-source switching and “white box” hardware will bring open-source software and hardware to IP routing and Ethernet switching. The Open Compute Project (OCP) Networking activity is a good example of this. As a result, basic switch designs and software stacks could become available to everyone on a royalty-free basis.

2. Customer-Oriented Approach to Implementing SDN:

Arpit Joshipura of Dell said it was a great year for SDN, with progress on all three architectural models: overlay solutions/network virtualization (e.g. VMware/Nicira), vendor-specific programmable solutions (e.g. Cisco), and “pure SDN” with a centralized controller and the OpenFlow API/protocol (e.g. Big Switch Networks). The graphic below depicts the era of “open computing,” in which any operating system (or hypervisor) runs over an industry-standard architecture for the control plane, with a data plane built from merchant silicon (usually by ODMs in their “white boxes”).

An image depicting the network paradigm shift of Open Networking
Image courtesy of Dell

Dell’s Open Networking model is shown below. It allows any OS to run on Dell’s “Open Networking Switch,” with Broadcom switch silicon as the data plane forwarding engine, which could be a “white box.”

An image showing Dell's Open Networking model and how it allows a choice of OS and Applications.
Image Courtesy of Dell

Going forward, Arpit sees three different SDN mindsets, each with its own version of open networking:

  • Server/hypervisor: build switches like servers to attain Open Networking
  • Vendor-specific networking: proprietary thinking with some degree of user programmability
  • Purist view based on ONF standards (e.g. OpenFlow v1.3) and open-source software (e.g. OpenDaylight). This view requires all-new network equipment and is therefore only applicable to greenfield SDN deployments.

Organizational change and (re)training will be a critical issue for companies that deploy SDN. That’s something this author thinks may take quite a long time. See section 7, “Got SDN root?”, for more on the new skills required to manage and maintain an open SDN.

3. Expansive Openness Is the Key to SDN and NFV:

Marc Cohn of Ciena identified five attributes of Openness:

  • End users are in control
  • Multi-vendor Interoperability (via implementation of open standards/specifications)
  • Unprecedented choice (as a direct result of multi-vendor interoperability)
  • Not controlled by a single vendor (i.e. no vendor lock-in)
  • Vibrant ecosystem

The various layers of an open SDN architecture are depicted in the graphic below.

Ciena's slide regarding openness and SDN.
Image courtesy of Ciena

Looking ahead, Marc sees SDN-related standards, open-source software, and end-user groups all evolving and working together to create a virtuous cycle that will enhance the SDN/NFV ecosystem. We’ll later provide references and our opinion about SDN openness (or the lack thereof).

4. Qualifying SDN/OpenFlow Enabled Networks:

Dean Lee of Ixia did an excellent job of positioning SDN, with OpenFlow as the “Southbound” API/protocol from the Control Plane to the Data Plane, which are assumed to be implemented in different physical equipment/boxes.

There are three definitive SDN features that make SDN unique and different from traditional networking:

  • Separation of the control plane from the data plane
  • A centralized controller and view of the network (note that each domain or subnetwork would have its own SDN controller which must communicate with others for path calculations and topology changes)
  • Programmability of the network by external applications, which is done via the “Northbound” API (from the Application to the Control Plane); a minimal programming sketch follows this list
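
To make the third feature concrete, here is a minimal, hypothetical sketch of an external application programming a flow through a controller’s northbound REST API. The controller address, URL path, and JSON schema below are invented for illustration; real controllers (OpenDaylight, Floodlight, etc.) each define their own northbound interfaces.

    # Hypothetical sketch: an application uses a controller's "Northbound" API
    # to request a forwarding behavior. The endpoint and JSON fields below are
    # made up for illustration, not any particular controller's actual API.
    import requests

    CONTROLLER = "http://sdn-controller.example.com:8181"  # hypothetical address

    flow_rule = {
        "switch": "openflow:1",                  # data-plane switch to program
        "priority": 100,
        "match": {"ipv4_dst": "10.0.0.5/32"},    # traffic to one endpoint
        "action": {"output_port": 3},            # forward out port 3
    }

    # The controller (control plane) is expected to translate this request into
    # southbound messages (e.g. OpenFlow flow-mods) toward the data plane.
    resp = requests.post(f"{CONTROLLER}/flows", json=flow_rule, timeout=5)
    resp.raise_for_status()
    print("Flow accepted by controller:", resp.json())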

Dean included Network Virtualization via an overlay network as part of “SDN.” (Note that VMware/Nicira doesn’t call that SDN and doesn’t implement OpenFlow; they simply refer to their open networking solution as “Network Virtualization.”) In this “SDN/NV” model, the physical network infrastructure is divided into multiple logical networks to support multiple tenants or end users. Connectivity is established across existing L2/L3 networks via a Network Virtualization Controller (such as NSX/VXLAN from VMware or OpenContrail from Juniper Networks).
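
To make the overlay encapsulation idea concrete, here is a minimal sketch of building a VXLAN-style header that carries a tenant’s virtual network identifier (VNI) across the existing L2/L3 underlay. The header layout follows RFC 7348; the VNI value and placeholder frame are invented for illustration.

    import struct

    def vxlan_header(vni: int) -> bytes:
        """Build the 8-byte VXLAN header (RFC 7348).

        Byte 0 carries the flags (the 'I' bit marks a valid VNI), bytes 1-3
        are reserved, bytes 4-6 hold the 24-bit VNI, and byte 7 is reserved.
        """
        assert 0 <= vni < 2**24, "VNI is a 24-bit value"
        flags = 0x08                       # 'I' flag set; other bits reserved
        return struct.pack("!B3xI", flags, vni << 8)

    # Example: a tenant is assigned VNI 5001; its Ethernet frame would be
    # wrapped in VXLAN + UDP + IP and carried over the existing L3 underlay.
    inner_frame = b"...tenant Ethernet frame..."       # placeholder bytes
    encapsulated = vxlan_header(5001) + inner_frame
    print(encapsulated[:8].hex())                      # 0800000000138900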

Dean said that SDN has a lot to offer telecom carriers, including these benefits:

  • Customization (Value add): custom services, collaboration between applications and the network
  • Simple: operation and management → lower OPEX
  • Instant: fast service provisioning (and quicker time to deploy new services)
  • Elastic: flexible evolution of infrastructure

SDN evolution challenges for carriers center on achieving a relatively smooth migration from current legacy networks toward SDN. They include the following:

  • Significant installed base of existing carrier networks
  • Co-existence during migration
  • Evolution versus revolution
  • Reliability and scalability of centralized controllers
      ◦ Centralized controllers expose much higher risk than distributed control planes
      ◦ Fast recovery from data path failures
      ◦ Supporting very large carrier networks
  • Flexibility versus performance
      ◦ Software flexibility and performance rely on hardware capability
      ◦ Finding the correct hardware trade-offs
  • Lack of robust testing methodologies for validating various SDN implementations

5. Real Time Insight Needed for Managing SDN and NFV:

Peter Ekner of Napatech (Denmark) said that the main advantage of SDN for carriers is agile provisioning of services, while for Network Functions Virtualization (NFV) it is flexible deployment of services.

However, agile and flexible provisioning/deployment of services is only possible if the network operator controls the traffic and consumption of those same services. Clearly, that’s not the case today, as it’s the over-the-top video providers that actually generate most of the network traffic, with timing that’s unpredictable. [According to a Cisco study, 50% of all U.S. Internet traffic in 2014 was from Netflix and YouTube. Video will consume 66% of all network traffic in 2018.] As a result, carriers no longer control what services are used and when they are consumed!

A variety of data types and high traffic volumes lead to network complexity, which Peter says can’t be orchestrated by static provisioning or path calculations. He proposes “Real Time Insight” to complement SDN and NFV functionality in a carrier network. Please refer to the figure below:

Napatech's view of the real-time insight needed to react and adapt to changes.
Image courtesy of Napatech

Real Time Insight enables the network to:

  • See What’s Happening as It Happens – collect real-time data
  • Understand Exactly What Occurred – store data for historical analysis
  • Detect When Something New Happens – detect anomalies in real time (a minimal sketch follows this list)
  • Capture Data in Real Time, Store and Detect – optimize services and the network in real time
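
To illustrate the “detect anomalies in real time” point, here is a minimal sketch of a rolling statistical check over a stream of per-interval traffic samples. The window size, threshold, and sample values are invented for illustration; a production probe would operate on line-rate packet capture rather than a Python list.

    from collections import deque
    from statistics import mean, stdev

    def detect_anomalies(samples, window=20, threshold=3.0):
        """Yield (index, value) for samples far outside the recent rolling average.

        'samples' is any iterable of per-interval traffic counts (e.g. Mbps).
        A sample is flagged when it deviates from the mean of the previous
        'window' samples by more than 'threshold' standard deviations.
        """
        history = deque(maxlen=window)
        for i, value in enumerate(samples):
            if len(history) == window:
                mu, sigma = mean(history), stdev(history)
                if sigma > 0 and abs(value - mu) > threshold * sigma:
                    yield i, value                 # anomaly detected
            history.append(value)

    # Example: steady ~100 Mbps with one sudden spike (e.g. an OTT video event).
    traffic = [100 + (i % 5) for i in range(40)] + [450] + [100] * 10
    for index, value in detect_anomalies(traffic):
        print(f"anomaly at interval {index}: {value} Mbps")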

The result will be a much better Quality of Experience (QoE) for users and improved network security. This is depicted in the illustration below:

Napatech's view of assuring QoE and security to enable new services for users and OTTs.

6. SDN Overlays: Possibilities and Implications:

Sharon Barkai of ConteXtream (Israel and the U.S.) identified several problems with SDN performance, especially in scaling up to deal with increased data/video traffic to or from many users. Sharon claims that, as currently defined, SDN is “unstructured” and can have serious “scale consistency” issues, especially for Tier 1 carriers.

A large network operator (such as AT&T, Verizon, BT, DT, Orange, NTT Com, etc.) has to serve millions of customers, and those customers now demand services delivered to multiple endpoints. With the number of subscriber endpoints exploding, a carrier-grade SDN infrastructure needs to cope with millions of SDN rules for path computation and packet forwarding. This translates into huge capacity requirements for SDN controllers and switches, which must handle complex rules and flow commands.

Mr. Barkai said that these performance problems could be solved using Network Virtualization Overlays (NVOs). (Note that this is a completely different concept from VMware’s Network Virtualization, which doesn’t use SDN/OpenFlow anywhere.) In this model, NVOs would co-exist with SDN operating at the network edge and with NFV functionality within a carrier network. Communications between those three entities (NVO, SDN, NFV) would be via the exchange of Flow Mapping tables and associated primitives/protocols. This is shown in the figure below:

ConteXtream's solution where overlays complete virtualization.

Adding NVO “standards” to SDN starts with use of the IETF Locator/ID Separation Protocol (LISP, RFC 6830), according to Sharon. Mr. Barkai said the following rules should be applied to this network overlay/SDN OpenFlow/NFV hybrid architecture (a minimal map-and-encapsulate sketch follows the list):

  • SDN OpenFlow should not cross routing locations
  • SDN flows cross locations by “Map and Encapsulate”
  • Distribution is based purely on the underlay (the real physical) network and the mapping
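
To make the “map and encapsulate” rule concrete, here is a minimal, hypothetical sketch in the spirit of LISP (RFC 6830): a mapping table resolves an endpoint identifier (EID) to a routing locator (RLOC), and traffic that crosses locations is tunneled between locators instead of carrying per-flow SDN rules across the core. The addresses and mapping contents are invented for illustration.

    # Hypothetical EID-to-RLOC mapping: endpoints (e.g. subscribers) map to the
    # routing locations (edge nodes) where they currently attach.
    EID_TO_RLOC = {
        "10.1.0.7":  "192.0.2.1",    # subscriber endpoint at edge node A
        "10.2.0.42": "192.0.2.2",    # subscriber endpoint at edge node B
    }

    def forward(src_eid: str, dst_eid: str, payload: bytes):
        """Decide how to forward a packet between two endpoints.

        If both endpoints map to the same locator, forward natively inside that
        location (where fine-grained SDN/OpenFlow rules may apply). Otherwise,
        "map and encapsulate": the outer header uses locators only, so the
        underlay never needs per-flow state for traffic crossing locations.
        """
        src_rloc, dst_rloc = EID_TO_RLOC[src_eid], EID_TO_RLOC[dst_eid]
        if src_rloc == dst_rloc:
            return ("native", dst_eid, payload)
        outer = {"outer_src": src_rloc, "outer_dst": dst_rloc}
        inner = {"inner_src": src_eid, "inner_dst": dst_eid}
        return ("encapsulated", outer, inner, payload)

    print(forward("10.1.0.7", "10.2.0.42", b"hello"))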

The claim was that, with such a distributed networking fabric and overlays, network operators could deliver a variety of network services to a large number of subscribers. A collapsed packet core, a managed network service, and distributed packet core backhaul were cited as proof-of-concept use cases.

7. Got SDN root? Claim your seat at the new SDN table:

Patrick Hubbard of SolarWinds called attention to the critical need for “hands-on” network management and control. He believes that, with a centralized SDN controller and separate control/data planes, increased troubleshooting complexity will still require “old-school” networking experience. Yet the success of any SDN deployment will also require new, forward-looking skills for IT networking personnel. In particular, new SDN training and certifications will be needed.

Will “old school” IT departments engage in such training and certification? How long might that take? Or is it too late to teach old dogs new tricks?

Is SDN Really Open?

In contrast to Marc Cohn’s “Expansive Openness” talk and Arpit Joshipura’s customer-oriented keynote, which identified only three paths to SDN (pure OpenFlow with a centralized controller, overlay/network virtualization in the compute server, and proprietary SDN models), we now have:

50 Shades Of Open SDN 

[Thanks to Dan Pitt, PhD and Executive Director of the ONF for notifying me of the above article]

Professor Raj Jain, PhD (a multi-decade colleague of this author) read the above article and wrote in an email:

“The article is more about “Open” than about “SDN.” Right now “Open” sells, and so everything is labeled “open.” But like an open window, the degree of openness varies. The article pointed this out very well with specific examples.

Any idea that is widely adopted will be reshaped to meet the variety of needs of its wide audience, and often it may look very different from the original idea. SDN is now undergoing that transformation. The wider the applicability, the more the “shades.” So while this is confusing now, it goes in favor of SDN that it is being adopted in all these varieties.”

Personal Perspective:

This author believes the term “Open SDN” is an oxymoron, primarily because of the lack of a complete suite of standard protocols/APIs and interfaces.

First, there is the uneven acceptance of OpenFlow as the Southbound API (it’s just one of several alternative protocols between the Control Plane entity and the Data Plane/bare-metal switches/white boxes). Many “SDN” vendors have not implemented any version of OpenFlow at all. For those that have, there are often vendor-specific (i.e. proprietary) extensions to OpenFlow v1.3.

In addition, each SDN vendor must choose among many possible protocols for the Northbound API (e.g. OpenStack, CloudStack, etc.) for orchestration/management of the SDN controller below (even if that controller is implemented as a software module within the same physical compute server).

Also, the East-West protocol between SDN controllers in different networking domains (i.e. SDN controller to SDN controller) has not been standardized by the ONF, and that work hasn’t started yet. “Use BGP” is the ONF’s recommendation at this time for inter-domain communications between SDN controllers.

Finally, there are no standards for control, management, monitoring, fault detection, etc. of the underlying fiber optic transport network. Those functions were to come from the ONF Optical Transport Working Group, whose charter states: “In close collaboration with the Framework and Architecture working group, the OTWG will develop a reference architecture and framework document describing terminology, modeling of optical transport switches and networks, and utilization of virtualization and network abstraction in optical transport networks.” Yet we haven’t seen any outputs from that ONF activity.

The lack of a complete set of standards defeats a key point of openness: no vendor lock-in! When I asked three SDN vendors about the lack of multi-vendor interoperability at the Cloud Innovation Forum in March, only Arpit had the courage to reply. He said, “We (the vendors) are working on SDN controller interoperability, and it will come later this year.” Does anyone seriously believe that?

It should also be recognized that the ETSI NFV activity is not producing any open interfaces, protocols, or APIs that can be implemented. It is only specifying functionality for NFV logical entities. The actual NFV standards will come later (???) from the ITU-T as “recommendations.”

Yet so many vendors say they are now “NFV compliant.” How can you be compliant if there are no implementable specifications for physical interfaces or protocols to be compliant with?

Bottom line:  We believe that almost every type of “SDN” is in reality vendor specific! “SDN” and “Open Networking” have become hype machines of the highest order! We think this has caused a lot of confusion among potential customers and that has delayed many SDN deployments.

In the near future, we think most of the SDN deployments will be provided by a single “SDN/NV controller” vendor solution which may or may not include ODM built “bare metal switches/white boxes” for the data forwarding plane.

End Note:  Here’s the best video you are ever likely to see on “SDN Industry Analysis.”

http://www.freenewspos.com/english/video/brand/GRVygzcXrM0

It was presented by IT Brand Pulse at Ethernet Tech Summit 2014. Raj Jain found it very entertaining. Hope you do too!

Do you think that this same video, with properly edited subtitles, could apply to any other future technology? Or is it specific to the ultra-hyped SDN?

Reference:

“Open Networking” panel session at January 2014 IEEE ComSocSCV meeting (organized by this author) http://comsocscv.org/showevent.php?id=1386572933

Comments:

  1. Thanks Alan for the write-up on the Ethernet Tech Summit and, more importantly, your analysis of a topic that is very confusing to someone, such as myself, who is on the periphery of this business and doesn’t have the knowledge base in this area to distinguish between hype and reality.

    One observation from last week’s cable show and last month’s NAB is that video processing is quickly moving into the software defined space. The traditional suppliers of hardware encoders are transitioning to an approach of software on generic servers. I haven’t looked into it far enough, but I suspect the level of “openness” is less than the hype would have it.

  2. I congratulate Alan on this article, which explains SDN issues in a fair amount of detail. It seems SDN is mostly a catch-phrase (catch-acronym, if one prefers) to cover up the past and emerging incompatibilities that have propagated over a long time. Perhaps we are heading toward many flavors of SDN, as each respective OEM determines to be in its best interest.

  3. Great article, and thanks for the call-out of SolarWinds. It was a pleasure to meet you at the conference. Feel free to ping me anytime you’re curious about the practitioner view of SDN. Our customers will be sorting it out on the ground, and their perspectives will be interesting to hear.

  4. Thanks Alan, Accurate account on the ConteXt part of your excellent article. Wanted to clarify that the diameter (# of end to end hops) of the network is the exponent in the “SDN scale equation.” Having millions of flows as the base just makes it more fun? 🙂
    And as you explain, surrounding this diameter with OpenFlow (in-network) edges mitigates the issue as well as allows for an SDN over IP evolution.
    Take it one flow (protocol) type at a time. Bridge-Route by default.
    And I loved the video clip at the end! 🙂

    1. Thanks George (and all others who commented on this post). A number of IEEE members emailed to ask whether there is any “Open Networking” activity that is truly open (with minimal or no membership fee). Yes, it’s the Open Compute Project Networking activity. Anyone can join their email list and attend f2f meetings at no charge. From their web page:

      The Open Compute Networking Project is creating a set of technologies that are disaggregated and fully open, allowing for rapid innovation in the network space. We aim to facilitate the development of network hardware and software – together with trusted project validation and testing – in a truly open and collaborative community environment.

      We’re bringing to networking the guiding principles that OCP has brought to servers & storage, so that we can give end users the ability to forgo traditional closed and proprietary network switches – in favor of a fully open network technology stack. Our initial goal is to develop a top-of-rack (leaf) switch, while future plans target spine switches and other hardware and software solutions in the space.
      http://www.opencompute.org/projects/networking/
      …………………..
      This April, the OCP started a tiered membership structure. Read more at: http://www.opencompute.org/blog/update-on-ocp-tiered-membership-and-ic-elections/
