2014 Hot Interconnects Semiconductor Session Highlights & Takeaways – Part I.

Introduction:

With the Software Defined Networking (SDN), Storage and Data Center movements firmly entrenched, one might believe there’s not much opportunity for innovation in dedicated hardware implemented in silicon. Several sessions at the 2014 Hot Interconnects conference, especially one from ARM Ltd., indicated that is not the case at all.

With the strong push for open networks, chips have to be much more flexible and agile, as well as more powerful, fast, and functionally dense. Of course, there are well-known players for specific types of silicon. For example: Broadcom for switches/routers; ARM for CPU cores (along with Intel and MIPS/Imagination Technologies); many vendors for Systems on a Chip (SoCs), which include one or more CPU cores, mostly from ARM (Qualcomm, Nvidia, Freescale, etc.); Network Processors (Cavium, LSI-Avago/Intel, PMC-Sierra, EZchip, Netronome, Marvell, etc.); and bus interconnect fabrics (Arteris, Mellanox, PLX/Avago, etc.).

What’s not known is how these types of components, especially SoCs, will evolve to support open networking and software defined networking in telecom equipment (i.e., SDN/NFV). Some suggestions were made during presentations and a panel session at this year’s excellent Hot Interconnects conference.

We summarize three invited Hot Interconnects presentations related to network silicon in this article. Our follow-on Part II article will cover network hardware for SDN/NFV based on an Infonetics presentation and service provider market survey.

  1. Data & Control Plane Interconnect Solutions for SDN & NFV Networks, by Raghu Kondapalli, Director of Strategic Planning at LSI/Avago (Invited Talk)

Open networking, such as SDN (Software Defined Networking) and NFV (Network Function Virtualization), provides software control of many network functions. NFV enables virtualization of entire classes of network element functions such that they become modular building blocks that may be connected, or chained, together to create a variety of communication services.

Software defined and functionally disaggregated network elements rely heavily on deterministic and secure data and control plane communication within and across network elements. In these environments, the scalability, reliability and performance of the whole network depend heavily on the deterministic behavior of this interconnect. Increasing network agility and lower equipment prices are causing severe disruption in the networking industry.

A key SDN/NFV implementation issue is how to disaggregate network functions in a given network element (equipment type). With such functions modularized, they could be implemented in different types of equipment along with dedicated functions (e.g., PHYs to connect to wire-line or wireless networks). The equipment designer needs to: disaggregate, virtualize, interconnect, orchestrate and manage such network functions.
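
To make the disaggregate-and-chain idea concrete, here is a minimal sketch of NFV-style service chaining. It is purely illustrative: the function names and packet fields are our own assumptions, not anything presented at the conference.

```python
# Minimal sketch of NFV-style service chaining: each network function is a
# small, composable stage that takes a packet and returns it (or None to drop).
# All names and fields here are hypothetical illustrations, not vendor APIs.

def firewall(packet):
    """Drop packets from a blocked source; pass everything else."""
    return None if packet["src"] in {"10.0.0.66"} else packet

def nat(packet):
    """Rewrite the private source address to a public one."""
    packet["src"] = "203.0.113.1"
    return packet

def shaper(packet):
    """Tag the packet with a traffic class for downstream rate shaping."""
    packet["tc"] = "best-effort"
    return packet

def run_chain(packet, chain):
    """Pass a packet through an ordered chain of virtual network functions."""
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:          # a VNF dropped the packet
            return None
    return packet

service_chain = [firewall, nat, shaper]   # the ordering is the 'chaining' decision
print(run_chain({"src": "10.0.0.5", "dst": "198.51.100.7"}, service_chain))
```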

“Functional coordination and control plane acceleration are the keys to successful SDN deployments,” he said. Not coincidentally, the LSI/Avago Axxia multicore communication processor family (using an ARM CPU core) is being positioned for SDN and NFV acceleration, according to the company’s website. Other important points made by Raghu:

  • Scale poses many challenges for state management and traffic engineering
  • Traffic Management and Load Balancing are important functions
  • SDN/NFV backbone network components are needed
  • Disaggregated architectures will prevail.
  • Circuit board interconnect (backplane) designs should weigh the traditional passive backplane against an active switch fabric.

The Axxia 5516 16-core communications processor was suggested as the SoC to use for an SDN/NFV backbone network interface. Functions identified included: Ethernet switching, protocol pre-processing, packet classification (QoS), traffic rate shaping, encryption, security, Precision Time Protocol (IEEE 1588) to synchronize distributed clocks, etc.

Axxia’s multi-core SoCs were said to contain various programmable function accelerators to offer a scalable data and control plane solution.

Note: Avago recently acquired semiconductor companies LSI Corp. and PLX Technology, but has now sold its Axxia Networking Business (originally from LSI, which had acquired Agere in 2007 for $4 billion) to Intel for only $650 million in cash. Agere Systems (formerly AT&T Microelectronics, at one time the largest captive semiconductor maker in the U.S.) had a market capitalization of about $48 billion when it was spun off from Lucent Technologies in December 2000.

  2. Applicability of Open Flow based connectivity in NFV Enabled Networks, by Srinivasa Addepalli, Fellow and Chief Software Architect, Freescale (Invited Talk)

Mr. Addepalli’s presentation addressed the performance challenges in VMMs (Virtual Machine Monitors) and the opportunities to offload VMM packet processing using SoCs like those from Freescale (another ARM core based SoC family). The VMM layer enables virtualization of networking hardware and exposes each virtual hardware element to VMs.

“Virtualization of network elements reduces operation and capital expenses and provides the ability for operators to offer new network services faster and to scale those services based on demand. Throughput, connection rate, low latency and low jitter are a few important challenges in the virtualization world. If not designed well, processing power requirements go up, thereby reducing the cost benefits,” according to Addepalli.

He positioned Open Flow as a communication protocol between control/offload layers, rather than the ONF’s API/protocol between the control and data planes (residing in the same or different equipment, respectively).  A new role for Open Flow in VMM and vNF (Virtual Network Function) offloads was described and illustrated.

The applicability of OpenFlow to NFV (see Note 1 below) faces two challenges, according to Mr. Addepalli:

  1. VMM networking
  2. Virtual network data path to VMs
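
A toy sketch of the control/offload split Addepalli described may help. The idea is that a VMM-level control layer installs OpenFlow-style match/action rules into an offload engine’s flow table, so subsequent packets bypass the VMM’s software path. The table structure and field names below are our illustrative assumptions, not Freescale’s implementation.

```python
# Sketch of an OpenFlow-style VMM offload: a control layer installs
# match/action rules into the offload engine's flow table; packets that
# match are handled in the fast path, misses are punted back to the VMM.

flow_table = []   # ordered (match, action) rules held by the offload engine

def install_flow(match, action):
    """Control layer pushes a rule down to the offload data path."""
    flow_table.append((match, action))

def offload_lookup(packet):
    """Offload engine: first matching rule wins; a miss goes back to the VMM."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "punt-to-vmm"   # slow path handles the miss and may install a rule

install_flow({"dst_mac": "aa:bb:cc:dd:ee:01"}, "forward-to-vm1")
print(offload_lookup({"dst_mac": "aa:bb:cc:dd:ee:01"}))  # forward-to-vm1
print(offload_lookup({"dst_mac": "ff:ff:ff:ff:ff:ff"}))  # punt-to-vmm
```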

Note 1. The ETSI NFV Industry Specification Group (ISG) is not considering the use of ONF’s Open Flow, or any other protocol, for NFV at this time. Its work scope includes reference architectures and functional requirements, but not protocol/interface specifications. The ETSI NFV ISG will reach the end of Phase 1 by December 2014, with the publication of the remaining sixteen deliverables.

“To be successful, NFV must address performance challenges, which can best be achieved with silicon solutions,” Srinivasa concluded. [The problem with that statement is that the protocols/interfaces to be used for fully standardized NFV have not been specified by ETSI or any other standards body. Hence, no one knows the exact combination of NFV functions that must perform well.]

  3. The Impact of ARM in Cloud and Networking Infrastructure, by Bob Monkman, Networking Segment Marketing Manager at ARM Ltd.

Bob revealed that ARM is innovating way beyond the CPU core it’s been licensing for years. There are hardware accelerators, a cache coherent network and various types of network interconnects that have been combined into a single silicon block, shown in the figure below:

Image courtesy of ARM – innovating beyond the core.

Bob said something I thought was quite profound, which dispels the notion that ARM is just a low-power CPU core licensor: “It’s not just about a low power processor – it’s what you put around it.” As a result, ARM cores are being included in SoC vendor silicon for both networking and storage components. Those SoC companies, including LSI/Avago (Axxia) and Freescale (see above), can leverage their existing IP by adding their own cell designs for specialized networking hardware functions (identified at the end of this article in the Addendum).

Bob noted that the ARM ecosystem is conducive to the disruption now being experienced in the IT industry, with software control of so many types of equipment. The evolving network infrastructure – SDN, NFV, other Open Networking – is all about reducing total cost of ownership and enabling new services with smart and adaptable building blocks. That’s depicted in the following illustration:

Evolving infrastructure is reducing costs and enabling new services.
Image courtesy of ARM.

Bob stated that one SoC size does not fit all. For example, one type of SoC can contain: a high performance CPU, power management, premises networking, and storage & I/O building blocks. One for SDN/NFV might instead include: a high performance CPU, power management, I/O including wide area networking interfaces, and specialized hardware networking functions.

Monkman articulated very well what most already know: that networking and server equipment are often being combined in a single box (they’re “colliding,” he said). [In many cases, compute servers are running network virtualization (i.e., VMware), acceleration, packet pre-processing, and/or control plane software (SDN model).] Flexible intelligence is required on an end-to-end basis for this to work out well. The ARM business model was said to enable innovation and differentiation, especially since the ARM CPU core has reached the 64-bit “inflection point.”

ARM is working closely with the Linaro Networking and Enterprise Groups. Linaro is a non-profit industry group creating open source software that runs on ARM CPU cores.  Member companies fund Linaro and provide half of its engineering resources as assignees who work full time on Linaro projects. These assignees combined with over 100 of Linaro’s own engineers create a team of over 200 software developers.

Bob said that Linaro is creating optimized, open-source platform software for scalable infrastructure (server, network & storage). It coordinates and multiplies members’ efforts, while accelerating product time to market (TTM). Linaro open source software enables ARM partners (licensees of ARM cores) to focus on innovation and differentiated value-add functionality in their SoC offerings.

Author’s Note:  The Linaro Networking Group (LNG) is an autonomous segment focused group that is responsible for engineering development in the networking space. The current mix of LNG engineering activities includes:

  • Virtualization support with considerations for real-time performance, I/O optimization, robustness and heterogeneous operating environments on multi-core SoCs.
  • Real-time operations and the Linux kernel optimizations for the control and data plane
  • Packet processing optimizations that maximize performance and minimize latency in data flows through the network.
  • Dealing with legacy software and mixed-endian issues prevalent in the networking space
  • Power Management
  • Data Plane Programming API

For more information: https://wiki.linaro.org/LNG


OpenDataPlane (ODP) http://www.opendataplane.org/ was described by Bob as a “truly cross-platform, truly open-source and open contribution interface.” From the ODP website:

ODP embraces and extends existing proprietary, optimized vendor-specific hardware blocks and software libraries to provide inter-operability with minimal overhead. Initially defined by members of the Linaro Networking Group (LNG), this project is open to contributions from all individuals and companies who share an interest in promoting a standard set of APIs to be used across the full range of network processor architectures available.

Author’s Note: There’s a similar project from Intel called DPDK, or Data Plane Development Kit, which an audience member referenced during Q&A. We wonder whether those APIs are viable alternatives to, or can be used in conjunction with, the ONF’s OpenFlow API.


Next Generation Virtual Network Software Platforms, along with network operator benefits, are illustrated in the following graphic:

An image depicting the Next-Gen virtualized network software platforms.
Image courtesy of ARM.

Bob Monkman’s Summary:

  • Driven by total cost of ownership, the data center workload shift is leading to more optimized and diverse silicon solutions
  • Network infrastructure is also well suited for the same highly integrated, optimized and scalable solutions ARM’s SoC partners understand and are positioned to deliver
  • Collaborative business model supports “one size does not fit all approach,” rapid pace of innovation, choice and diversity
  • Software ecosystem (e.g. Linaro open source) is developing quickly to support target markets
  • ARM ecosystem is leveraging standards and open source software to accelerate deployment readiness

Addendum:

In a post conference email exchange, I suggested several specific networking hardware functions that might be implemented in a SoC (with one or more ARM CPU cores). Those include: encryption, packet classification, deep packet inspection, security functions, intra-chip or inter-card interface/fabric, fault & performance monitoring, and error counters.

Bob replied: “Yes, security acceleration such as SSL operations; counters of various sorts – yes; less common on the fault notification and performance monitoring. A recent example is found in the Mingoa acquisition, see: http://www.microsemi.com/company/acquisitions”

…………………………………………………………………….

End Note: Stay tuned for Part II, which will cover Infonetics’ Michael Howard’s presentation on hardware and market trends for SDN/NFV.

2014 Hot Interconnects Highlight: Achieving Scale & Programmability in Google's Software Defined Data Center WAN

Introduction:

Amin Vahdat, PhD, Distinguished Engineer and Lead Network Architect at Google, delivered the opening keynote at 2014 Hot Interconnects, held August 26-27 in Mountain View, CA. His talk presented an overview of the design and architectural requirements for bringing Google’s shared infrastructure services to external customers with the Google Cloud Platform.

The wide area network underpins storage, distributed computing, and security in the Cloud, which is appealing for a variety of reasons:

  • On demand access to compute servers and storage
  • Easier operational model than premises based networks
  • Much greater up-time, i.e. five 9s reliability; fast failure recovery without human intervention, etc
  • State of the art infrastructure services, e.g. DDoS prevention, load balancing, storage, complex event & stream processing, specialized data aggregation, etc
  • Different programming models unavailable elsewhere, e.g. low latency, massive IOPS, etc
  • New capabilities; not just delivering old/legacy applications cheaper

Andromeda – more than a galaxy in space:

Andromeda – Google’s code name for its managed virtual network infrastructure – is the enabler of Google’s cloud platform, which provides many services to simultaneous end users. Andromeda provides Google’s customers/end users with robust performance, low latency and security services that are as good as or better than private, premises based networks. Google has long focused on shared infrastructure among multiple internal customers and services, and on delivering scalable, highly efficient services to a global population.

An image of Google's Andromeda Controller diagram.
Image courtesy of Google.

“Google’s (network) infrastructure services run on a shared network,” Vahdat said. “They provide the illusion of individual customers/end users running their own network, with high-speed interconnections, their own IP address space and Virtual Machines (VMs),” he added. [Google has been running shared infrastructure since at least 2002 and it has been the basis for many commonly used scalable open-source technologies.]

From Google’s blog:

“Andromeda’s goal is to expose the raw performance of the underlying network while simultaneously exposing network function virtualization (NFV). We expose the same in-network processing that enables our internal services to scale while remaining extensible and isolated to end users. This functionality includes distributed denial of service (DDoS) protection, transparent service load balancing, access control lists, and firewalls. We do this all while improving performance, with more enhancements coming. Hence, Andromeda itself is not a Cloud Platform networking product; rather, it is the basis for delivering Cloud Platform networking services with high performance, availability, isolation, and security.”

Google uses its own versions of SDN and NFV to orchestrate provisioning, high availability, and to meet or exceed application performance requirements for Andromeda. The technology must be distributed throughout the network, which is only as strong as its weakest link, according to Amin.  “SDN” (Software Defined Networking) is the underlying mechanism for Andromeda. “It controls the entire hardware/software stack, QoS, latency, fault tolerance, etc.”

“SDN’s” fundamental premise is the separation of the control plane from the data plane, Google and everyone else agrees on that. But not much else!  Amin said the role of “SDN” is overall co-ordination and orchestration of network functions. It permits independent evolution of the control and data planes. Functions identified under SDN supervision were the following:

  • High performance IT and network elements: NICs, packet processors, fabric switches, top of rack switches, software, storage, etc.
  • Audit correctness (of all network and compute functions performed)
  • Provisioning with end to end QoS and SLAs
  • Ensuring high availability (and reliability)

“SDN” in Andromeda–Observations and Explanations:

“A logically centralized hierarchical control plane beats peer-to-peer (control plane) every time,” Amin said. Packet/frame forwarding in the data plane can run at network link speed, while the control plane can be implemented in commodity hardware (servers or bare metal switches), with scaling as needed. The control plane requires 1% of the overhead of the entire network, he added.

As expected, Vahdat did not reveal any of the APIs/protocols/interface specs that Google uses for its version of “SDN” – in particular, the API between the control and data planes (Google has never endorsed the ONF-specified OpenFlow v1.3). He also didn’t detail how the logically centralized, but likely geographically distributed, control plane works.

Amin said that Google was making “extensive use of NFV (Network Function Virtualization) to virtualize SDN.” Andromeda NFV functions, illustrated in the above block diagram, include: Load balancing, DoS, ACLs, and VPN. New challenges for NFV include: fault isolation, security, DoS, virtual IP networks, mapping external services into name spaces and balanced virtual systems.

Managing the Andromeda infrastructure requires new tools and skills, Vahdat noted. “It turns out that running a hundred or a thousand servers is a very difficult operation. You can’t hire people out of college who know how to operate a hundred or a thousand servers,” Amin said. Tools are often designed for homogeneous environments and individual systems. Human reaction time is too slow to deliver “five nines” of uptime, maintenance outages are unacceptable, and the network becomes a bottleneck and source of outages.

Power and cooling are the major costs of a global data center and networking infrastructure like Google’s. “That’s true of even your laptop at home if you’re running it 24/7. At Google’s mammoth scale, that’s very apparent,” Vahdat said.

Applications require real-time high performance and low-latency communications to virtual machines. Google delivers those capabilities via its own Content Delivery Network (CDN).  Google uses the term “cluster networking” to describe huge switch/routers which are purpose-built out of cost efficient building blocks.

In addition to high performance and low latency, users may also require service chaining and load-balancing, along with extensibility (the capability to increase or reduce the number of servers available to applications as demand requires). Security is also a huge requirement. “Large companies are constantly under attack. It’s not a question of whether you’re under attack but how big is the attack,” Vahdat said.

[“Security will never be the same again. It’s a losing battle,” said Martin Casado, PhD during his Cloud Innovation Summit keynote on March 27, 2014]

Google has a global infrastructure, with data centers and points of presence worldwide to provide low-latency access to services locally, rather than requiring customers to access a single point of presence. Google’s software defined WAN (backbone private network) was one of the first networks to use “SDN.” In operation for almost three years, it is larger and growing faster than Google’s customer-facing Internet connectivity between Google’s cloud resident data centers, and its traffic is comparable to the data traffic within a premises based data center, according to Vahdat.

Note 1.   Please refer to this article: Google’s largest internal network interconnects its Data Centers using Software Defined Network (SDN) in the WAN

“SDN” opportunities and challenges include:

  • Logically centralized network management – a shift from fully decentralized, box-to-box communications
  • High performance and reliable distributed control
  • Eliminate one-off protocols (not explained)
  • Definition of an API that will deliver NFV as a service

Cloud Caveats:

While Vahdat believes in the potential and power of cloud computing, he says that moving to the cloud (from premises based data centers) still poses all the challenges of running an IT infrastructure. “Most cloud customers, if you poll them, say the operational overhead of running on the cloud is as hard or harder today than running on your own infrastructure,” Vahdat said.

“In the future, cloud computing will require high bandwidth, low latency pipes.” Amin cited a “law” this author had never heard of: “1 Mbit/sec of I/O is required for every 1 MHz of CPU processing (computations).” In addition, the cloud must provide rapid network provisioning and very high availability, he added.
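
Taking that ratio at face value, a quick back-of-the-envelope check shows just how demanding it is. The server figures below are our illustrative assumptions, not numbers from the talk.

```python
# Back-of-the-envelope check of the cited balance rule:
# 1 Mbit/s of network I/O per 1 MHz of CPU.

MBPS_PER_MHZ = 1.0          # the cited "law"

cores = 16                  # assumed commodity server
clock_mhz = 2500            # 2.5 GHz per core

required_io_mbps = cores * clock_mhz * MBPS_PER_MHZ
print(f"Required I/O: {required_io_mbps / 1000:.0f} Gbit/s")   # -> 40 Gbit/s

# A 16-core, 2.5 GHz server would need ~40 Gbit/s of network I/O to stay
# "balanced" -- i.e., a 40GE NIC, well above the 1-10GE NICs common in 2014.
# This is the under-provisioning Amin describes in the Addendum below.
```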

Network switch silicon and NPUs should focus on:

  • Hardware/software support for more efficient read/write of switch state
  • Increasing port density
  • Higher bandwidth per chip
  • NPUs must provide much greater than twice the performance for the same functionality as general purpose microprocessors and switch silicon.

Note: Two case studies were presented, which are beyond the scope of this article to review. Please refer to a related article on 2014 Hot Interconnects: Death of the God Box.

Vahdat’s Summary:

Google is leveraging its decade plus experience in delivering high performance shared IT infrastructure in its Andromeda network.  Logically centralized “SDN” is used to control and orchestrate all network and computing elements, including: VMs, virtual (soft) switches, NICs, switch fabrics, packet processors, cluster routers, etc.  Elements of NFV are also being used with more expected in the future.

References:

http://googlecloudplatform.blogspot.com/2014/04/enter-andromeda-zone-google-cloud-platforms-latest-networking-stack.html

https://www.youtube.com/watch?v=wpin6GKpDm8

http://gigaom.com/2014/04/02/google-launches-andromeda-a-software-defined-network-underlying-its-cloud/

http://virtualizationreview.com/articles/2014/04/03/google-andromeda.aspx

http://community.comsoc.org/blogs/alanweissberger/martin-casado-how-hypervisor-can-become-horizontal-security-layer-data-center

http://www.convergedigest.com/2014/03/ons-2014-google-keynote-software.html

https://www.youtube.com/watch?v=n4gOZrUwWmc

http://cseweb.ucsd.edu/~vahdat/

Addendum:  Amdahl’s Law

In a post conference email to this author, Amin wrote:

Here are a couple of references for Amdahl’s “law” on balanced system design:

Both essentially argue that for modern parallel computation, we need a fair amount of network I/O to keep the CPU busy (rather than stalled waiting for I/O to complete).
Most distributed computations today substantially under-provision I/O, largely because of significant inefficiency in the network software stack (RPC, TCP, IP, etc.) as well as the expense/complexity of building high performance network interconnects. Cloud infrastructure has the potential to deliver balanced system infrastructure even for large-scale distributed computation.

Thanks, Amin

AT&T Outlines SDN/NFV Focus Areas for Domain 2.0 Initiative

Introduction:  The White Paper

As previously reported*, AT&T’s future Domain 2.0 network infrastructure must be open, simple, scalable and secure, according to John Donovan, AT&T’s senior executive vice president of technology and network operations.

* AT&T’s John Donovan talks BIG GAME but doesn’t reveal Game Plan at ONS 2014  

But what does that really mean?  And what are the research initiatives that are guiding AT&T’s transition to SDN/NFV?

Let’s first examine AT&T’s Domain 2.0 white paper.

It specifically states the goal of moving to a virtualized, cloud based SDN/NFV design built on off-the-shelf hardware and merchant silicon, rejecting the legacy of OSMINE compliance and traditional telecom standards for OSS/BSS. Yet we could find no mention of the OpenFlow API/protocol.

“In a nutshell, Domain 2.0 seeks to transform AT&T’s networking businesses from their current state to a future state where they are provided in a manner very similar to cloud computing services, and to transform our infrastructure from the current state to a future state where common infrastructure is purchased and provisioned in a manner similar to the PODs used to support cloud data center services. The replacement technology consists of a substrate of networking capability, often called Network Function Virtualization Infrastructure (NFVI) or simply infrastructure that is capable of being directed with software and Software Defined Networking (SDN) protocols to perform a broad variety of network functions and services.”

“This infrastructure is expected to be comprised of several types of substrate. The most typical type of substrate being servers that support NFV, followed by packet forwarding capabilities based on merchant silicon, which we often call white boxes. However it’s envisioned that other specialized network technologies are also brought to bear when general purpose processors or merchant silicon are not appropriate.”

AT&T’s vision of a user-defined cloud experience.
Image courtesy of AT&T

“AT&T services will increasingly become cloud-centric workloads. Starting in data centers (DC) and at the network edges – networking services, capabilities, and business policies will be instantiated as needed over the aforementioned common infrastructure. This will be embodied by orchestrating software instances that can be composed to perform similar tasks at various scale and reliability using techniques typical of cloud software architecture.”

Interview with AT&T’s Soren Telfer:

As a follow up to John Donovan’s ONS Keynote on AT&T’s “user-defined network cloud” (AKA Domain 2.0), we spoke to Soren Telfer, Lead Member of Technical Staff at AT&T’s Palo Alto, CA Foundry. Our intent was to gain insight and perspective on the company’s SDN/NFV research focus areas and initiatives.

Mr. Telfer said that AT&T’s Palo Alto Foundry is examining technical issues that will solve important problems in AT&T’s network.  One of those is the transformation to SDN/NFV so that future services can be cloud based.  While Soren admitted there were many gaps in SDN/NFV standard interfaces and protocols, he said, “Over time the gaps will be filled.”

Soren said that AT&T was working within the Open Networking Lab (ON.LAB), which is part of the Stanford-UC Berkeley Open Network Research Community. The ONRC mission, from their website: “As inventors of OpenFlow and SDN, we seek to ‘open up the Internet infrastructure for innovations’ and enable the larger network industry to build networks that offer increasingly sophisticated functionality yet are cheaper and simpler to manage than current networks.” So for sure, ON.LAB work is based on the OpenFlow API/protocol between the Control and Data Planes (residing in different equipment).

The ON.LAB community is made up of open source developers, organizations and users who all collaborate on SDN tools and platforms to open the Internet and Cloud up to innovation.  They are trying to use a Linux (OS) foundation for open source controllers, according to Soren.  Curiously, AT&T is not listed as an ON.LAB contributor at http://onlab.us/community.html

AT&T’s Foundry Research Focus Areas:

Soren identified four key themes that AT&T is examining in its journey to SDN/NFV:

1.  Looking at new network infrastructures as “distributed systems.”  What problems need to be solved?  Google’s B4 network architecture was cited as an example.

[From a Google authored research paper: http://cseweb.ucsd.edu/~vahdat/papers/b4-sigcomm13.pdf]

“B4 is a private WAN connecting Google’s data centers across the globe. It has a number of unique characteristics:  i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic  demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge.”

2.  Building diverse tools and environments for all future AT&T work on SDN/NFV/open networking. In particular, development, simulation and emulation of the network and its components/functional groupings in a consistent manner.  NTT Com’s VOLT (Versatile OpenFlow ValiDator) was cited as such a simulation tool for that carrier’s SDN based network.  For more on VOLT and NTT Com’s SDN/NFV please refer to: http://viodi.com/2014/03/15/ntt-com-leads-all-network-providers-in-deployment-of-sdnopenflow-nfv-coming-soon/

3.  Activities related to “what if questions.”  In other words, out of the box thinking to potentially use radically new network architecture(s) to deliver new services.  “Network as a social graph” was cited as an example.  The goal is to enable new experiences for AT&T’s customers via new services or additional capabilities to existing services.

Such a true “re-think+” initiative could be related to John Donovan’s reply to a question during his ONS keynote: “We will have new applications and new technology that will allow us to do policy and provisioning as a parallel process, rather than an overarching process that defines and inhibits everything we do.”

+ AT&T has been trying to change its tagline to “Re-think Possible” for some time now. Yet many AT&T customers believe “Re-think” is impossible for AT&T, as it’s stuck in outdated methods, policies and procedures. What’s your opinion?

According to Soren, AT&T is looking for the new network’s ability to “facilitate communication between people.” Presumably, something more than is possible with today’s voice, video conferencing, email or social networks? Functional or universal tests are being considered to validate such a new network capability.

4.  Overlaying computation on a heterogeneous network system [presumably for cloud computing/storage and control of the Internet of Things (IoT)]. Flexible run times for compute jobs would be an example attribute for cloud computing.  Organizing billions of devices and choosing among meaningful services would be an IoT objective.

What then is the principal role of SDN in all of these research initiatives? Soren said:

“SDN will help us to organize and manage state,” he said. That includes correct configuration settings, meeting requested QoS, concurrency, etc. Another goal is to virtualize many physical network elements (NEs): DNS servers, VoIP servers and other NEs could be deployed as Virtual Machines (VMs).

Soren noted that contemporary network protocols internalize state. For example, the routing database for selected paths is stored internally in each router. An alternate “distributed systems” approach would be to externalize state so that it is not internal to each network element.

However, NEs accessing external state would require new state organization and management tools. He cited Amazon’s Dynamo and Google’s B4 as architectures AT&T was studying. But creating and deploying protocols that work with external state won’t happen soon. “We’re looking to replace existing network protocols with those designed for more distributed systems in the next seven or eight years,” he added.
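
As a toy illustration of the internal-vs-external state distinction Soren drew (our sketch, not AT&T’s design), compare a router that keeps its routing table inside the box with one that reads it from a shared, Dynamo-style store:

```python
# Toy contrast between internalized and externalized routing state.
# The plain dict stands in for a replicated key-value store; all class
# and method names are hypothetical.

class ClassicRouter:
    """Contemporary model: routing state lives inside the network element."""
    def __init__(self):
        self._routes = {}                     # internal routing database

    def add_route(self, prefix, next_hop):
        self._routes[prefix] = next_hop

    def lookup(self, prefix):
        return self._routes.get(prefix)

class ExternalStateRouter:
    """Distributed-systems model: state is externalized to a shared store,
    so any controller or peer element can read and update it."""
    def __init__(self, store):
        self._store = store                   # e.g., a replicated KV service

    def lookup(self, prefix):
        return self._store.get(("route", prefix))

classic = ClassicRouter()
classic.add_route("10.1.0.0/16", "192.0.2.1")           # state trapped in the box

shared_store = {("route", "10.1.0.0/16"): "192.0.2.1"}  # state visible to all
print(ExternalStateRouter(shared_store).lookup("10.1.0.0/16"))  # 192.0.2.1
```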

Summing up, Soren wrote in an email:

“AT&T is working to deliver the User Defined Network Cloud, through which AT&T will open, simplify, scale, and secure the network of the future.  That future network will first and foremost deliver new experiences to users and to businesses.

The User Defined Network Cloud and Domain 2.0, are bringing broad and sweeping organizational and technical changes to AT&T. The AT&T Foundry in Palo Alto is a piece of the broader story inside and outside of the company. At the Foundry, developers and engineers are prototyping potential pieces of the future network where AT&T sees gaps in the current ecosystem. These prototypes utilize the latest concepts from SDN and techniques from distributed computing to answer questions and to point paths towards the future network. In particular, the Foundry is exploring how to best apply SDN to the wide-area network to suit the needs of the User Defined Network Cloud.”

Comment and Analysis:

Soren’s remarks seem to imply AT&T is closely investigating Google’s use of SDN (and some version of OpenFlow or similar protocol) for interconnecting all of its data centers as one huge virtual cloud. It’s consistent with Mr. Donovan saying that AT&T would like to transform its 4,600 central offices into environments that support a virtual networking cloud environment.

After this year’s “beachhead projects,” Mr. Donovan said AT&T will start building out new network platforms in 2015, as part of its Domain 2.0 initiative. But what Soren talked about was a much longer and greater network transformation. Presumably, the platforms built in 2015 will be based on the results of the “beachhead projects” that Mr. Donovan mentioned during the Q&A portion of his ONS keynote speech.

Based on its previously referenced Domain 2.0 Whitepaper, we expect the emphasis to be placed on NFV concepts and white boxes, rather than pure SDN/Open Flow.  Here’s a relevant paragraph related to an “open networking router.”

“Often a variety of device sizes need to be purchased in order to support variances in workload from one location to another. In Domain 2.0, such a router is composed of NFV software modules, merchant silicon, and associated controllers. The software is written so that increasing workload consumes incremental resources from the common pool, and moreover so that it’s elastic: so the resources are only consumed when needed. Different locations are provisioned with appropriate amounts of network substrate, and all the routers, switches, edge caches, and middle-boxes are instantiated from the common resource pool. Such sharing of infrastructure across a broad set of uses makes planning and growing that infrastructure easier to manage.”
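
To make the quoted “elastic” behavior concrete, here is a toy sketch of a router whose forwarding capacity is drawn from, and returned to, a common resource pool as workload changes. The class names, core counts, and cores-per-10 Gbps figure are entirely our assumptions, not AT&T’s design.

```python
# Toy sketch of an 'elastic' NFV router: capacity is consumed from a shared
# pool only while the workload needs it, then handed back.

class ResourcePool:
    """Common pool of compute resources shared by all instantiated functions."""
    def __init__(self, total_cores):
        self.free = total_cores

    def allocate(self, n):
        if n > self.free:
            raise RuntimeError("pool exhausted")
        self.free -= n

    def release(self, n):
        self.free += n

class ElasticRouter:
    CORES_PER_10GBPS = 2          # assumed cost of 10 Gbps of forwarding

    def __init__(self, pool):
        self.pool, self.cores = pool, 0

    def scale_to(self, gbps):
        """Grow or shrink to match offered load, returning unused resources."""
        needed = (gbps // 10) * self.CORES_PER_10GBPS
        delta = needed - self.cores
        if delta > 0:
            self.pool.allocate(delta)
        elif delta < 0:
            self.pool.release(-delta)
        self.cores = needed

pool = ResourcePool(total_cores=64)
router = ElasticRouter(pool)
router.scale_to(100)      # peak hour: draw cores from the common pool
print(pool.free)          # 44
router.scale_to(20)       # off-peak: return cores for other functions to use
print(pool.free)          # 60
```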

We will continue to follow SDN/NFV developments and deployments, particularly related to carriers such as AT&T, NTT, Verizon, Deutsche Telekom, Orange, etc.  Stay tuned…

Virtually Networked: The State of SDN

We have all heard about hectic activity, with several initiatives on network virtualization. The potpourri of terms in this space (SDN/OpenFlow/OpenDaylight etc.) is enough to make one’s head spin. This article will try to lay out the landscape as of the time of writing and explain how some of these technologies are relevant to independent broadband service providers.

In the author’s view, Software Defined Networking (SDN) evolved with the aim of freeing the network operator from dependence on networking equipment vendors for developing new and innovative services, and was intended to make networking services simpler to implement and manage.

Software Defined Networking decouples the control and data planes – thereby abstracting the physical architecture from the applications running over it. Network intelligence is centralized and separated away from the forwarding of packets.

SDN is the term used for a set of technologies that enable the management of services over computer networks without worrying about the lower level functionality – which is now abstracted away. This theoretically should allow the network operator to develop new services at the control plane without touching the data plane since they are now decoupled.

Network operators can control and manage network traffic via a software controller – mostly without having to physically touch switches and routers. While the physical IP network still exists – the software controller is the “brains” of SDN that drives the IP based forwarding plane. Centralizing this controller functionality allows the operator to programmatically configure and manage this abstracted network topology rather than having to hand configure every node in their network.

SDN provides a set of APIs to configure the common network services (such as routing, traffic management and security).

OpenFlow is one standard protocol that defines the communication between such an abstracted control plane and data plane. OpenFlow was defined by the Open Networking Foundation and allows direct manipulation of physical and virtual devices. OpenFlow would need to be implemented on both sides: in the SDN controller software as well as in the SDN-capable network infrastructure devices.
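
As a minimal taste of what that programmatic control looks like in practice, here is a sketch using the open-source Ryu controller framework (Python), which speaks OpenFlow 1.3 to switches. The port numbers and priority are arbitrary example values.

```python
# Minimal Ryu app: when a switch connects, install one OpenFlow 1.3 rule
# that forwards everything arriving on port 1 out of port 2.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class PortForwarder(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath                   # the switch that just connected
        parser, ofp = dp.ofproto_parser, dp.ofproto
        match = parser.OFPMatch(in_port=1)     # match packets arriving on port 1
        actions = [parser.OFPActionOutput(2)]  # ...and send them out port 2
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                      match=match, instructions=inst))
```

Run under ryu-manager against an OpenFlow 1.3 switch, this pushes the rule from the controller with no per-box CLI configuration – exactly the operational shift described above.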

How would SDN impact independent broadband service providers? If SDN lives up to its promise, it could provide the flexibility in networking that telcos have needed for a long time. From a network operations perspective, it has the potential to revolutionize how networks are controlled and managed today – making it a very simple task to manage physical and virtual devices without ever having to change anything in the physical network.

However, these are still early days in the SDN space. Several vendors have implemented software controllers, and the OpenFlow specification appears to be stabilizing. OpenDaylight is an open platform for network programmability to enable SDN. OpenDaylight has just shipped its first software release – Hydrogen – which can be downloaded as open source software today. But this is not the only approach to SDN; there are vendor-specific approaches that this author will not cover in this article.

For independent broadband service providers wishing to learn more about SDN, it would be a great idea to download the Hydrogen release of OpenDaylight and play with it – but don’t expect it to provide any production-ready functionality. Like the first release of any piece of software, there are wrinkles to be ironed out and important features to be written. It would be a great time to get involved if one wants to contribute to the open source community.

For the independent broadband service providers wanting to deploy SDN – it’s not prime-time ready yet – but it’s an exciting and enticing idea that is fast becoming real. Keep a close ear to the ground – SDN might make our lives easier fairly soon.

[Editor’s Note; For more great insight from Kshitij about “SDN” and other topics , please go to his website at http://www.kshitijkumar.com/]

Infonetics Survey: Network Operators reveal where they plan to first deploy SDN and NFV

Introduction:

Top 5 network locations operators expect to deploy SDN and NFV by 2014
Image courtesy of Infonetics

There’s been a lot of hype and even more uncertainty related to “Carrier SDN,” and in particular the use of the Open Flow protocol in carrier networks – between a centralized control plane entity and data plane entities residing in “packet forwarding” engines built from commodity silicon with minimal software intelligence. Many carriers are interested in the ETSI NFV work, which will NOT produce any standard or specifications. This author has been contacted by several network operators to assess their NFV plans (please note that such consulting is not free of charge). As ETSI NFV will make contributions to ITU-T SG13 work on future networks, it may be several years before any implementable standard (ITU Recommendation) is produced.

For its just-released SDN and NFV Strategies survey, Infonetics Research interviewed network operators around the globe, which together represent ~53% of the world’s telecom capex and operating revenue. The objective of the survey was to determine the timing and priority of the many use cases for their software-defined network (SDN) and network function virtualization (NFV) projects.

SDN And NFV Strategies Survey Highlights:

  • Virtually all major operators are either evaluating SDNs now or plan to do so within the next 3 years
  • SDN and NFV evaluation and deployments are being driven by carriers’ desire for service agility resulting in quicker time to revenue and operational efficiency
  • The top 5 network domains named by operators when asked where they plan to deploy SDNs and NFV by 2014: Within data centers, between data centers, operations and management, content delivery networks (CDNs), and cloud services
  • 86% of operators are confident they will deploy SDN and NFV technology in their optical transport networks as well at some point, once standards are finalized
  • Study participants rated Content Delivery Networks (CDNs), IP multimedia subsystems (IMS), and virtual routers/security gateways as the top applications for NFV

“For the most part, carriers are starting small with their SDN and NFV deployments, focusing on only parts of their network, what we call ‘contained domains,’ to ensure they can get the technology to work as intended,” explains Michael Howard, co-founder and principal analyst for carrier networks at Infonetics Research.

“But momentum for more widespread use of SDN and NFV is strong, as evidenced by the vast majority of operators participating in our study who plan to deploy the technologies in key parts of their networks, from the core to aggregation to customer access,” Howard adds. “Even so, we believe it’ll be many years before we see bigger parts or a whole network controlled by SDNs.”

About The Survey:

Infonetics’ July 2013 27-page SDN and NFV survey is based on interviews with purchase-decision makers at 21 incumbent, competitive and independent wireless operators from EMEA (Europe, Middle East, Africa), Asia Pacific and North America that have evaluated SDN projects or plan to do so. Infonetics asked operators about their strategies and timing for SDN and NFV, including deployment drivers and barriers, target domains and use cases, and suppliers. The carriers participating in the study represent more than half of the world’s telecom revenue and capex.

To learn more about the report, contact Infonetics.

References:

  1. Video interview with Infonetics’ co-founder Michael Howard on What’s really driving demand for SDN/NFV
  2. SDN and NFV: Survey of Articles Comparing and Contrasting
  3. Move Over SDN – NFV Taking the Spotlight – Cisco Blog
  4. Subtle SDN/NFV Data Points
  5. “Service Provider SDN” Network Virtualization and the ETSI NFV ISG
  6. The Impact on Your IT Department of Software Defined Networking (SDN) and Network Functions Virtualization (NFV)
  7.  SDNs and NFV: Why Operators Are Investing Now (archived webinar):  

Analyst Opinions on Cisco's CRS-X Core Router & Its Impact on Competitors

Product Announcement:

The Cisco® CRS-X, which will be available this year, is a 400 Gigabit per second (Gbps) per slot core router system that can be expanded to nearly 1 petabit per second in a multi-chassis deployment. The CRS-X provides 10 times the capacity of the original CRS-1, which was introduced in 2004 as a new class of core routing system designed to scale network capacity to accommodate the proliferation in video, data and mobile traffic, which has taken place over the last decade.

With 400 Gbps per slot density, the CRS-X multichassis architecture provides network operators the ability to scale using a 400 Gbps line card with Cisco AnyPort™ technology. That line card uses complementary metal oxide semiconductor (CMOS) photonic technology, called Cisco CPAK™, to reduce power consumption, reduce the cost of sparing, and increase deployment flexibility.

For example, each interface can be configured for either single port 100 Gigabit Ethernet, 2×40 GE, or 10 x10 GE and either short-, long-, or extended-reach optics by selecting a specific CPAK transceiver. This flexibility simplifies network engineering and operations and helps ensure that service providers can meet the demand for 10 GE, 40 GE and 100 GE applications without replacing hardware.
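
To illustrate the AnyPort flexibility just described, here is a toy provisioning sketch; the mode table and function are our illustration of the concept, not Cisco software.

```python
# Toy sketch of per-interface breakout provisioning: one slot interface can
# run as 1x100GE, 2x40GE, or 10x10GE depending on the CPAK transceiver chosen.
BREAKOUT_MODES = {
    "1x100GE": {"ports": 1, "speed_gbps": 100},
    "2x40GE":  {"ports": 2, "speed_gbps": 40},
    "10x10GE": {"ports": 10, "speed_gbps": 10},
}

def provision(slot, mode):
    """Return the logical ports created by configuring a slot in a given mode."""
    if mode not in BREAKOUT_MODES:
        raise ValueError(f"unsupported breakout: {mode}")
    cfg = BREAKOUT_MODES[mode]
    return [f"slot{slot}/port{i} @ {cfg['speed_gbps']}GE"
            for i in range(cfg["ports"])]

print(provision(3, "2x40GE"))   # ['slot3/port0 @ 40GE', 'slot3/port1 @ 40GE']
```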

Additionally, the CRS-X improves the simplicity and scale of IP and optical convergence. Service providers can now choose between deploying integrated optics or the new Cisco nV™ optical satellite. Both allow for a single IP and optical system that utilizes Cisco’s nLight™ technology for control plane automation. The nV optical satellite deployments operate as a single managed system with the Cisco CRS Family to reduce operational expense and deliver high-density 100 GE scaling.

More information is in the press release.


Since the first CRS router made its debut in 2004, Cisco has brought in a total of $8 billion in revenue from the product range, according to Stephen Liu, Cisco’s director of service provider marketing.  “The CRS-X is the innovation we need to cross the $10 billion barrier,” Mr. Liu told Reuters.

Cisco’s rivals in the core Internet router sector include Juniper Networks, Huawei, and Alcatel-Lucent. Cisco was not the first vendor to offer 40 Gbps per slot in a core router – Juniper took that honor. It wasn’t the first to offer a 100 Gbps router either – Alcatel-Lucent, Huawei, and Juniper were all there first. Moreover, Alcatel-Lucent and Huawei each beat Cisco with 400 Gbps products. However, with 54% of the global core router market, Cisco has proven that being first to market does not guarantee success.


Analyst Opinions:

Market research firm Current Analysis was quite positive about Cisco’s new CRS-X core router.  In a note to clients Current Analysis wrote:

“(We are) Very positive on Cisco’s launch of the CRS-X, because it provides existing CRS Series customers with an upgrade path to address growing scale and capacity requirements in their IP core networks. In addition to providing high-scale performance for high-density 10G, 40G and 100G-based services, the system incorporates Prime Management, nLight and new software to support network programmability in order to help service providers cope with unpredictable traffic patterns and to optimize network resources while improving time to service. The new ‘AnyPort’ technology helps reduce inventory costs by providing a common line card base card that can be flexibly configured. Closer integration between the IP and optical network is also provided, which improves resource utilization and provides a level of programmability to the transport network using the Cisco 15454 ONS platform as an extension shelf. The announcement also includes endorsements from SoftBank and Verizon, which confirmed the need for scale, resiliency and investment protection.”

UK based Ovum wrote:

“With the introduction of the CRS-X, Cisco is sending a message to its carrier customers: your investment in CRS products is being protected. The role of the core router revolves around high-performance, high-capacity packet processing. Core router vendors have been challenged to increase the capacity of their products to meet the growth in network traffic without the operator having to do a complete forklift of their existing systems.”

“Rather than simply comparing feeds and speeds against competitors, Ovum believes the key to success for the CRS-X will be the differentiation provided by coupling the product to Cisco’s Elastic Core solution and nLight technology for control plane automation and IP and optical convergence. The nV Optical Satellite capability announced with the CRS-X is an example of this type of differentiation. The nV Optical Satellite provides a single integrated management interface for control over the CRS and remotely located 100G DWDM platforms to reduce opex.”

http://ovum.com/2013/06/13/cisco-crs-x-delivers-a-message-investment-protection/

Northland Capital Markets wrote that growing pressure on carriers from cloud computing usage may prompt them to upgrade to the CRS-X:

“We see carriers/cable operators/ content providers requiring core router refresh as result of an increase in traffic generated by Cloud services and machine-to-machine connectivity. We believe Cloud computing has redefined the way applications run on the network, exposing the underlying limitations of providers’ existing networks.”

Raymond James thinks Cisco’s new core router will prove to be a challenge for non-router vendors as well as traditional competitors Juniper, Huawei and Alcatel-Lucent. Finisar, Ciena and Infinera were singled out in this report excerpt:

“CRS-X will use Cisco’s internally developed CPAK optical interface, which represents a headwind for Finisar. Cisco promotes its architecture for Converged Transport Routers and cites deficiencies in alternatives (“Hollow core” – leveraging OTN and optical like Ciena’s 5400 and “Lean core” – leveraging MPLS like Juniper’s PTX), and argues that its converged solution of optical, MPLS, and routing with Cisco Prime management bringing the layers together.  Similar to Cisco, Alcatel-Lucent combined its optical and routing units into a single organization, but it offers a two-box strategy (1830 and 7950). Optical integration matters, but we don’t know pricing. Cisco has offered IP over DWDM in the past, but high prices discouraged some carriers from using these interfaces, instead opting to plug the routers into long haul optical platforms; we suspect the CRS-X will go after this application more aggressively, which could pose a threat to long haul 100G competitors such as Alcatel-Lucent, Ciena, and Infinera.”


CRS-X Puts Pressure on Cisco’s Competitors:

Current Analysis wrote in a report to clients:

  • Alcatel-Lucent needs to keep up the pressure to move upcoming IP core refresh cycles its way. The 7950 XRS has obtained nine customer wins and multiple ongoing field trials since its launch, which shows that there is a definite interest in the metro IP core proposition as well as leveraging the platform for pure IP core applications. Alcatel-Lucent should also elevate its service provider SDN vision, as its competitors are doing.
  • Juniper should provide a roadmap for its two core network solutions, the PTX Series and the T Series, where it needs to close the current performance gap (the T Series delivers 240 Gbps per slot). The capacity race often follows a ‘leapfrog’ model, where one vendor’s refresh cycle trumps another’s for a period of time; Juniper needs to counter Cisco’s latest CRS-X move carefully. Juniper also should continue to make the case for a more agile and flexible network based on its four-step SDN roadmap.
  • Huawei needs to capitalize on its IP core momentum and announce (or, at least refer to) customers that are, or will be, using the 480 Gbps/slot capabilities announced for its NE5000E IP core router. Huawei also needs to sharpen and reaffirm its SDN message with respect to its network core architecture and integrate SDN into its SingleBackbone model.
  • ZTE needs to update its T8000 roadmap and hint as to when it will deploy higher-density 100G interfaces on the platform. ZTE needs to join the fray with an SDN message of its own that builds on its current management capabilities.

Ovum believes Juniper must respond: “When Cisco’s CRS-X becomes available, Juniper will become the only one of the top four core router vendors not delivering 400Gbps-per-slot capacity in its core router product, unless it announces a capacity upgrade to its core router in the next six months. Its largest capacity core router product, the T4000, delivers only 240Gbps bandwidth per slot. Juniper’s PTX product is ready to provide 480Gbps per slot, but line cards to take full advantage of the available capacity are not yet available, and the PTX is an MPLS-optimized core switch, not an IP core router. ”

Raymond James thinks that Juniper and Alcatel-Lucent are now at a competitive disadvantage in the core router market:

“The new CRS-X can support 64 100 Gbps ports in a standard seven-foot rack, which compares to 80 for Alcatel-Lucent’s 7950 and Juniper’s 32 on its T4000. In a multishelf configuration, Cisco claims it can support 1152 slots or 922 Tbps.”


Closing Comment:

We find it quite interesting that despite the tremendous hype around SDN, it wasn’t mentioned at all in Cisco’s CRS-X product announcement. Nor did any analysts have any SDN comments related to the CRS-X.

In a new on-line video, Cisco’s Lew Tucker talks about SDN in the context of OpenStack cloud software, but doesn’t mention the CRS-X product: http://newsroom.cisco.com/video/1170801

2013 Ethernet Tech Summit – Market Research Panel, Carrier Ethernet & Unsung Heroes

Introduction:

“Ethernet Technology Summit attendance was up over 20% in 2013. Topics of special interest included software-defined networking (SDN), 40/100/400 GbE, venture opportunities, and market research. Keynotes by Mellanox, Dell’Oro Group, Huawei, Ethernet Alliance, Cisco Systems, Big Switch Networks, Broadcom, and Dell all drew capacity audiences,” said Lance A. Leventhal, Program Chairperson.

The Market Research panel covered the prospects for Ethernet in the enterprise, among carriers (especially for cellular backhaul), and in the data center. The session was chaired by Crystal Black, Channel Marketing Manager, APTARE.

Panelists:

  • Michael Howard, Infonetics Research
  • Casey Quillin, Dell’Oro Group
  • Sergis Mushell, Gartner
  • Jag Bolaria, Linley Group
  • Vladimir Kozlov, LightCounting

Discussion:

An image depicting small cell backhaul.
Image Courtesy of Infonetics

1. Michael Howard of Infonetics Research talked about macro-cell and small cell backhaul. “Nearly all new macro-cell backhaul connections are IP/Ethernet,” he said. “IP/Ethernet is 94% of 2012 macrocell MBH equipment spending,” Michael added. Most macro-cells use either microwave or fiber backhaul, and macro-cell sites that aggregate small cell traffic use the same existing macro-cell fiber backhaul. Most outdoor small cells were being deployed at street level in urban centers, with three to eight of them connecting to a macro-cell site on the top of a building.

“Small cells have been deployed since 2007, nearly all located in-building and 2G/3G,” stated Howard. “What’s new is the outdoor deployments, where operators this year are trying and trialing many new products, new technology options, and new locations that present a myriad of challenges, such as how to negotiate for lightpost placement, connect and buy power, and meet city regulations for color, size, and shape of the small cell and backhaul products,” he added.

Small cell backhaul status is summarized as follows:

  • Operators are evaluating, testing, planning outdoor small cells
  • Virtually all small cell deployments to date are 3G and in-building
  • Most operators will deploy first outdoor in the urban core with ~3 to 8 pico-cells per macrocell
  • Most wireless carriers will aggregate small cell backhaul traffic onto the nearest macro-cell site—typically connected to fiber backhaul network
  • Outdoor small cell backhaul is mostly an Ethernet NLOS–MWV–MMW (i.e. Microwave and millimeter wave) play
  • Backhaul aggregation is still a fiber play

2. Jag Bolaria of Linley Group made the following points:

  • The high bandwidth available from 4G-LTE networks is enabling a continued huge increase in mobile data traffic.
  • Cloud Computing is changing Data Center architecture, especially in the areas of scalability and virtualization.
  • There are many Ethernet markets, including: mobile back-haul, data centers, SMB enterprise, Carrier Ethernet, etc.
  • Data Center topology is moving from hierarchical to flat, due to more East-West (server-to-server) traffic patterns
  • Data Center (Ethernet) switches need a lot more bandwidth for connectivity between them. As more servers have 10GE interfaces, the inter-switch connection is likely to be 40GE.
  • Very large Data Centers will have multiple L2 networks with L3 tunneling to migrate between many different L2 domains.
  • A virtualized L2 network may use Equal Cost MultiPath (ECMP) to compute equal-cost shortest paths between switches and load balance traffic across them. “OpenFlow may help here,” Jag said. (See the hash-based sketch after this list.)
  • 100GE using CFP is still too expensive and consumes too much power to be deployed on a large scale.  Jag predicts that CFP2, CFP4, silicon photonics, or Indium phosphide will be used to shrink 100GE modules.
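
To make the ECMP idea concrete, here is a minimal Python sketch of hash-based next-hop selection. The addresses and the use of SHA-256 are illustrative assumptions, not how any particular switch implements it; real switches hash the 5-tuple in the forwarding hardware, but the principle is the same: packets of the same flow always take the same path.

```python
# Minimal ECMP next-hop selection sketch (illustrative only).
import hashlib

next_hops = ["10.0.1.1", "10.0.2.1", "10.0.3.1", "10.0.4.1"]  # equal-cost paths

def pick_next_hop(src_ip, dst_ip, proto, src_port, dst_port):
    """Hash the flow's 5-tuple so all of its packets take the same path."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

# Example: a TCP flow to a web server consistently maps to one path.
print(pick_next_hop("192.168.1.10", "10.10.10.5", "tcp", 49152, 443))
```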

3.  Sergis Mushell of Gartner made several forecasts, including that:

  • There are four distinct models for SDN as it applies to ICs (but they were not identified).
  • 40GE interfaces are coming to blade servers this year.
  • Fibre Channel rates will increase to 16 Gbps and 32 Gbps.
  • Silicon Photonics will be built into Data Center equipment in the near future.

4.  Casey Quillin of Dell’Oro Group talked about SANs and Data Center deployments. He said that:

  • Fibre Channel (FC) revenues come mostly from 8 Gbps products, but are declining.
  • Revenues are increasing for FC at speeds greater or equal to 16 Gbps.
  • Revenue from FC @ 16 Gbps is almost all from switch-to-switch connections and ASPs are high for 16 Gbps FC switch ports.
  • The total 2012 FC market was up 1% in revenue, and that growth came mostly from FC switches as FC adapter sales fell.
  • The FC attach rate on blade servers has declined sharply, and we may see FCoE (Fibre Channel over Ethernet) as a replacement.
  • FCoE switch ports will also have to support one or more data center bridging protocols (e.g. TRILL and the IEEE 802.1 Data Center Bridging standards). Yet FCoE is only for “greenfield deployments,” Casey said.

5. Vladimir Kozlov of LightCounting (a market research firm, founded in 2004, that tracks the optical communications supply chain) made the following key points:

  • The overwhelming majority (~95%) of 10GE ports use SFP+ Direct Attach (a passive twin-ax copper cable assembly that plugs directly into an SFP+ housing) rather than optical transceivers.
  • 40GE will experience “good growth” in the next 3 to 4 years
  • Data Centers are becoming more efficient in how they use bandwidth and that may result in a decrease in the number of switch/routers sold into that market segment.
  • Microwave backhaul will be 10-12% of the total U.S. cellular backhaul market this year.
  • No forecast was made for fiber optic backhaul, which now reaches only 55-60% of cell sites in the U.S.
  • Market research firm iGR forecasts that fiber backhaul will grow at a CAGR of nearly 85 percent between 2011 and 2016 (see the arithmetic sketch below).

Read more: Study: U.S. mobile back-haul demand to grow nearly 10x by 2016 (FierceWireless): http://www.fiercewireless.com/story/study-us-mobile-backhaul-demand-grow-nearly-10x-2016/2012-03-13

  • A LightCounting report, 40G and 100G Data Center Interconnects, analyzes the impact of growing data traffic and the changing architecture of data centers on the market forecast for Ethernet and Fibre Channel optical transceivers.
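
As a quick sanity check on the growth figures above, the snippet below plugs the quoted numbers into the standard CAGR formula. An 85% CAGR over the five years 2011-2016 implies roughly a 22x increase, while the "nearly 10x" total demand growth cited by FierceWireless corresponds to a CAGR of about 58%; the two figures describe different quantities (fiber backhaul build-out vs. total backhaul demand), so they need not agree.

```python
# Quick check of the cited growth figures (plain arithmetic, no data here).
def cagr(start, end, years):
    """Compound annual growth rate implied by total growth over a period."""
    return (end / start) ** (1 / years) - 1

years = 5  # 2011 -> 2016

# An 85% CAGR compounds to roughly a 22x increase over five years:
print((1 + 0.85) ** years)      # ~21.7

# A "nearly 10x" total growth over the same period implies ~58% CAGR:
print(cagr(1, 10, years))       # ~0.585
```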

Comment on this panel session:

Other than Ethernet frames used for mobile backhaul, there wasn’t any discussion about the Carrier Ethernet market or services. That topic was the subject of an all-day track of sessions on Wednesday. Carrier Ethernet lets wireline network operators use low cost Ethernet systems to offer data services to SMBs and larger enterprise customers. Carrier Ethernet includes carrier grade reliability; Operations, Administration and Maintenance (OAM) features; linear and ring protection switching; and QoS/class of service. Carrier Ethernet is sometimes referred to as Business Ethernet and is offered over bonded copper (n x T1 or n x DSL) or fiber for higher speeds (typically 100 Mbps or greater).

Carrier Ethernet services offered to business customers include: Ethernet Private Line, Ethernet Tree (point-to-multipoint) and Ethernet LAN (multipoint-to-multipoint). In addition, the MEF is positioning Carrier Ethernet 2.0 for use in wireline access to Private Cloud services.

The problem seemed to be that there weren’t any carriers willing to participate in those sessions, so it was just equipment and silicon vendors talking to one another.

A new report forecasts the global Ethernet Access Device market to grow at a CAGR of 13.62% from 2012 to 2016.

http://www.businesswire.com/news/home/20130411006525/en/Research-Markets-Global-Ethernet-Access-Device-Market


Another highlight of the Ethernet Technology Summit was a Wednesday evening award ceremony honoring the “Unsung Heroes of Ethernet.” They were:

  • Dave Boggs, who worked with Bob Metcalfe on the original 3 Mb/s Ethernet (and whose name appears on the Ethernet patent)
  • Ron Crane, who designed the first working 10 Mb/s coax-based Ethernet (later standardized by IEEE 802.3 as 10Base5)
  • Tat Lam, who worked on the original version of Ethernet and on early 10 Mb/s transceivers
  • Geoff Thompson, long-time IEEE ComSoc contributor, for his hard work, long-term support, and leadership of Ethernet standards work in IEEE 802 (he was chair/vice-chair of the 802.3 WG for many years), the TIA, and the ISO

The Unsung Heroes etched-crystal awards were paid for by the IEEE Santa Clara Valley Section (the largest in the world). They include an image of Bob Metcalfe’s original sketch of the Ethernet system.

Note: this author has been a member of the IEEE SCV Executive Committee for many years. More info at:

http://www.24-7pressrelease.com/press-release/ieee-santa-clara-valley-section-honoring-ethernets-unsung-heroes-at-ethernet-technology-summits-40th-anniversary-of-ethernet-awards-ceremony-336450.php

2013 IDC Directions Part III- Where Are We Headed with Software-Defined Networking (SDN)?

Introduction:

In the third article on the IDC Directions 2013 Conference (March 5th in Santa Clara, CA), we take a hard look at Software Defined Networking as presented by Rohit Mehra, IDC VP for Network Infrastructure.

Note: Please see 2013 IDC Directions Part I for an explanation of the “3rd Platform” and its critical importance to the IT industry, and Part II on new data center dynamics and requirements.


Background:

IDC firmly believes that the “3rd Platform” is the way forward and that the network is the vital link between cloud computing and mobility.  “The Cloud is evolving into a comprehensive, integrated application delivery model incorporating all four elements of the 3rd platform,” said Mr. Mehra.

  • Cloud Apps require network agility, flexibility and must support higher east-west traffic flows (between servers in a cloud resident data center).
  • Mobile access is crucial with the proliferation of mobile devices (e.g. smart phones and tablets) and continued exponential growth of mobile data traffic.
  • Variable end points and different traffic patterns must be supported.
  • Social networking is being integrated with other enterprise applications. This is resulting in increased volumes of cloud data exchanges with client devices and more server-to-server traffic flows.
  • Big Data/Analytics results in scale-out computing which needs scale-out networking. Greater application-to-network visibility will be required.

As a result of these strong 3rd platform trends, Mr. Mehra said, “Application access/delivery is dependent on the cloud resident data center and enterprise network. Both will need to become more dynamic and flexible with SDN.”

IDC asked IT managers: What was the main reason you needed to re-architect the network to support Private Cloud? The top three reasons were:

  • We needed to ensure security between virtual servers
  • We needed more bandwidth to support the virtualized applications
  • The network became a bottleneck to new service provisioning

Rohit said that SDN could address those issues and was gaining traction in the data center. “SDN provides better alignment with the underlying applications, along with improved flexibility and command of the network,” he said. Through SDN models, companies will likely find it easier to implement virtual cloud hosting environments, according to Rohit.

A recent IDC study, SDN Shakes Up the Status Quo in Datacenter Networking, projected that the SDN market will increase from $360 million in 2013 to $3.7 billion in 2016.

SDN Attributes include:

  • Architectural model that leads to network virtualization
  • Dynamic exchange between applications and the network
  • Delivering programmable interfaces to the network (e.g., OpenFlow, APIs); see the illustrative sketch after this list
  • Management abstraction of the topology
  • Separation of control and forwarding functions (implemented in different equipment)
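
To give a concrete feel for “programmable interfaces to the network,” here is a hedged sketch of pushing a flow rule to an SDN controller’s northbound REST API. The controller address, endpoint path, and JSON schema are hypothetical, invented for illustration; real controllers (OpenDaylight, Floodlight, etc.) each define their own APIs.

```python
# Hedged sketch: installing a flow via a hypothetical SDN controller
# northbound REST API. Endpoint and schema are illustrative, not real.
import json
import urllib.request

CONTROLLER = "http://controller.example.com:8080"  # hypothetical controller

flow = {
    "switch": "00:00:00:00:00:00:00:01",           # datapath ID
    "priority": 100,
    "match": {"eth_type": "0x0800", "ipv4_dst": "10.0.0.5", "tcp_dst": 80},
    "actions": [{"type": "OUTPUT", "port": 2}],    # send web traffic to port 2
}

req = urllib.request.Request(
    f"{CONTROLLER}/flows",                         # hypothetical path
    data=json.dumps(flow).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)                             # expect 200/201 on success
```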

Rohit stated that SDN was NOT another name for “Cloud-based Networking” and that they were each in functionally different domains:

  • Cloud-based Networking involves emerging network provisioning, configuration and management offerings that leverage cloud Computing and Storage capabilities.
  • It’s a “Network As A Service” model that can apply to routers, WLAN, Unified Communications, app delivery, etc.

Rohit expects network equipment and network management vendors to add these capabilities to their platforms in 2013.

Three Emerging SDN Deployment Models are envisioned by IDC:

1. Pure OpenFlow (more on the role of OpenFlow later in this article)

  • Driven largely by being open and standards-based (by Open Networking Foundation or ONF)
  • Inhibited by fluidity of OpenFlow release schedule; limited support in existing switches

2. Overlays

  • Exemplified by Nicira/VMware’s Network Virtualization Platform (NVP), IBM’s DOVE, and others; the encapsulation idea behind overlays is sketched below
  • Some vendors that started out offering “pure OpenFlow” have adopted overlays (Big Switch Networks)
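
For a flavor of how overlay encapsulation works, here is a minimal sketch that builds a VXLAN header (the 8-byte header carrying a 24-bit virtual network identifier, per the VXLAN spec, later published as RFC 7348) around a placeholder tenant frame. A real vSwitch would also add outer UDP, IP, and Ethernet headers, so the physical network sees only the outer addresses.

```python
# Minimal VXLAN encapsulation sketch; payload and VNI are illustrative.
import struct

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags byte (I bit set), reserved bits, 24-bit VNI."""
    flags = 0x08 << 24                  # I flag in the first 32-bit word
    return struct.pack("!II", flags, vni << 8)

inner_frame = b"...tenant L2 frame..."  # placeholder for the tenant's packet
encapsulated = vxlan_header(vni=5001) + inner_frame
print(encapsulated[:8].hex())           # 0800000000138900 -> VNI 0x001389 = 5001
```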

3. Hybrid (Overlay, OpenFlow, Other Protocols/APIs)

  • Put forward by established networking players such as Cisco and Juniper
  • Offer SDN controller, with support for distributed control plane for network programmability and virtualization, etc.
[Image courtesy of IDC]

SDN vendors are offering SDN solutions from four different perspectives. Many of them solely target one of the four, while others offer a combination of the following:

  • SDN enabled switches, routers, and network equipment in the data/forwarding plane
  • Software tools and technologies that serve to provide virtualization and control (including vSwitches, controllers, gateways, overlay technologies)
  • Network services and applications that involve Layers 4-7, security, network analytics, etc
  • Professional service offerings around the SDN eco-system

SDN’s Place in the Data Center: IDC sees two emerging approaches:

1. Some vendors will push SDN within the framework of converged infrastructure (servers, storage, network, management)

  • Appeals to enterprises looking for simplicity, ready integration, and “one throat to choke”
  • Vendors include HP, Dell, IBM, Cisco, Oracle and others

2. Some IT vendors will offer a software-defined data center, where physical hardware is virtualized, centrally managed, and treated as an abstracted resource that can be dynamically provisioned/configured.

  • Vendors include VMware, Microsoft, perhaps IBM
[Image courtesy of IDC]

SDN Will Provide CapEx and OpEx Savings:

OpEx

  • Better control and alignment of virtual and physical resources
  • Automated configuration and management of the physical network
  • Service agility and velocity

CapEx

  • Move to software/virtual appliances running on x86 hardware can reduce expenditures on proprietary hardware appliances
  • Support for network virtualization improves utilization of server and switch hardware
  • Potentially cheaper hardware as SDN value chain matures (long-term, not today)

Role of OpenFlow as SDN Matures:

  • Initial OpenFlow interest and adoption came from the research community, cloud service providers (e.g., Google, Facebook) and select enterprise verticals (e.g., education)
  • Led to successful launch of Open Networking Foundation (ONF)
  • Centralized control and programmability are the primary use case, but that may also be its limitation
  • At a crossroads now: OpenFlow is taking time to mature and develop, while alternative solutions are emerging
  • As the market for SDN matures, OpenFlow is likely to be one of many tools and technologies, but not the only protocol used between control plane virtual switches/servers and data forwarding equipment in the network (a toy match-action table follows this list)
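
Since OpenFlow’s core abstraction is a prioritized match-action flow table, the toy Python sketch below models one in plain data structures. It uses no real OpenFlow library; the fields and actions are illustrative only.

```python
# Toy match-action flow table, the core OpenFlow abstraction (illustrative).
flow_table = [
    # (priority, match fields, action)
    (200, {"ipv4_dst": "10.0.0.5", "tcp_dst": 80}, "output:2"),
    (100, {"ipv4_dst": "10.0.0.5"},                "output:3"),
    (0,   {},                                      "send_to_controller"),  # table-miss
]

def lookup(packet: dict) -> str:
    """Return the action of the highest-priority entry whose fields all match."""
    for _prio, match, action in sorted(flow_table, key=lambda e: -e[0]):
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"

print(lookup({"ipv4_dst": "10.0.0.5", "tcp_dst": 80}))  # -> output:2
print(lookup({"ipv4_dst": "192.0.2.1"}))                # -> send_to_controller
```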

SDN Challenges and Opportunities for SDN Vendors and Customers:

  • Vendors will need to consider adding professional services to their SDN portfolio
  • The value chain will benefit from these services early within the market adoption cycle
  • Need for SDN certification and training programs to engage partner and customer constituencies and to reduce political friction associated with change
  • Education on use cases is critical to getting vendor message across, and for creating broader enthusiasm for change among customers
  • Customers must ensure that they have the right mix of skills to evaluate, select, deploy, and manage SDN
  • The battle to break down internal silos will intensify: aligning applications and networks means aligning the teams that run them
[Image courtesy of IDC]

Conclusions:

1. SDN is rapidly gaining traction as a potentially disruptive technology transition, the likes of which networking has not seen in a long time
2. SDN is riding the wave of a “Perfect Storm,” with many individual market and technology factors coming together:

  • Growth of Cloud Services/Applications
  • Focus on converged infrastructures (compute/storage/network)
  • Emergence of Software-Defined Data Center (SDDC)
  • Lessons learned (and benefits) from server virtualization

3. SDN brings us closer to application and network alignment with next-generation IT
4. Incumbent vendors will need to find the right fit between showing leadership in SDN innovation and balancing existing portfolio investments


Addendum: Software Defined Networks and Large-Scale Network Virtualization Combine to Drive Change in Telecom Networks

In a March 7th press release IDC wrote that SDN along with large-scale network virtualization are two emerging telecom industry technologies that will combine to drive a more software-centric and programmable telecom infrastructure and services ecosystem. These complementary and transformative technologies will have a sustained impact on today’s communication service providers and the way they do business.

“IDC believes that the rapid global growth of data and video traffic across all networks, the increasing use of public and private cloud services, and the desire from consumers and enterprises for faster, more agile service and application delivery are driving the telecom markets toward an inevitable era of network virtualization,” said Nav Chander, Research Manager, Telecom Services and Network Infrastructure, IDC.

“SDN and large-scale network virtualization will become a game shifter, providing important building blocks for delivering future enterprise and hybrid, private, and public cloud services,” he added. Additional findings from IDC’s research include the following:

  • Time to service agility is a key driver for SDN concepts
  • Lowering OpEx spend is a bigger driver than lowering CapEx for CSPs
  • Network Function Virtualization and SDN will emerge as key components of both operator service strategies and telecom networking vendors’ product strategies

The IDC study, Will New SDN and Network Virtualization Technology Impact Telecom Networks? (IDC #239399), examines the rapidly emerging software-defined network (SDN) market, the developments in large-scale network virtualization, and a new Network Functions Virtualization ecosystem, which are likely to have an impact on telecom equipment vendors’ and CSP customers’ plans for next-generation wireline and wireless network infrastructure.


References:

http://community.comsoc.org/blogs/alanweissberger/fbr-sdn-result-40-drop-switchrouter-ports-deployed-service-providerslarge-en-0

http://community.comsoc.org/blogs/alanweissberger/googles-largest-internal-network-interconnects-its-data-centers-using-software


IEEE ComSoc SCV hosted the two leaders of the SDN movement at one of our technical meetings last year. Their presentations are posted in the 2012 meeting archive section of the chapter website:

Date: Wednesday, July 11, 2012; 6:00pm-8:30pm
Title: Software Defined Networking (SDN) Explained — New Epoch or Passing Fad?
Speaker 1: Guru Parulkar, Executive Director of Open Networking Research Center
Subject: SDN: New Approach to Networking
Speaker 2: Dan Pitt, Executive Director at the Open Networking Foundation
Subject: The Open Networking Foundation
http://www.ewh.ieee.org/r6/scv/comsoc/ComSoc_2012_Presentations.php