2014 Hot Interconnects: Hardware for Software Defined Networks & NFV – Part II.


This closing Hot Interconnects session on Hardware for Software Defined Networks (SDN) & Network Function Virtualization (NFV) was very informative. It revealed very interesting carrier market research and raised quite a few questions and open issues related to dedicated network hardware for SDN/NFV networks.  It was not, however, a “Venture Capital Forum” as the session was labeled in the program agenda [VC Forum-Fireside Dialogue: Mind the Silicon Innovation Gap].

Presentation & Discussion:

Infonetics co-founder and principal analyst Michael Howard led off the session with a presentation that contained a very intriguing time-line for network operators’ experiments with and deployments of SDN/NFV (see chart below).

Operator SDN and NFV timeline, according to Infonetics.
Image courtesy of Infonetics

Author’s note:  We believe the SDN/NFV market won’t accelerate until 2018 or later, due to customer confusion over the many proprietary, vendor-specific approaches.  That estimate assumes standards will by then be in place to facilitate multi-vendor interoperability for SDN/NFV.


Here are the key points made by Mr. Howard during his presentation:

  • 2015 will be a year of field trials and a few commercial deployments.  Carriers will gather information and evaluate network and subscriber behavior during those trials.
  • 2016-2020: operators deploy several SDN & NFV use cases, then more each year.  The commercial SDN/NFV deployment market will begin to ramp up (and vendors will start making money on the new technologies).
  • Infonetics includes SDN-optimized network hardware in its view of the 2020 carrier network architecture.  Distributed control, real-time analytics, and policy inputs are characteristics of Centralized Control & Orchestration, which will “control the controllers” in an end-to-end SDN based network.
  • NFV is all about moving implementation of carrier services from physical routers to Virtual Network Functions (VNFs), which run as software on commercial servers.*
  • Virtual Enterprise (vE)-CPE is top NFV use case for carrier revenue generation.

*  Note: The movement of services and associated functionality from hardware routers to VNFs that are implemented in software on commercially available compute servers is a very significant trend that appears to be gaining momentum and support.

Sterling Perrin of Heavy Reading wrote in a blog post: “Virtual routers in the WAN and NFV are tightly coupled trends. Routing functions are being virtualized in NFV, Operators are eyeing edge/access functions, at least initially; and key will be ensuring performance in virtualized networks.”


Infonetics found that some of the top carrier ranked service functions proposed for VNFs are: carrier grade Network Address Translation (NAT),  Content Delivery Network (CDN), IP-MPLS VPN & VPN termination, Intrusion Detection Systems (IDS) & Prevention (IPS), broadband remote access server (BRAS or B-RAS), firewall, load balancing, QoS support, Deep Packet Inspection (DPI), and WAN optimization controller.
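One of the VNF candidates listed above, carrier-grade NAT, illustrates what “moving a function from a physical router to software on a commodity server” means in practice. The following is a deliberately minimal, hypothetical sketch (not from the Infonetics material): at its core, a NAT function is a translation table mapping private (address, port) flows to public ones.

```python
# Illustrative sketch only: a software NAT reduces to a translation table
# mapping private (ip, port) flows to public (ip, port). Class and method
# names are hypothetical; a carrier-grade implementation adds timeouts,
# logging, ALGs, and hardware offload.

class SimpleNat:
    """Toy NAT table: private (ip, port) <-> public (ip, port)."""

    def __init__(self, public_ip, port_range=(20000, 60000)):
        self.public_ip = public_ip
        self.next_port = port_range[0]
        self.max_port = port_range[1]
        self.out_map = {}   # (priv_ip, priv_port) -> public_port
        self.in_map = {}    # public_port -> (priv_ip, priv_port)

    def translate_outbound(self, priv_ip, priv_port):
        """Return the public (ip, port) for a private flow, allocating if new."""
        key = (priv_ip, priv_port)
        if key not in self.out_map:
            if self.next_port > self.max_port:
                raise RuntimeError("NAT port pool exhausted")
            self.out_map[key] = self.next_port
            self.in_map[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.out_map[key])

    def translate_inbound(self, public_port):
        """Map a public port back to the private flow, or None if unknown."""
        return self.in_map.get(public_port)
```

Running such a table as a process on a commercial server, rather than in a router line card, is exactly the hardware-to-software shift the survey respondents described.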

Here is a supportive quote from Infonetics’ recent Service Provider Router Market Study:

“In our Routing, IP Edge, and Packet-Optical Strategies: Global Service Provider Survey, July, 2014, we interviewed router purchasing decision makers at 32 operators worldwide — incumbent, competitive, mobile, and cable operators from EMEA, Asia Pacific, North America, and CALA, that together control 41% of worldwide service provider capex.

In this survey, we focused on plans for moving router functions from physical routers to software (known as vRouters, which run on commercial servers); 100GE adoption; plans for P-OTS (packet-optical transport systems); and metro architectural changes typically embedded in NG-COs (next generation central offices—big telco Central Offices (COs) spread around a metro area).

In our first-time measure of the SDN-NFV hardware-focus to software-focus trend that affects the router market directly, 60% to 75% of respondents are either definitely or likely moving eight different functions from physical edge routers to software vRouters running on commercial servers in mini data centers in NG-COs. This will shift some edge router spending to software and NFV infrastructure, but will not replace the need for edge routers to handle traffic.”

Michael Howard’s post-conference clarification: to be more exact, many of the network functions now running on physical routers will be moved to virtualized network functions, or VNFs, that run on commercial servers. The vRouter likely won’t include the VNFs.  This is a terminology distinction that is still being formed in the industry.

With 60% to 75% of routing functions being moved to VNFs running on commercial servers, it seems that the majority of SDN/NFV development efforts will be on the software side.  How then will hardware be optimized for SDN/NFV?

Some of the silicon hardware functions being proposed for SDN/NFV networks include: encryption, DPI, load balancing and QoS support.  Open Flow, on the other hand, won’t be implemented in hardware because a hardware based state machine wouldn’t be easy to change quickly.

How much hardware optimization is needed if generic processors are used to implement most vRouter functions?  While that’s an open question, it’s believed that hardware optimization is most needed at the network edge (that assumes dumb long haul pipes).

Intel’s DPDK (Data Plane Development Kit) was mentioned as a way to “speed up network intelligence of equipment.” [DPDK is a set of software libraries that can improve packet processing.]

Some open issues for successful network hardware placement and optimization include:

  • What dedicated network hardware functions, if any, will be put in a commercial compute server that’s hosting one or more vRouters?
  • What hardware/silicon functions will go into dedicated edge or core routers/switches?
  • Lack of visibility of QoS across multiple network hops/nodes. How to ensure end-to-end QoS/SLAs?
  • How should QoS functions be partitioned between dedicated hardware (e.g. packet classification and priority queueing) vs. software implementation?
  • Will dongles be attached to commercial servers to monitor network performance?
  • What types of timing references and clocking are needed in vRouters and dedicated networking hardware?
  • How will NFV infrastructure be orchestrated, coordinated, and managed?

Summary & Conclusions:

  • Top drivers for operator investments in NFV (from Infonetics survey) are:
  1. Service agility to increase revenue
  2. Capex reduction (use commercial servers, not purpose-built network gear)
  3. More efficient and automated operations
  • Revenue from new or enhanced services is the top driver for SDN as well
  • PE (Provider Edge) router is under attack by virtual PE vRouters
  • Services move from physical networking equipment to VNFs on servers
  • vE-CPE is top NFV use case for revenue generation

In closing, Mr. Howard stated:

“The move to SDN and NFV will change the way operators make equipment purchasing decisions, placing a greater focus on software. Though hardware will always be required, its functions will be refined, and the agility of services and operations will be driven by software.”

2014 Hot Interconnects Semiconductor Session Highlights & Takeaways- Part I.


With the Software Defined Networking (SDN), Storage and Data Center movements firmly entrenched, one might believe there’s not much opportunity for innovation in dedicated hardware implemented in silicon.  Several sessions at the 2014 Hot Interconnects conference, especially one from ARM Ltd, indicated that was not the case at all.

With the strong push for open networks, chips have to be much more flexible and agile, as well as more powerful, fast and functionally dense. Of course, there are well known players for specific types of silicon. For example: Broadcom for switch/router chips; ARM for CPU cores (also Intel and MIPS/Imagination Technologies); many vendors for Systems on a Chip (SoCs), which include one or more CPU cores, mostly from ARM (Qualcomm, Nvidia, Freescale, etc.); Network Processors (Cavium, LSI-Avago/Intel, PMC-Sierra, EZchip, Netronome, Marvell, etc.); and bus interconnect fabrics (Arteris, Mellanox, PLX/Avago, etc.).

What’s not known is how these types of components, especially SoC’s, will evolve to support open networking and software defined networking in telecom equipment (i.e. SDN/NFV).    Some suggestions were made during presentations and a panel session at this year’s excellent Hot Interconnects conference.

We summarize three invited Hot Interconnects presentations related to network silicon in this article. Our follow on Part II article will cover network hardware for SDN/NFV based on an Infonetics presentation and service provider market survey.

  1. Data & Control Plane Interconnect Solutions for SDN & NFV Networks, by Raghu Kondapalli, Director of Strategic Planning at LSI/Avago (Invited Talk)

Open networking, such as SDN (Software Defined Networking) and NFV (Network Function Virtualization), provides software control of many network functions.   NFV enables virtualization of entire classes of network element functions such that they become modular building blocks that may be connected, or chained, together to create a variety of communication services.

Software defined and functionally disaggregated network elements rely heavily on deterministic and secure data and control plane communication within and across the network elements. In these environments the scalability, reliability and performance of the whole network rely heavily on the deterministic behavior of this interconnect.  Increasing network agility and lower equipment prices are causing severe disruption in the networking industry.

A key SDN/NFV implementation issue is how to disaggregate network functions in a given network element (equipment type).  With such functions modularized, they could be implemented in different types of equipment along with dedicated functions (e.g. PHYs to connect to wire-line or wireless networks).  The equipment designer needs to: disaggregate, virtualize, interconnect, orchestrate and manage such network functions.

“Functional coordination and control plane acceleration are the keys to successful SDN deployments,” he said.  Not coincidentally, the LSI/Avago Axxia multicore communication processor family (using an ARM CPU core) is being positioned for SDN and NFV acceleration, according to the company’s website. Other important points made by Raghu:

  • Scale poses many challenges for state management and traffic engineering
  • Traffic Management and Load Balancing are important functions
  • SDN/NFV backbone network components are needed
  • Disaggregated architectures will prevail.
  • Circuit board interconnection (backplane) should consider the traditional passive backplane vs. an active switch fabric.

The Axxia 5516 16-core communications processor was suggested as the SoC to use for an SDN/NFV backbone network interface.  Functions identified included:  Ethernet switching, protocol pre-processing, packet classification (QoS), traffic rate shaping, encryption, security, Precision Time Protocol (IEEE 1588) to synchronize distributed clocks, etc.

Axxia’s multi-core SoCs were said to contain various programmable function accelerators to offer a scalable data and control plane solution.

Note:  Avago recently acquired semiconductor companies LSI Corp. and PLX Technology, but has now sold its Axxia Networking Business (originally from LSI which had acquired Agere in 2007 for $4 billion) to Intel for only $650 million in cash.  Agere Systems (which was formerly AT&T Micro-electronics- at one time the largest captive semiconductor maker in the U.S.) had a market capitalization of about $48 billion when it was spun off from Lucent Technologies in Dec 2000.

  2. Applicability of Open Flow based connectivity in NFV Enabled Networks, by Srinivasa Addepalli, Fellow and Chief Software Architect, Freescale (Invited Talk)

Mr. Addepalli’s presentation addressed the performance challenges in VMMs (Virtual Machine Monitors) and the opportunities to offload VMM packet processing using SoC’s like those from Freescale (another ARM core based SoC).   The VMM layer enables virtualization of networking hardware and exposes each virtual hardware element to VMs.

“Virtualization of network elements reduces operation and capital expenses and provides the ability for operators to offer new network services faster and to scale those services based on demand. Throughput, connection rate, low latency and low jitter are few important challenges in virtualization world. If not designed well, processing power requirements go up, thereby reducing the cost benefits,” according to Addepalli.

He positioned Open Flow as a communication protocol between control/offload layers, rather than the ONF’s API/protocol between the control and data planes (residing in the same or different equipment, respectively).  A new role for Open Flow in VMM and vNF (Virtual Network Function) offloads was described and illustrated.
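Under either interpretation, OpenFlow’s core abstraction is the same: a controller (or, in Mr. Addepalli’s model, an offload manager) installs match/action entries into a flow table, and the data path consults that table per packet, punting misses back to the controller. The sketch below is a hypothetical, simplified illustration of that model, not the OpenFlow wire protocol; field names and actions are invented for clarity.

```python
# Hedged sketch of an OpenFlow-style match/action flow table. A control or
# offload layer installs entries; the data path matches packet header fields
# against them in priority (insertion) order and falls back to the controller
# on a table miss. This illustrates the model only, not the real wire format.

class FlowTable:
    def __init__(self):
        self.entries = []  # list of (match_dict, action), searched in order

    def install(self, match, action):
        """Called by the control/offload layer to program the data path."""
        self.entries.append((match, action))

    def lookup(self, packet):
        """Data-path fast path: first entry whose fields all match wins."""
        for match, action in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"  # table miss -> slow path
```

In the VMM-offload role described in the talk, the “controller” side of this exchange would live in the hypervisor and the table in a SoC’s packet-processing hardware, so that established flows never touch the VMM’s software path.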

The applicability of OpenFlow to NFV1 faces two challenges, according to Mr. Addepalli:

  1. VMM networking
  2. Virtual network data path to VMs

Note 1.  The ETSI NFV Industry Specification Group (ISG) is not considering the use of ONF’s Open Flow, or any other protocol, for NFV at this time.  Its work scope includes reference architectures and functional requirements, but not protocol/interface specifications.  The ETSI NFV ISG will reach the end of Phase 1 by December 2014, with the publication of the remaining sixteen deliverables.

“To be successful, NFV must address performance challenges, which can best be achieved with silicon solutions,” Srinivasa concluded.   [The problem with that statement is that the protocols/interfaces to be used for fully standardized NFV have not been specified by ETSI or any other standards body.  Hence, no one knows the exact combination of NFV functions that have to perform well.]

  3. The Impact of ARM in Cloud and Networking Infrastructure, by Bob Monkman, Networking Segment Marketing Manager at ARM Ltd.

Bob revealed that ARM is innovating way beyond the CPU core it’s been licensing for years.  There are hardware accelerators, a cache coherent network and various types of network interconnects that have been combined into a single silicon block, shown in the figure below:

Image courtesy of ARM - innovating beyond the core.
Image courtesy of ARM

Bob said something I thought was quite profound and dispels the notion that ARM is just a low power, core CPU cell producer: “It’s not just about a low power processor – it’s what you put around it.”  As a result, ARM cores are being included in SoC vendor silicon for both  networking and storage components. Those SoC companies, including LSI/Avago Axxia  and Freescale (see above), can leverage their existing IP by adding their own cell designs for specialized networking hardware functions (identified at the end of this article in the Addendum).

Bob noted that the ARM ecosystem is conducive to the disruption now being experienced in the IT industry, with software control of so many types of equipment.  The evolving network infrastructure – SDN, NFV, other Open Networking- is all about reducing total cost of ownership and enabling new services with smart and adaptable building blocks.  That’s depicted in the following illustration:

Evolving infrastructure is reducing costs and enabling new services.
Image courtesy of ARM.

Bob stated that one SoC size does not fit all.  For example, one type of SoC can contain: high performance CPU, power management, premises networking, storage & I/O building blocks.  While one for SDN/NFV might include: a high performance CPU, power management, I/O including wide area networking interfaces, and specialized hardware networking functions.

Monkman articulated very well what most already know:  that Networking and Server equipment are often being combined in a single box (they’re “colliding,” he said).  [In many cases, compute servers are running network virtualization (e.g. VMware), acceleration, packet pre-processing, and/or control plane software (SDN model).]  Flexible intelligence is required on an end-to-end basis for this to work out well.  The ARM business model was said to enable innovation and differentiation, especially since the ARM CPU core has reached the 64 bit “inflection point.”

ARM is working closely with the Linaro Networking and Enterprise Groups. Linaro is a non-profit industry group creating open source software that runs on ARM CPU cores.  Member companies fund Linaro and provide half of its engineering resources as assignees who work full time on Linaro projects. These assignees combined with over 100 of Linaro’s own engineers create a team of over 200 software developers.

Bob said that Linaro is creating an optimized, open-source platform software for scalable infrastructure (server, network & storage).  It coordinates and multiplies members’ efforts, while accelerating product time to market (TTM).  Linaro open source software enables ARM partners (licensees of ARM cores) to focus on innovation and differentiated value-add functionality in their SoC offerings.

Author’s Note:  The Linaro Networking Group (LNG) is an autonomous segment focused group that is responsible for engineering development in the networking space. The current mix of LNG engineering activities includes:

  • Virtualization support with considerations for real-time performance, I/O optimization, robustness and heterogeneous operating environments on multi-core SoCs.
  • Real-time operations and the Linux kernel optimizations for the control and data plane
  • Packet processing optimizations that maximize performance and minimize latency in data flows through the network.
  • Dealing with legacy software and mixed-endian issues prevalent in the networking space
  • Power Management
  • Data Plane Programming API:

For more information: https://wiki.linaro.org/LNG

OpenDataPlane (ODP) http://www.opendataplane.org/ was described by Bob as a “truly cross-platform, truly open-source and open contribution interface.” From the ODP website:

ODP embraces and extends existing proprietary, optimized vendor-specific hardware blocks and software libraries to provide inter-operability with minimal overhead. Initially defined by members of the Linaro Networking Group (LNG), this project is open to contributions from all individuals and companies who share an interest in promoting a standard set of APIs to be used across the full range of network processor architectures available.

Author’s Note:   There’s a similar project from Intel called DPDK, or Data Plane Development Kit, that an audience member referenced during Q&A. We wonder whether those APIs are viable alternatives to, or can be used in conjunction with, the ONF’s OpenFlow API.

Next Generation Virtual Network Software Platforms, along with network operator benefits, are illustrated in the following graphic:

An image depicting the Next-Gen virtualized network software platforms.
Image courtesy of ARM.

Bob Monkman’s Summary:

  • Driven by total cost of ownership, the data center workload shift is leading to more optimized and diverse silicon solutions
  • Network infrastructure is also well suited for the same highly integrated, optimized and scalable solutions ARM’s SoC partners understand and are positioned to deliver
  • Collaborative business model supports “one size does not fit all approach,” rapid pace of innovation, choice and diversity
  • Software ecosystem (e.g. Linaro open source) is developing quickly to support target markets
  • ARM ecosystem is leveraging standards and open source software to accelerate deployment readiness


In a post conference email exchange, I suggested several specific networking hardware functions that might be implemented in a SoC (with 1 or more ARM CPU cores).  Those include:  encryption, packet classification, Deep Packet Inspection, security functions, intra-chip or inter-card interface/fabric, fault & performance monitoring, and error counters.

Bob replied: “Yes, security acceleration such as SSL operations; counters of various sorts -yes; less common on the fault notification and performance monitoring. A recent example is found in the Mingoa acquisition, see: http://www.microsemi.com/company/acquisitions ”



End NOTE:  Stay tuned for Part II which will cover Infonetics’ Michael Howard’s presentation on Hardware and market trends for SDN/NFV.

2014 Hot Interconnects Highlight: Achieving Scale & Programmability in Google's Software Defined Data Center WAN


Amin Vahdat, PhD & Distinguished Engineer and Lead Network Architect at Google, delivered the opening keynote at 2014 Hot Interconnects, held August 26-27 in Mountain View, CA. His talk presented an overview of the design and architectural requirements to bring Google’s shared infrastructure services to external customers with the Google Cloud Platform.

The wide area network underpins storage, distributed computing, and security in the Cloud, which is appealing for a variety of reasons:

  • On demand access to compute servers and storage
  • Easier operational model than premises based networks
  • Much greater up-time, i.e. five 9’s reliability; fast failure recovery without human intervention, etc
  • State of the art infrastructure services, e.g. DDoS prevention, load balancing, storage, complex event & stream processing, specialised data aggregation, etc
  • Different programming models unavailable elsewhere, e.g. low latency, massive IOPS, etc
  • New capabilities; not just delivering old/legacy applications cheaper

Andromeda- more than a galaxy in space:

Andromeda – Google’s code name for its managed virtual network infrastructure – is the enabler of Google’s cloud platform, which provides many services to simultaneous end users. Andromeda provides Google’s customers/end users with robust performance, low latency and security services that are as good as or better than private, premises based networks. Google has long focused on shared infrastructure among multiple internal customers and services, and on delivering scalable, highly efficient services to a global population.

An image of Google's Andromeda Controller diagram.
Click to view larger version. Image courtesy of Google

“Google’s (network) infra-structure services run on a shared network,” Vahdat said. “They provide the illusion of individual customers/end users running their own network, with high-speed interconnections, their own IP address space and Virtual Machines (VMs),” he added.  [Google has been running shared infrastructure since at least 2002 and it has been the basis for many commonly used scalable open-source technologies.]

From Google’s blog:

“Andromeda’s goal is to expose the raw performance of the underlying network while simultaneously exposing network function virtualization (NFV). We expose the same in-network processing that enables our internal services to scale while remaining extensible and isolated to end users. This functionality includes distributed denial of service (DDoS) protection, transparent service load balancing, access control lists, and firewalls. We do this all while improving performance, with more enhancements coming.  Hence, Andromeda itself is not a Cloud Platform networking product; rather, it is the basis for delivering Cloud Platform networking services with high performance, availability, isolation, and security.”

Google uses its own versions of SDN and NFV to orchestrate provisioning, high availability, and to meet or exceed application performance requirements for Andromeda. The technology must be distributed throughout the network, which is only as strong as its weakest link, according to Amin.  “SDN” (Software Defined Networking) is the underlying mechanism for Andromeda. “It controls the entire hardware/software stack, QoS, latency, fault tolerance, etc.”

“SDN’s” fundamental premise is the separation of the control plane from the data plane, Google and everyone else agrees on that. But not much else!  Amin said the role of “SDN” is overall co-ordination and orchestration of network functions. It permits independent evolution of the control and data planes. Functions identified under SDN supervision were the following:

  • High performance IT and network elements: NICs, packet processors, fabric switches, top of rack switches, software, storage, etc.
  • Audit correctness (of all network and compute functions performed)
  • Provisioning with end to end QoS and SLA’s
  • Ensuring high availability (and reliability)

“SDN” in Andromeda–Observations and Explanations:

“A logically centralized hierarchical control plane beats peer-to-peer (control plane) every time,” Amin said. Packet/frame forwarding in the data plane can run at network link speed, while the control plane can be implemented in commodity hardware (servers or bare metal switches), with scaling as needed. The control plane requires 1% of the overhead of the entire network, he added.

As expected, Vahdat did not reveal any of the APIs/ protocols/ interface specs that Google uses for its version of “SDN.” In particular, the API between the control and data plane (Google has never endorsed the ONF specified Open Flow v1.3). Also, he didn’t detail how the logically centralized, but likely geographically distributed control plane works.

Amin said that Google was making “extensive use of NFV (Network Function Virtualization) to virtualize SDN.” Andromeda NFV functions, illustrated in the above block diagram, include: Load balancing, DoS, ACLs, and VPN. New challenges for NFV include: fault isolation, security, DoS, virtual IP networks, mapping external services into name spaces and balanced virtual systems.

Managing the Andromeda infrastructure requires new tools and skills, Vahdat noted. “It turns out that running a hundred or a thousand servers is a very difficult operation. You can’t hire people out of college who know how to operate a hundred or a thousand servers,” Amin said. Tools are often designed for homogeneous environments and individual systems. Human reaction time is too slow to deliver “five nines” of uptime, maintenance outages are unacceptable, and the network becomes a bottleneck and source of outages.

Power and cooling are the major costs of a global data center and networking infrastructure like Google’s. “That’s true of even your laptop at home if you’re running it 24/7. At Google’s mammoth scale, that’s very apparent,” Vahdat said.

Applications require real-time high performance and low-latency communications to virtual machines. Google delivers those capabilities via its own Content Delivery Network (CDN).  Google uses the term “cluster networking” to describe huge switch/routers which are purpose-built out of cost efficient building blocks.

In addition to high performance and low latency, users may also require service chaining and load-balancing, along with extensibility (the capability to increase or reduce the number of servers available to applications as demand requires). Security is also a huge requirement. “Large companies are constantly under attack. It’s not a question of whether you’re under attack but how big is the attack,” Vahdat said.

[“Security will never be the same again. It’s a losing battle,” said Martin Casado, PhD during his Cloud Innovation Summit keynote on March 27, 2014]

Google has a global infrastructure, with data centers and points of presence worldwide to provide low-latency access to services locally, rather than requiring customers to access a single point of presence. Google’s software defined WAN (backbone private network) was one of the first networks to use “SDN”. In operation for almost three years, it is larger and growing faster than Google’s customer facing Internet Connectivity between Google’s cloud resident data centers and is comparable to the data traffic within a premises based data center, according to Vahdat.

Note 1.   Please refer to this article: Google’s largest internal network interconnects its Data Centers using Software Defined Network (SDN) in the WAN

“SDN” opportunities and challenges include:

  • Logically centralized network management- a shift from fully decentralized, box to box communications
  • High performance and reliable distributed control
  • Eliminate one-off protocols (not explained)
  • Definition of an API that will deliver NFV as a service

Cloud Caveats:

While Vahdat believes in the potential and power of cloud computing, he says that moving to the cloud (from premises based data centers) still poses all the challenges of running an IT infrastructure. “Most cloud customers, if you poll them, say the operational overhead of running on the cloud is as hard or harder today than running on your own infrastructure,” Vahdat said.

“In the future, cloud computing will require high bandwidth, low latency pipes.” Amin cited a “law” this author never heard of: “1M bit/sec of I/O is required for every 1MHz of CPU processing (computations).” In addition, the cloud must provide rapid network provisioning and very high availability, he added.
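The rule of thumb Amin cited is easy to apply as back-of-the-envelope arithmetic. The sketch below simply encodes “1 Mbit/sec of I/O per 1 MHz of CPU”; the server configuration in the comment is an illustrative example, not a figure from the talk.

```python
# Back-of-the-envelope application of the balanced-system rule Vahdat cited:
# roughly 1 Mbit/s of network I/O per 1 MHz of CPU. Inputs are illustrative.

def required_io_gbps(cores, clock_ghz):
    """Network I/O (Gbit/s) needed to keep `cores` CPUs at `clock_ghz` busy."""
    total_mhz = cores * clock_ghz * 1000   # aggregate compute in MHz
    total_mbps = total_mhz                 # 1 Mbit/s per MHz
    return total_mbps / 1000               # convert to Gbit/s

# Example: a 16-core, 2.5 GHz server comes out to 40 Gbit/s of I/O
# under this rule of thumb -- well beyond a single 10GE NIC.
```

Whatever the exact coefficient, the point of the rule is the one made in the surrounding text: modern multi-core servers need far more network bandwidth than typical deployments provision if the CPUs are not to stall waiting on I/O.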

Network switch silicon and NPUs should focus on:

  • Hardware/software support for more efficient read/write of switch state
  • Increasing port density
  • Higher bandwidth per chip
  • NPUs must provide much greater than twice the performance for the same functionality as general purpose microprocessors and switch silicon.

Note: Two case studies were presented which are beyond the scope of this article to review.  Please refer to a related article on 2014 Hot Interconnects Death of the God Box

Vahdat’s Summary:

Google is leveraging its decade plus experience in delivering high performance shared IT infrastructure in its Andromeda network.  Logically centralized “SDN” is used to control and orchestrate all network and computing elements, including: VMs, virtual (soft) switches, NICs, switch fabrics, packet processors, cluster routers, etc.  Elements of NFV are also being used with more expected in the future.


Addendum:  Amdahl’s Law

In a post conference email to this author, Amin wrote:

Here are a couple of references for Amdahl’s “law” on balanced system design:

Both essentially argue that for modern parallel computation, we need a fair amount of network I/O to keep the CPU busy (rather than stalled waiting for I/O to complete).
Most distributed computations today substantially under-provision I/O, largely because of significant inefficiency in the network software stack (RPC, TCP, IP, etc.) as well as the expense/complexity of building high performance network interconnects.  Cloud infrastructure has the potential to deliver balanced system infrastructure even for large-scale distributed computation.

Thanks, Amin

Technology Outlook: The Cable Show 2014

An image of the Imagine Cafe at the 2014 Cable Show.

The Cable Show 2014 was back in Los Angeles this year – which usually allows for a larger contingent of content folks to attend given the proximity of Hollywood. This year saw a good mix of technology folks rubbing shoulders with content-types but it almost felt like two shows in parallel – one set of tracks attended mostly by the techies of the industry and the other attended mostly by the content folks.

Some of the more interesting themes from the show are highlighted below.

Big Data

Several panels had experts from vendors and cable companies discussing the status of Big Data efforts in Cable. One recurring theme that was different from past shows was the interest in “Data science” and not just “Big Data infrastructure” in and of itself. That demonstrates a maturing of the technology and its assimilation into the Cable TV ecosystem.

Big Data infrastructure – such as Hadoop – provides the building blocks for processing large amounts of data. Data science refers to the extraction of knowledge from that data: applications running over the infrastructure apply algorithms to generate meaningful results, which in turn can be applied to solve various problems. A focus on data science requires that a reasonable Big Data infrastructure already exists, and it reflects the beginning of the maturity phase of Big Data in Cable.

The applications of Big Data discussed on panels included Dynamic Ad Insertion (DAI). In DAI, targeted advertising is implemented by analyzing subscriber behavior – for instance, comparing the advertisements inserted with those actually watched or skipped – leading to a better understanding of which advertisements a given subscriber is likely to watch.
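As a hypothetical illustration of the DAI analysis described above, the sketch below ranks ads by the fraction of insertions that were actually watched. The ad names, counts and data layout are invented for illustration.

```python
# Hypothetical sketch of the DAI analysis: given per-ad insertion and
# watch counts (i.e. not skipped), estimate which ads are most likely
# to be watched. All names and numbers are illustrative.
from collections import Counter

inserted = Counter({"ad_A": 1000, "ad_B": 1000})  # ads inserted
watched  = Counter({"ad_A": 720,  "ad_B": 310})   # ads actually watched

def watch_rate(ad):
    """Fraction of insertions of this ad that were watched."""
    return watched[ad] / inserted[ad]

# Rank ads by likelihood of being watched, best first.
ranking = sorted(inserted, key=watch_rate, reverse=True)
print(ranking)  # ['ad_A', 'ad_B']
```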

Cloud, SDN and the Open movement

Comcast has publicly announced that its X1 user interface is cloud based and is built using OpenStack. Private cloud – and specifically OpenStack – was one of the more frequent topics of discussion at the Cable Show 2014. It came up during the CTO panel and on several other panels, with most speakers mentioning it as a current or roadmap item. OpenStack has garnered enough support from the vendor ecosystem that Cable companies appear to feel confident enough to embrace it for their future endeavors, following in Comcast’s footsteps.

On the other hand, there was hardly any discussion of Software Defined Networking (SDN) at the Cable Show. When the author asked about MSO plans for SDN at the CTO panel, the answers were not very satisfactory. It appears the Cable industry is taking a wait-and-see approach to SDN. There are people at CableLabs working on SDN and Network Function Virtualization (NFV) efforts, but the operators themselves are still in the early phase of understanding SDN and NFV.

With the vendor ecosystem also coalescing around OpenFlow (an Open API for SDN’s Southbound interface), it would appear it is but a matter of time before we start seeing more SDN traction in the Cable industry.

The industry buzz around the OpenStack and OpenFlow initiatives brings a new trend to watch in the Cable industry – that of using Open Source products at a fraction of the cost of expensive vendor products. Add the availability of Apache Hadoop, and the Open trend seems certain to accelerate in Cable.

The Internet of Things

Several panels and exhibits focused on topics in the realm of the Internet of Things (IoT). The IoT is supposed to make every device Internet-enabled – thereby creating a lot of data every second of every day. This data will need to be processed and shared in near real-time and at a scale that would eclipse today’s biggest Big Data efforts.

The IoT session in the Imagine Park (or startup-city as I think of it!) was quite eye-opening in that it exposed the audience to the breadth of the impact of IoT in our daily lives.

One presentation, for instance, by Carnegie Mellon University showcased inexpensive “disposable” robot helicopters that could help with rescues inside burning buildings or with observing nuclear reactors. Another demonstrated wearable technologies that sense muscle “tiredness” in an athlete or a pilot and communicate that status in real-time to a monitoring authority. Perhaps the most interesting thing from a communications perspective was that the live demo of the wearable technology suffered from too many wireless devices in the demo area clogging up the WiFi bandwidth – so the information collected by the wearable sensors could not actually make it to the monitoring station!

Did I hear someone say, “Bandwidth is king”?

Viodi View – 05/27/14

Beware of the Unseen Competitor was the title of an article written many years ago that warned broadband operators of the rise of competitors from completely different market sectors. Of course, it is the Internet and the intelligence of things that helps turn products into mere features and brings in competition from seemingly disparate industries. In the Korner below, there is an example of this sort of disruptive development that could signal a revolution in the transport industry.

FCC Net Neutrality Proposal Stirs Up Controversy- Reclassify or Not? by Alan Weissberger

An image of Tom Wheeler, Chairman of the FCC.
FCC Chairman, Tom Wheeler (image courtesy of FCC.gov)

On May 15th the FCC Commissioners narrowly voted to approve a framework for rules that would create an Internet fast lane, while trying to patch up the loopholes that would make that fast lane possible. The proposal from FCC Chairman Tom Wheeler would ban broadband providers from blocking or slowing down websites, but leaves the door open for them to strike deals with content companies for preferential treatment, or fast lanes to customers.

Click here to read the rest of Weissberger’s article and add to the lively discussion that follows.

Cable Show 2014 Musings

The following are some observations from and reactions to the recent 2014 Cable Show.

A picture of the Comcast booth at the Cable Show 2014
Click to read more
  • Impressive Demos
  • Open Up DNS, Comcast
  • Is it a Revolution or More of the Same
  • Freedom to Be Creative
  • Tap a WiFi Hot Spot
  • 4K, 4K, 4K
  • Stay Tuned

Click here to read more.

The Name Says It All

Ken Pyle with Steve Weed of Wave Broadband at the ACA Summit 2014
Click to view

CBO (Community Broadband Operator) might be a better term to describe operators traditionally described as CATV (Community Antenna TeleVision) providers. The vision of Steve Weed, CEO of Wave Broadband, and his team has become reality, as they now have more broadband customers than video subscribers. With that context, he looks forward to the day, in the not-too-distant future, when a new form of Over-the-Top video provider – virtual MSOs (Multichannel System Operators) – rides over Wave Broadband pipes, giving consumers more choice in video packages and bringing more value to the broadband connection.

Click here to read more and view.

An Incremental Approach to SDN/NFV

Ken Pyle interviews Andy Randall of Metaswitch
Click to View

“All the intelligence and all the value is moving into software in the cloud,” said Andy Randall, GM of the Networking Business Unit & SVP of Corporate Development at Metaswitch. Randall talks about the transition to commodity hardware, with software defining how that hardware is used. Ultimately, a software-based approach will allow operators to be more nimble in responding to customer and market demands.

Click here to view.

Are the Internet of Things (IoT) & Internet of Everything (IoE) the Same Thing? by Alan Weissberger

An image of an Internet Connected Water shut-off valve is shown.
Click to read more.

For quite some time, Cisco and Qualcomm have used the term Internet of Everything (IoE) to describe what almost everyone else refers to as the Internet of Things (IoT). McKinsey Global Institute’s Disruptive Technologies report calls out the Internet of Things (IoT) as a top disruptive technology trend that will have an impact of as much as $6 Trillion on the world economy by 2025 with 50 billion connected devices!

Click here to read more.

TiECON Flash: U.S. Dept of Commerce & TiE in Partnership to Promote Exports by Alan Weissberger

TiE Silicon Valley President Venk Shukla kicked off TiECon (The Indus Entrepreneurs annual conference) by stating that “wealth creation through entrepreneurship” is TiE’s principal mission (or reason for being).  TiE is also “deeply ingrained in Silicon Valley” through its members (over 11,000 from over 50 countries), who work at start-ups, established companies, VCs and private equity firms. The surprising announcement at TiECon was that the U.S. Dept of Commerce and TiE have entered a partnership to promote TiE U.S. member companies’ products and/or services sold abroad.

Click here to read more.

Some Tweets and Short Thoughts:

  • One step down, two to go. Big thanks to the city for bringing San Jose one step closer to getting 
  • Live demo of a voice to calendar feature that took about 8 hours of development at #mforum14. Wow!
  • Using Amazon Web Services as a virtual lab to test 20M circuits. 1/60th cost. Great idea. #mforum14

The Korner – The Software Driven Car

A picture of an electric vehicle from LIT Motors at CES 2014.
Click to view and read more

As simple and as safe as a car combined with the benefits of a motorcycle is what LIT Motors promises with its C-1 electric vehicle. With a projected range of almost 200 miles, a top speed of over 100 miles per hour and anticipated pricing in the mid-$20,000 range (before tax credits), the C-1 (working name) has the potential to be a game-changer for transportation in urban areas.

The real revolution, however, may be in the way this company has done so much to turn one man’s vision into reality with a relatively small investment (measured in the millions) and in a short amount of time. A handful of people created the prototype on display at CES. They are set up more as a Web 2.0 company than an automobile company, as evidenced by their use of crowd-funding (for their $6,000 electric cargo scooter, Kubo), use of social media and direct relationship with the end customers.

And although they still have to set up manufacturing for mass-production, their relatively small investment gives them the flexibility to try new business models (e.g. licensing, maybe open sourcing, etc.) that would allow others to manufacture and even market their vehicle designs. The interesting thing is that a brand that would license such a vehicle might not even be from the automobile space.

Click here to read more and view the video.

Ethernet Tech Summit Reveals Many Paths to "Open SDN"



“SDN” and “open networking” were very hot topics at last week’s Ethernet Technology Summit in Santa Clara, CA. You might be wondering what SDN has to do with Ethernet, as it’s not specified in any SDN or open networking standards. The answer is that Ethernet is the underlying Layer 1 and 2 (PHY and MAC) for SDN in the data center and campus/enterprise networks- the two most targeted areas for early SDN deployment.  Carrier Ethernet (metro and WAN), along with the OTN, will be the underlying transport network infrastructure for “carrier SDN.”

SDN Session Highlights:

Here are key messages from selected SDN related sessions at this excellent conference:

1. Open-Source Switching:

Roy Chua of SDN Central provided observations and lessons learned from the rollout of open-source switching. The move toward open-source switching and “white box” hardware will bring open source software and hardware to IP routing and Ethernet switching. The Open Compute Project (OCP) Networking activity is a good example. As a result, basic switch designs and software stacks could be available to everyone on a royalty-free basis.

2. Customer-Oriented Approach to Implementing SDN:

Arpit Joshipura of Dell said it was a great year for SDN, with progress on all three architectural models: overlay solutions/network virtualization (e.g. VMware/Nicira), vendor-specific programmable solutions (e.g. Cisco), and “pure SDN” with a centralized controller and the OpenFlow API/protocol (e.g. Big Switch Networks). The graphic below depicts the era of “open computing,” in which any Operating System (or hypervisor) runs over an industry standard architecture for the control plane, with a data plane built from merchant silicon (usually by ODMs in their “white boxes”).

An image depicting the network paradigm shift of Open Networking
Image courtesy of Dell

Dell’s Open Networking model is shown below. It can use any OS running on their “Open Networking Switch” with Broadcom switch silicon used in the data plane forwarding engine, which could be a “white box.”

An image showing Dell's Open Networking model and how it allows a choice of OS and Applications.
Image Courtesy of Dell

Going forward, Arpit sees three different SDN mindsets, each with its own version of open networking:

  • Server/hypervisor- Build switches like servers to attain Open Networking
  • Vendor specific networking- Proprietary thinking with some degree of user programmability
  • Purist view with ONF standards (e.g. OpenFlow v1.3) and open source software (e.g. OpenDaylight). This view requires all new network equipment and is therefore only applicable to greenfield SDN deployments.

Organizational change and (re)training will be a critical issue for companies that deploy SDN.  That’s something this author thinks may take quite a long time. See section 7, Got SDN root?, for more on the new skills required to manage and maintain an open SDN.

3. Expansive Openness Is the Key to SDN and NFV:

Marc Cohn of Ciena identified five attributes of Openness:

  • End users are in control
  • Multi-vendor Interoperability (via implementation of open standards/specifications)
  • Unprecedented choice (as a direct result of multi-vendor interoperability)
  • Not controlled by single vendor (i.e. no vendor lock-in)
  • Vibrant ecosystem

The various layers of an open SDN architecture are depicted in the graphic below.

Ciena's slide regarding openness and SDN.
Image courtesy of Ciena

Looking ahead, Marc sees SDN related standards, open source software and end-user groups all evolving and working together to create a virtuous cycle that will enhance the SDN/NFV ecosystem.  We’ll later provide references and our opinion about SDN openness (or not).

4. Qualifying SDN/OpenFlow Enabled Networks:

Dean Lee of Ixia did an excellent job of positioning SDN using OpenFlow as the “Southbound” API/protocol from the Control Plane to the Data Plane – which are assumed to be implemented in different physical equipment/boxes.

There are three definitive SDN features that make it unique and different from traditional networking:

  • Separation of the control plane from the data plane
  • A centralized controller and view of the network (note that each domain or subnetwork would have its own SDN controller which must communicate with others for path calculations and topology changes)
  • Programmability of the network by external applications, which is done via the “Northbound” API (from the Application to the Control plane)
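The control/data-plane split in the bullets above can be sketched as a toy model: a centralized controller computes forwarding rules and pushes them to switches, which only match and forward, punting table misses back to the controller. This mimics the OpenFlow model in spirit only; the classes and rule format are invented for illustration.

```python
# Toy illustration of the control/data-plane separation. Not a real
# OpenFlow implementation -- the classes and rule format are invented.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (destination) -> action (output port)

    def install(self, dst, out_port):
        # "Southbound" push from the controller into the data plane.
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # The data plane only matches and forwards; unknown flows are
        # punted to the centralized controller (a table miss).
        return self.flow_table.get(dst, "send_to_controller")


class Controller:
    """Centralized network view; programs every switch along a path."""
    def __init__(self, switches):
        self.switches = switches

    def set_path(self, dst, hops):
        # hops is a list of (switch, output_port) pairs along the path.
        for sw, port in hops:
            sw.install(dst, port)


s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller([s1, s2])
ctrl.set_path("10.0.0.2", [(s1, 2), (s2, 1)])
print(s1.forward("10.0.0.2"))   # 2
print(s2.forward("10.0.0.9"))   # send_to_controller
```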

Dean included Network Virtualization via an overlay network as part of “SDN.” (Note that VMware/Nicira doesn’t call that SDN and doesn’t implement OpenFlow; they simply refer to their open networking solution as “Network Virtualization.”) In this “SDN/NV” model, the physical network infrastructure is divided into multiple logical networks to support multiple tenants or end users. Connectivity is established across existing L2/L3 networks via a Network Virtualization Controller (such as NSX/VXLAN from VMware or OpenContrail from Juniper Networks).

Dean said that SDN has a lot to offer telecom carriers, including these benefits:

  • Customization (Value add): custom services, collaboration between applications and the network
  • Simple: operation and management —> lower OPEX
  • Instant: fast service provisioning (and quicker time to deploy new services)
  • Elastic: flexible evolution of infrastructure

SDN evolution challenges for carriers center on a smooth migration from current legacy networks toward SDN. They include the following:

  • Significant installed base of existing carrier networks
  • Co-existence during migration
  • Evolution versus revolution
  • Reliability and scalability of centralized controllers
      – Centralized controllers expose much higher risk than distributed control planes
      – Fast recovery from data path failures
      – Supporting very large carrier networks
  • Flexibility versus performance
      – Software flexibility and performance rely on hardware capability
      – Finding the correct hardware trade-offs
  • Lack of robust testing methodologies for validating various SDN implementations

5. Real Time Insight Needed for Managing SDN and NFV:

Peter Ekner of Napatech (Denmark) said the main advantage of SDN for carriers was agile provisioning of services, while for Network Function Virtualization (NFV) it is flexible deployment of services.

However, agile and flexible provisioning/deployment of services is only possible if the network operator controls the traffic and consumption of those same services. Clearly, that’s not the case today, as it’s the over-the-top video providers that actually generate most of the network traffic, with timing that’s unpredictable. [According to a Cisco study, 50% of all U.S. Internet traffic in 2014 was from Netflix and YouTube. Video will consume 66% of all network traffic in 2018.] As a result, carriers no longer control what services are used and when they are consumed!

A variety of data types and high traffic volumes lead to network complexity, which Peter says can’t be orchestrated by static provisioning or path calculations. He proposes “Real Time Insight” to complement SDN and NFV functionality in a carrier network. Please refer to the figure below:

Napatech's view of the real-time insight needed to react and adapt to changes.
Image courtesy of Napatech

Real Time Insight enables the network to:

  • See What’s Happening as It Happens- Collect real-time data
  • Understand Exactly What Occurred- Store data for historical analysis
  • Detect When Something New Happens – Detect anomalies in real-time
  • Capture Data in Real Time, Store and Detect – Optimize services and network in real-time
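As a minimal sketch of the “detect when something new happens” idea above, the function below flags a traffic sample that deviates sharply from the recent average. The window size, threshold and traffic figures are illustrative assumptions, not Napatech’s method.

```python
# Minimal real-time anomaly detection sketch: flag a sample as anomalous
# when it exceeds the recent rolling average by a large factor. Window,
# factor and traffic numbers are illustrative assumptions.
from collections import deque

def detect_anomalies(samples, window=5, factor=3.0):
    recent, flagged = deque(maxlen=window), []
    for i, s in enumerate(samples):
        if len(recent) == recent.maxlen:
            avg = sum(recent) / len(recent)
            if s > factor * avg:          # sudden traffic spike
                flagged.append(i)
        recent.append(s)
    return flagged

# Steady ~100 Mbps link utilization, then a spike at index 7.
traffic = [100, 98, 103, 101, 99, 102, 100, 950, 101]
print(detect_anomalies(traffic))  # [7]
```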

The result will be a much better Quality of Experience (QoE) for users and improved network security. This is depicted in the illustration below:

Napatech's view of assuring OoE and security to enable new services for users and OTTs.

6. SDN Overlays-Possibilities and Implications:

Sharon Barkai of ConteXtream (Israel and the U.S.) identified several problems with SDN performance, especially scaling up to deal with increased data/video traffic to or from many users.  Sharon claims that, as currently defined, SDN is “unstructured” and can have serious “scale consistency” issues, especially for tier 1 carriers.  

A large network operator (such as AT&T, Verizon, BT, DT, Orange, NTT Com, etc) has to serve millions of customers. These customers are now demanding services to be delivered to multiple endpoints. With the number of subscriber endpoints exploding, a carrier grade SDN infrastructure needs to cope with millions of SDN rules for path computation and packet forwarding. This translates into huge capacity requirements for SDN controllers and switches, with the need for complex rules and flow commands handled by those SDN entities.  

Mr. Barkai said these performance problems could be solved using Network Virtualization Overlays (NVOs). (Note that this is a completely different concept from VMware’s Network Virtualization, which doesn’t use SDN/OpenFlow anywhere.) In this model, NVOs would co-exist with SDN operating at the network edge and with NFV functionality within a carrier network. Communications between those three entities (NVO, SDN, NFV) would be via the exchange of Flow Mapping tables and associated primitives/protocols. This is shown in the figure below:

ConteXtream's solution where overlays complete virtualization.

Adding NVO “standards” to SDN starts with use of the IETF Locator/ID Separation Protocol – LISP (RFC 6830), according to Sharon.  Mr. Barkai said the following rules should be applied to this network overlay/SDN OpenFlow/NFV hybrid architecture:

  • SDN OpenFlow should not cross routing locations
  • SDN flows cross locations by “Map and Encapsulate”
  • Distribution is based purely on the underlay (the real physical network) and the mapping
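The “map and encapsulate” rule above can be sketched roughly as follows: look up the destination endpoint identifier (EID) in a mapping system to find the routing locator (RLOC) of the remote site, then tunnel the packet toward that locator. This follows RFC 6830 in spirit only; the addresses, mapping table and packet fields are invented, and longest-prefix matching is simplified to a /16 lookup.

```python
# Simplified sketch of LISP-style "map and encapsulate" (RFC 6830 in
# spirit only). Addresses, mapping entries and fields are invented.

MAPPING_SYSTEM = {        # EID prefix -> RLOC of the site's border router
    "10.1.0.0/16": "203.0.113.1",
    "10.2.0.0/16": "198.51.100.7",
}

def lookup_rloc(dst_eid):
    """Real LISP does longest-prefix match; match on the /16 here."""
    prefix = ".".join(dst_eid.split(".")[:2]) + ".0.0/16"
    return MAPPING_SYSTEM.get(prefix)

def encapsulate(packet, dst_eid):
    rloc = lookup_rloc(dst_eid)
    if rloc is None:
        return None                                 # no mapping: drop/punt
    return {"outer_dst": rloc, "inner": packet}     # tunnel toward the RLOC

pkt = encapsulate({"dst": "10.2.5.9", "payload": "data"}, "10.2.5.9")
print(pkt["outer_dst"])  # 198.51.100.7
```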

The claim was that with such a distributed networking fabric and overlays, network operators could deliver a variety of network services to a large number of subscribers. A collapsed packet core, managed network service and distributed packet core backhaul were cited as use cases for proof of concept.

7. Got SDN root? Claim your seat at the new SDN table:

Patrick Hubbard of SolarWinds called attention to the critical need for “hands on” network management and control. He believes that with a centralized SDN controller and separate control/data planes, increased troubleshooting complexity will require “old-school” networking experience. Yet the success of any SDN deployment will also require new forward-looking skills for IT networking personnel.  In particular, new SDN training and certifications will be needed for that.

Will “old school” IT departments engage in such training and certification? How long might that take? Or is it too late to teach old dogs new tricks?

Is SDN Really Open?

In contrast to Marc Cohn’s Expansive Openness talk and Arpit Joshipura’s Customer-Oriented Approach to Implementing SDN keynote, which identified only three paths to SDN (pure OpenFlow with a centralized controller, Overlay/Network Virtualization in the compute server, and proprietary SDN models), we now have:

50 Shades Of Open SDN 

[Thanks to Dan Pitt, PhD and Executive Director of the ONF for notifying me of the above article]

Professor Raj Jain, PhD (a multi-decade colleague of this author) read the above article and wrote in an email:

“The article is more about “Open” than about “SDN.” Right now “Open” sells and so everything is labeled “open.” But like an open window, the degree of openness vary. The article pointed this very well with specific examples.

Any idea that is widely adopted will be reshaped to meet the variety of needs of the wide audience and often it may look very different from the original idea. SDN is now undergoing that transformation. The wider the applicability more the “shades.” So while this is confusing now, it goes in favor of SDN that it is being adopted in all these varieties.”

Personal Perspective:

This author believes the term “Open SDN” is an oxymoron, primarily because of the lack of a complete standards suite of protocols/APIs and interfaces.  

First, there is the uneven acceptance of OpenFlow as the Southbound API (it’s just one of several alternative protocols between the Control Plane entity and Data Plane/bare metal switches/white boxes). Many “SDN” vendors have not implemented any version of OpenFlow at all. For those that have, there are often vendor-specific (i.e. proprietary) extensions to OpenFlow v1.3.

In addition, each SDN vendor must choose among many possible protocols for the Northbound API (e.g. OpenStack, CloudStack, etc.) for Orchestration/Management of the SDN controller below (even if it’s implemented as a software module within the same physical compute server).

Also, the East-West protocol between SDN controllers in different networking domains (i.e. SDN controller to SDN controller) has not been standardized by the ONF, and work hasn’t started yet. “Use BGP” is their recommendation at this time for inter-domain communications between SDN controllers.

Finally, there are no standards for control, management, monitoring, fault detection, etc. of the underlying fiber optic transport network.  Those functions were to come from the ONF Optical Transport Working Group, whose charter states: “In close collaboration with the Framework and Architecture working group, the OTWG will develop a reference architecture and framework document describing terminology, modeling of optical transport switches and networks, and utilization of virtualization and network abstraction in optical transport networks.”  Yet we haven’t seen any outputs from that ONF activity.

The lack of a complete set of standards defeats a key point of openness: no vendor lock-in! When I asked three SDN vendors about the lack of multi-vendor interoperability at the Cloud Innovation Forum in March, only Arpit had the courage to reply. He said, “we (the vendors) are working on SDN controller interoperability, and it will come later this year.” Does anyone seriously believe that?

It should also be recognized that the ETSI NFV activity is not producing any open interfaces, protocols, or APIs that can be implemented. They are only specifying functionality for NFV logical entities. The actual NFV standards will come later (???) from ITU-T as “recommendations.”

Yet so many vendors say they are now “NFV compliant.” How can you be compliant if there are no implementable specifications for physical interfaces or protocols to be compliant with?

Bottom line:  We believe that almost every type of “SDN” is in reality vendor specific! “SDN” and “Open Networking” have become hype machines of the highest order! We think this has caused a lot of confusion among potential customers and that has delayed many SDN deployments.

In the near future, we think most of the SDN deployments will be provided by a single “SDN/NV controller” vendor solution which may or may not include ODM built “bare metal switches/white boxes” for the data forwarding plane.

End Note:  Here’s the best video you are ever likely to see on “SDN Industry Analysis.”


It was presented by IT Brand Pulse at Ethernet Tech Summit 2014. Raj Jain found it very entertaining. Hope you do too!

Do you think that this same video, with properly edited subtitles, can apply to any other future technology?  Or is it specific to the ultra hyped SDN?


“Open Networking” panel session at January 2014 IEEE ComSocSCV meeting (organized by this author) http://comsocscv.org/showevent.php?id=1386572933

Viodi View – 03/28/14

A Sneak Peek Innovation Trailer

The ViodiTV app is on display in this image.
Click to View

Innovating by improving business processes was a recurring theme of this week’s Minnesota Telecom Alliance Convention and Tradeshow in Minneapolis. The cloud and web applications are driving many of the efficiency improvements, not only for operators but for their suppliers and consultants.  A few of the stories from MTA are in this issue, while others will be revealed in future issues of the Viodi View. In the meantime, check out this trailer video for the ViodiTV channel that appeared in the convention hotels.

Click here to view.

Social Media – It Takes a Team

Rosie Berg of Pinnacle discusses the importance of the team in creating and maintaining a social media presence.
Click to View

Rosie Berg of Pinnacle, a publisher of directories and web applications, pointed out in her MTA presentation that it takes a team to keep up with social media. She provided an overview of the many online tools that are available for an operator to stay in contact with its prospects and customers. One take-away, in a conversation with her after her presentation, is that social media channels are increasingly important for customer service, even if the dirty laundry is sometimes exposed.  Read the Korner below, for my first-hand experience with the importance of a well-monitored and responsive social media strategy.

Click here to view the interview with Berg.

The Gigabit “Halo Effect”

David Seda of Calix discusses the role of software in transforming an operator's way of doing business.
Click to View

“There is this thing called a halo effect,” said David Seda, Vice President of Marketing for Calix. Seda says that customers perceive operators which offer gigabit services as being cutting-edge. As such, gigabit service is lifting the perceived value of operators’ other services. Seda also talks about the increasing importance of software in transforming the network from a pipe to a broadband ecosystem where the operator can rapidly address customers’ needs; needs that the customers sometimes don’t even know they have.

Click here to view.

Stanford President John Hennessy Educating SBU Alumni on Broad Range of Topics (Stanford = Silicon Valley) by Alan Weissberger

Stanford president John Hennessy and fellow Stony Brook alum, Alan Weissberger at the March 2014 Stony Brook Alumni Northern California Chapter meeting.
Click to Read More

Stanford president, John Hennessy, treated the Stony Brook University Alumni audience to an enlightening seminar on many diverse and interesting topics that spanned education, technology and the value of liberal arts/ reading classic literature. Not only was his presentation crystal clear and very informative, but his relaxed style and down to earth discourse made it most enjoyable as well.

Click here to view.

AT&T Outlines SDN/NFV Focus Areas for Domain 2.0 Initiative by Alan Weissberger

AT&T's vision of a user-defined cloud experience.
Image courtesy of AT&T

As previously reported, AT&T’s future Domain 2.0 network infrastructure must be open, simple, scalable and secure, according to John Donovan, AT&T’s senior executive vice president of technology and network operations. But what does that really mean?  And what are the research initiatives guiding AT&T’s transition to SDN/NFV? Particularly intriguing is the idea of using AT&T’s 4,600 central offices as points of presence around which to create a cloud-based network.

Click here to read more.

NTT Com Leads all Network Providers in Deployment of SDN/OpenFlow; NFV Coming Soon by Alan Weissberger

NTT-Com plans to extend SDN to control its entire WAN, including Cloud as depicted in the illustration
Click to read more

While AT&T has gotten much press for its announced plans to use Software Defined Networking (SDN) to revamp its core network, another large global carrier has been quietly deploying SDN/OpenFlow for almost two years and soon plans to introduce Network Function Virtualization (NFV) into its WAN. NTT Communications (NTT-Com) is using an “SDN overlay” to connect 12 of its cloud data centers (including ones in China and Germany scheduled for launch this year) located on three continents.

Click to read more.

Some Tweets and Short Thoughts:

  • @AjitPaiFCC Norman Borlaug, the man who saved more lives than anyone in history, would have turned 100 yesterday.
  • @MattatACA @RepAnnaEshoo To Speak At #ACA Annual Washington Summit | #AmericanCableAssociation http://www.americancable.org/node/4713 
  • The Viacom negotiations are getting down to the wire. Messages on many independent operators’ web sites express their concerns, including this excerpt: “While we are restricted from talking about specific rates, Viacom demanded a rate increase that is 40 times the rate of inflation over last year’s fees for the same channels you get today!”
  • This is a big deal, as with the power of the independent operators and their local operations, Sprint’s network has the potential to have a reach that extends virtually everywhere.
  • #cornpalace in Mitchell, SD was just mentioned on #TheBlacklist. Here is what it looked like when I visited this Midwestern landmark.
  • Here are some pictures from this week’s MTA Convention

The Korner – From Social Media to Shelbots

Amy and Sarah of Suitable Technologies demonstrate a mobile telepresence solution.
Click to View

The @Viodi Twitter handle is supposed to be for industry-relevant items, but sometimes it is used for things that are seemingly a bit off-topic. Traveling home from MTA and inspired by Rosie Berg of Pinnacle, I figured that Twitter might be the easiest way to draw attention to what looked to be a potential log-jam at American Airlines MSP gate. With only two agents and about 25 people in line, it was clear they needed help.

This incident triggered a thought on how one of the more interesting demonstrations at International CES 2014 could be used to improve customer support when the kiosks and computer systems just don’t seem to work and a human touch is needed. The above video provides a clue as to the potential use of a technology that is intended to eliminate travel and how it could be used to help customer support.

Click here to view the video and read the entire article.

AT&T Outlines SDN/NFV Focus Areas for Domain 2.0 Initiative

Introduction:  The White Paper

As previously reported*, AT&T’s future Domain 2.0 network infrastructure must be open, simple, scalable and secure, according to John Donovan, AT&T’s senior executive vice president of technology and network operations.

* AT&T’s John Donovan talks BIG GAME but doesn’t reveal Game Plan at ONS 2014  

But what does that really mean?  And what are the research initiatives that are guiding AT&T’s transition to SDN/NFV?

Let’s first examine AT&T’s Domain 2.0 white paper.

It specifically states the goal of moving to a virtualized, cloud-based SDN/NFV design built on off-the-shelf components (merchant silicon) and hardware, rejecting the legacy of OSMINE compliance and traditional telecom standards for OSS/BSS.  Yet we could find no mention of the OpenFlow API/protocol.

“In a nutshell, Domain 2.0 seeks to transform AT&T’s networking businesses from their current state to a future state where they are provided in a manner very similar to cloud computing services, and to transform our infrastructure from the current state to a future state where common infrastructure is purchased and provisioned in a manner similar to the PODs used to support cloud data center services. The replacement technology consists of a substrate of networking capability, often called Network Function Virtualization Infrastructure (NFVI) or simply infrastructure that is capable of being directed with software and Software Defined Networking (SDN) protocols to perform a broad variety of network functions and services.”

“This infrastructure is expected to be comprised of several types of substrate. The most typical type of substrate being servers that support NFV, followed by packet forwarding capabilities based on merchant silicon, which we often call white boxes. However it’s envisioned that other specialized network technologies are also brought to bear when general purpose processors or merchant silicon are not appropriate.”

AT&T’s vision of a user-defined cloud experience.
Image courtesy of AT&T

“AT&T services will increasingly become cloud-centric workloads. Starting in data centers (DC) and at the network edges – networking services, capabilities, and business policies will be instantiated as needed over the aforementioned common infrastructure. This will be embodied by orchestrating software instances that can be composed to perform similar tasks at various scale and reliability using techniques typical of cloud software architecture.”

Interview with AT&T’s Soren Telfer:

As a follow up to John Donovan’s ONS Keynote on AT&T’s “user-defined network cloud” (AKA Domain 2.0), we spoke to Soren Telfer, Lead Member of Technical Staff at AT&T’s Palo Alto, CA Foundry. Our intent was to gain insight and perspective on the company’s SDN/NFV research focus areas and initiatives.

Mr. Telfer said that AT&T’s Palo Alto Foundry is examining technical issues that will solve important problems in AT&T’s network.  One of those is the transformation to SDN/NFV so that future services can be cloud based.  While Soren admitted there were many gaps in SDN/NFV standard interfaces and protocols, he said, “Over time the gaps will be filled.”

Soren said that AT&T was working within the  Open Networking Labs (ON.LAB), which is part of the Stanford-UC Berkeley Open Network Research Community.  The ONRC mission from their website:  “As inventors of OpenFlow and SDN, we seek to ‘open up the Internet infrastructure for innovations’ and enable the larger network industry to build networks that offer increasingly sophisticated functionality yet are cheaper and simpler to manage than current networks.”  So for sure, ON.LAB work is based on the OpenFlow API/protocol between the Control and Data Planes (residing in different equipment).

The ON.LAB community is made up of open source developers, organizations and users who all collaborate on SDN tools and platforms to open the Internet and Cloud up to innovation.  They are trying to use a Linux (OS) foundation for open source controllers, according to Soren.  Curiously, AT&T is not listed as an ON.LAB contributor at http://onlab.us/community.html

AT&T’s Foundry Research Focus Areas:

Soren identified four key themes that AT&T is examining in its journey to SDN/NFV:

1.  Looking at new network infrastructures as “distributed systems.”  What problems need to be solved?  Google’s B4 network architecture was cited as an example.

[From a Google authored research paper: http://cseweb.ucsd.edu/~vahdat/papers/b4-sigcomm13.pdf]

“B4 is a private WAN connecting Google’s data centers across the globe. It has a number of unique characteristics:  i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic  demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge.”

2.  Building diverse tools and environments for all future AT&T work on SDN/NFV/open networking. In particular, development, simulation and emulation of the network and its components/functional groupings in a consistent manner.  NTT Com’s VOLT (Versatile OpenFlow ValiDator) was cited as such a simulation tool for that carrier’s SDN based network.  For more on VOLT and NTT Com’s SDN/NFV please refer to: http://viodi.com/2014/03/15/ntt-com-leads-all-network-providers-in-deployment-of-sdnopenflow-nfv-coming-soon/

3.  Activities related to “what if questions.”  In other words, out of the box thinking to potentially use radically new network architecture(s) to deliver new services.  “Network as a social graph” was cited as an example.  The goal is to enable new experiences for AT&T’s customers via new services or additional capabilities to existing services.

Such a true “re-think+” initiative could be related to John Donovan’s reply to a question during his ONS keynote: “We will have new applications and new technology that will allow us to do policy and provisioning as a parallel process, rather than an overarching process that defines and inhibits everything we do.”

+ AT&T has been trying to change its tagline to “Re-think Possible” for some time now.  Yet many AT&T customers believe “Re-think” is impossible for AT&T, as it’s stuck in outdated methods, policies and procedures.  What’s your opinion?

According to Soren, AT&T is looking for the new network’s ability to “facilitate communication between people.”  Presumably, something more than is possible with today’s voice, video conferencing, email or social networks?  Functional or universal tests are being considered to validate such a new network capability.

4.  Overlaying computation on a heterogeneous network system [presumably for cloud computing/storage and control of the Internet of Things (IoT)]. Flexible run times for compute jobs would be an example attribute for cloud computing.  Organizing billions of devices and choosing among meaningful services would be an IoT objective.

What then is the principle role of SDN in all of these research initiatives?  Soren said:

“SDN will help us to organize and manage state.”  That includes correct configuration settings, meeting requested QoS, concurrency, etc.   Another goal is to virtualize many physical network elements (NEs): DNS servers, VoIP servers and other NEs could be deployed as Virtual Machines (VMs).
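To make that virtualization goal concrete, here is a minimal sketch of instantiating NEs such as DNS or VoIP servers as VMs drawn from a catalog rather than deploying dedicated hardware. All class, method and image names here are invented for illustration; this is not AT&T code.

```python
# Hypothetical sketch: network elements (DNS, VoIP servers) instantiated as
# VMs from a catalog instead of being deployed as dedicated hardware.
# All names and sizing values are illustrative assumptions.

class VnfCatalog:
    """Maps a network-function kind to an (assumed) VM image name."""
    IMAGES = {"dns": "dns-server-image", "voip": "voip-server-image"}

class Hypervisor:
    """Stand-in for a commercial server's virtualization layer."""
    def __init__(self):
        self.vms = []

    def launch(self, image, vcpus, memory_gb):
        vm = {"image": image, "vcpus": vcpus, "memory_gb": memory_gb}
        self.vms.append(vm)
        return vm

def instantiate_vnf(kind, hypervisor):
    """Turn a physical NE into a VM-based Virtual Network Function."""
    image = VnfCatalog.IMAGES[kind]
    return hypervisor.launch(image, vcpus=2, memory_gb=4)

hv = Hypervisor()
dns_vm = instantiate_vnf("dns", hv)
voip_vm = instantiate_vnf("voip", hv)
```

The point of the sketch is only that adding another NE becomes a software launch against a shared pool of servers, rather than a hardware installation.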

Soren noted that contemporary network protocols internalize state. For example, the routing data base for paths selected are internally stored in a router. An alternate “distributed systems” approach would be to externalize state such that it would not be internal to each network element.

However, NEs accessing external state would require new state organization and management tools.  He cited Amazon’s Dynamo and Google’s B4 as network architectures AT&T was studying. But creating and deploying protocols that work with external state won’t be here soon.  “We’re looking to replace existing network protocols with those designed for more distributed systems in the next seven or eight years,” he added.
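The internal-versus-external state distinction can be sketched as follows – a hypothetical illustration in the spirit of a Dynamo-style shared store, not AT&T code. Route entries live in a replicated store outside the network elements, so every router reads the same state.

```python
# Hypothetical sketch of "externalized state": forwarding state kept in a
# shared store rather than inside each router's internal RIB.
# Class and key names are invented for illustration.

class ExternalStateStore:
    """Stand-in for a replicated key-value store (Dynamo-style)."""
    def __init__(self):
        self._state = {}

    def put(self, key, value):
        self._state[key] = value

    def get(self, key):
        return self._state.get(key)

class StatelessRouter:
    """A router that consults external state instead of an internal table."""
    def __init__(self, name, store):
        self.name = name
        self.store = store

    def next_hop(self, prefix):
        # The forwarding decision is a lookup in the shared store.
        return self.store.get(("route", prefix))

store = ExternalStateStore()
store.put(("route", "10.0.0.0/8"), "router-b")  # controller writes state once
r1 = StatelessRouter("router-a", store)
r2 = StatelessRouter("router-c", store)
```

Because both routers consult the same store, a single write by the controller changes the forwarding behavior of every element, which is exactly the management simplification the distributed-systems approach is after.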

Summing up, Soren wrote in an email:

“AT&T is working to deliver the User Defined Network Cloud, through which AT&T will open, simplify, scale, and secure the network of the future.  That future network will first and foremost deliver new experiences to users and to businesses.

The User Defined Network Cloud and Domain 2.0, are bringing broad and sweeping organizational and technical changes to AT&T. The AT&T Foundry in Palo Alto is a piece of the broader story inside and outside of the company. At the Foundry, developers and engineers are prototyping potential pieces of the future network where AT&T sees gaps in the current ecosystem. These prototypes utilize the latest concepts from SDN and techniques from distributed computing to answer questions and to point paths towards the future network. In particular, the Foundry is exploring how to best apply SDN to the wide-area network to suit the needs of the User Defined Network Cloud.”

Comment and Analysis:

Soren’s remarks seem to imply AT&T is closely investigating Google’s use of SDN (and some version of OpenFlow or similar protocol) for interconnecting all of its data centers as one huge virtual cloud. It’s consistent with Mr. Donovan saying that AT&T would like to transform its 4,600 central offices into environments that support a virtual networking cloud environment.

After this year’s “beachhead projects,” Mr. Donovan said AT&T will start building out new network platforms in 2015 as part of its Domain 2.0 initiative.   But what Soren talked about was a much longer and greater network transformation.  Presumably, the platforms built in 2015 will be based on the results of the “beachhead projects” that Mr. Donovan mentioned during the Q&A portion of his ONS keynote speech.

Based on its previously referenced Domain 2.0 white paper, we expect the emphasis to be placed on NFV concepts and white boxes, rather than pure SDN/OpenFlow.  Here’s a relevant paragraph related to an “open networking router.”

“Often a variety of device sizes need to be purchased in order to support variances in workload from one location to another. In Domain 2.0, such a router is composed of NFV software modules, merchant silicon, and associated controllers. The software is written so that increasing workload consumes incremental resources from the common pool, and moreover so that it’s elastic: so the resources are only consumed when needed. Different locations are provisioned with appropriate amounts of network substrate, and all the routers, switches, edge caches, and middle-boxes are instantiated from the common resource pool. Such sharing of infrastructure across a broad set of uses makes planning and growing that infrastructure easier to manage.”
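The elastic behavior described in that paragraph can be sketched as follows. This is a toy model under an assumed one-resource-unit-per-Gbps cost; the classes and numbers are invented for illustration and are not AT&T’s implementation.

```python
# Hypothetical sketch of the white paper's "elastic router": software modules
# draw capacity from a common resource pool as workload grows and release it
# when workload falls. All names and the cost model are assumptions.

class ResourcePool:
    """Shared infrastructure pool provisioned at a location."""
    def __init__(self, total_units):
        self.total = total_units
        self.used = 0

    def allocate(self, units):
        if self.used + units > self.total:
            raise RuntimeError("pool exhausted")
        self.used += units

    def release(self, units):
        self.used = max(0, self.used - units)

class ElasticRouterFunction:
    UNITS_PER_GBPS = 1  # assumed cost model for illustration

    def __init__(self, pool):
        self.pool = pool
        self.units = 0

    def scale_to(self, gbps):
        """Grow or shrink against the shared pool to match workload."""
        needed = gbps * self.UNITS_PER_GBPS
        delta = needed - self.units
        if delta > 0:
            self.pool.allocate(delta)
        else:
            self.pool.release(-delta)
        self.units = needed

pool = ResourcePool(total_units=100)
router = ElasticRouterFunction(pool)
router.scale_to(40)   # workload rises: consume incremental resources
router.scale_to(10)   # workload falls: return resources to the pool
```

Routers, switches, edge caches and middle-boxes would all be instances of this pattern drawing on the same pool, which is what makes planning and growing the infrastructure easier.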

We will continue to follow SDN/NFV developments and deployments, particularly related to carriers such as AT&T, NTT, Verizon, Deutsche Telekom, Orange, etc.  Stay tuned…

NTT Com Leads all Network Providers in Deployment of SDN/OpenFlow; NFV Coming Soon


Yukio Ito*, Senior Vice President for Service Infrastructure at NTT Communications

While AT&T has gotten a lot of press for its announced plans to use Software Defined Networking (SDN) to revamp its core network, another large global carrier has been quietly deploying SDN/OpenFlow for almost two years and soon plans to launch Network Function Virtualization (NFV) into its WAN.

NTT Communications (NTT-Com) is using an “SDN overlay” to connect 12 of its cloud data centers (including ones in China and Germany scheduled for launch this year) located on three continents.  This summer, the global network operator plans to deploy NFV in its WAN, based on virtualization technology from its Virtela acquisition last year.

ONS Presentation and Interview:

At a March 4, 2014 Open Networking Summit (ONS) plenary session, Yukio Ito*, Senior Vice President for Service Infrastructure at NTT Communications, described NTT-Com’s use of SDN to reduce management complexity, capex and opex, while reducing time to market for new customers and services.

The SDN overlay inter-connects the data centers used in NTT-Com’s “Enterprise Cloud.”

Diagram of how NTT Com is helping customer Yamaha Motor reduce ICT costs via cloud migration.

Started in June 2012, it was the first private cloud in the world to adopt virtualized network technology.  Enterprise Cloud became available on a global basis in February 2013.  In July 2013, NTT-Com launched the world’s first SDN-based cloud migration service, On-premises Connection.  The service facilitates smooth, flexible transitions to the cloud by connecting customer on-premises systems with NTT-Com’s Enterprise Cloud via an IP-MPLS VPN.  Changes in the interconnected cloud data centers create corresponding changes in NTT-Com’s IP-MPLS VPN (which connects NTT-Com’s enterprise customers to cloud-resident data centers).

NTT-Com’s Enterprise Cloud currently uses SDN/OpenFlow within and between 10 cloud-resident data centers in 8 countries, and will launch two additional locations (Germany and China) in 2014.  The company’s worldwide infrastructure now reaches 196 countries/regions.

NTT-Com chose SDN for faster network provisioning and configuration than manual/semi-automated proprietary systems provided. “In our enterprise cloud, we eliminated cost structures and human error due to manual processes,” Ito-san said.  The OpenFlow protocol has proved useful in helping customers configure VPNs, according to Mr. Ito. “It might just be a small part of the whole network (5 to 10%), but it is an important step in making our network more efficient,” he added.

SDN technology enables NTT-Com’s customers to make changes promptly and flexibly, such as adjusting bandwidth to transfer large data in off-peak hours.  On-demand use helps to minimize the cost of cloud migration because payment for the service, including gateway equipment, is on a per-day basis.

Automated tools are another benefit made possible by SDN and can be leveraged by both NTT-Com and its customers.  One example is letting a customer running a data backup storage service crank up its bandwidth, then throttle back down when the backup is complete and the higher bandwidth is no longer needed. Furthermore, SDN allows customers to retain their existing IP addresses when migrating from their own data centers to NTT-Com’s clouds.
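As an illustration of the backup use case, here is a hedged sketch of what such an automated bandwidth adjustment might look like from the customer’s side. The client API is invented for illustration and is not NTT-Com’s actual interface.

```python
# Hypothetical sketch of on-demand bandwidth for a backup window: request
# extra bandwidth for the duration of the job, then fall back to baseline.
# The SdnBandwidthClient API is an invented stand-in, not a real service API.

from contextlib import contextmanager

class SdnBandwidthClient:
    def __init__(self, baseline_mbps):
        self.baseline_mbps = baseline_mbps
        self.current_mbps = baseline_mbps

    def set_bandwidth(self, mbps):
        # In a real deployment this would call the provider's portal/API.
        self.current_mbps = mbps

    @contextmanager
    def burst(self, mbps):
        """Temporarily raise bandwidth, restoring baseline afterwards."""
        self.set_bandwidth(mbps)
        try:
            yield
        finally:
            self.set_bandwidth(self.baseline_mbps)

client = SdnBandwidthClient(baseline_mbps=100)
with client.burst(1000):
    pass  # run the backup job at the higher rate here
```

The context-manager shape captures the operational point: the higher rate exists only while the job runs, so the customer pays for the burst rather than for permanently over-provisioned capacity.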

In addition to faster provisioning/reconfiguration and CAPEX/OPEX savings, NTT-Com’s SDN deployment allows the carrier to partner with multiple vendors for networking, avoid redundant deployment, simplify system cooperation, and shorten time-to-market, Ito-san said. NTT-Com currently uses SDN Controllers (with OpenFlow and BGP protocols) and data-forwarding (AKA packet-forwarding) equipment made by NEC Corp.

The global carrier plans to use SDN throughout its WAN. A new SDN Controller platform is under study with an open API. “The SDN Controller will look over the entire network, including packet transport and optical networks. It will orchestrate end-to-end connectivity.” Ito-san said.  The SDN-WAN migration will involve several steps, including interconnection with various other networks and equipment that are purpose built to deliver specific services (e.g. CDN, VNO/MVNO, VoIP, VPN, public Internet, etc).

NTT-Com plans to extend SDN to control its entire WAN, including Cloud as depicted in the illustration

NFV Deployment Planned:

NTT Com is further enhancing its network and cloud services with SDN related technology, such as NFV and overlay networks.  In the very near future, the company is looking to deploy NFV to improve network efficiency and utilization. This will be through technology from Virtela, which was acquired in October 2013.

The acquisition of cloud-based network services provider Virtela has enhanced NTT’s portfolio of cloud services and expanded coverage to 196 countries. The carrier plans to add Virtela’s NFV technology to its cloud-based network services this summer to enhance its virtualization capabilities.

“Many of our customers and partners request total ICT solutions. Leveraging NTT Com’s broad service portfolio together with Virtela’s asset-light networking, we will now be able to offer more choices and a single source for all their cloud computing, data networking, security and voice service requirements,” said Virtela President Ron Haigh. “Together, our advanced global infrastructure enables rapid innovation and value for more customers around the world while strengthening our leadership in cloud-based networking services.”

High value added network functions can be effectively realized with NFV, according to Ito-san, especially for network appliances. Ito-san wrote in an email to this author:

“In the case of NFV, telecom companies such as BT, France Telecom/Orange, Telefonica, etc. are thinking about deploying SDN on their networks combined with NFV. They have an interesting evolution of computer network technologies. In their cloud data centers, they have common x86-based hardware. And meanwhile, they have dedicated hardware special-function networking devices using similar technologies that cost more to maintain and are not uniform. I agree with the purpose of an NFV initiative that helps transform those special-function systems to run on common x86-based hardware.  In the carrier markets, the giants need some kind of differentiation. I feel that they can create their own advantage by adding virtualized network functions. Combined with their existing transport, core router infrastructure and multiple data center locations, they can use NFV to create an advantage against competitors.”

NTT’s ONS Demos – Booth #403:

NTT-Com demonstrated three SDN-like technologies at its ONS booth, which I visited:

  1. A Multiple southbound interface control Platform and Portal system or AMPP, a configurable system architecture that accommodates both OpenFlow switches and command line interface (CLI)-based network devices;
  2. Lagopus Switch, a scalable, high-performance and elastic software-based OpenFlow switch that leverages multi-core CPUs and network I/O to achieve 10Gbps level-flow processing; and
  3. The Versatile OpenFlow ValiDator, or VOLT, a first-of-a-kind system that can validate flow entries and analyze network failures in OpenFlow environments.  I found such a simulation tool to be very worthwhile for network operators deploying SDN/OpenFlow. An AT&T representative involved in that company’s SDN migration strategy also spoke highly of this tool.

NEC, NTT, NTT Com, Fujitsu, Hitachi develop SDN technologies under the ‘Open Innovation Over Network Platforms’ (O3 Project):

During his ONS keynote, Mr. Ito described the mission of the O3 Project as “integrated design, operations and management.”  The O3 Project is the world’s first R&D project that seeks to make a variety of wide area network (WAN) elements compatible with SDN, including platforms for comprehensively integrating and managing multiple varieties of WAN infrastructure and applications. The project aims to achieve wide area SDN that will enable telecommunications carriers to reduce the time to design, construct and change networks by approximately 90% when compared to conventional methods.  This will enable service providers to dramatically reduce the time needed to establish and withdraw services. In the future, enterprises will be able to enjoy services by simply installing the specialized application for services, such as a big data application, 8K HD video broadcasting and global enterprise intranet, and at the same time, an optimum network for the services will be provided promptly.

The O3 Project was launched in June 2013, based on research consigned by the Japan Ministry of Internal Affairs and Communications’ Research and Development of Network Virtualization Technology, and has been promoted jointly by the five companies. The five partners said the project defined unified expressions of network information and built a database for handling them, allowing network resources in lower layers such as optical networks to be handled at upper layers such as packet transport networks. This enables the provision of software that allows operation management and control of different types of networks based on common items. These technologies aim to enable telecoms operators to provide virtual networks that combine optical, packet, wireless and other features.

NTT-Com, NEC Corporation and IIGA Co. have jointly established the Okinawa Open Laboratory to develop SDN and cloud computing technologies.  The laboratory, which opened in May 2013, has invited engineers from private companies and academic organizations in Japan and other countries to work at the facility on the development of SDN and cloud-computing technologies and verification for commercial use.  Study results will be distributed widely to the public. Meanwhile, Ito-san invited all ONS attendees to visit that lab if they travel to Japan. That was a very gracious gesture, indeed!

Read more about this research partnership here:

Summary and Conclusion:

“NTT-Com is already providing SDN/Openflow-based services, but that is not where our efforts will end. We will continue to work on our development of an ideal SDN architecture and OpenFlow/SDN controller to offer unique and differentiated services with quick delivery. Examples of these services include: cloud migration, cloud-network automatic interconnection, virtualized network overlay function, NFV, and SDN applying to WAN,” said Mr. Ito. “Moreover, leveraging our position as a leader in SDN, NTT Com aims to spread the benefits of the technology through many communities,” he added.

Addendum:  Arcstar Universal One

NTT-Com this month is planning to launch its Arcstar Universal One Virtual Option service, which uses SDN virtual technology to create and control overlay networks via existing corporate networks or the Internet. Arcstar Universal One initially will be available in 21 countries including the U.S., Japan, Singapore, the U.K., Hong Kong, Germany, and Australia. The number of countries served will eventually expand to 30. NTT-Com says it is the first company to offer such a service.

Arcstar Universal One Virtual Option clients can create flexible, secure, low-cost, on-demand networks simply by installing an app on a PC, smart phone or similar device, or by using an adapter. Integrated management and operation of newly created virtual networks will be possible using the NTT-Com Business Portal, which greatly reduces the time to add or change network configurations.  Studies from NTT-Com show clients can expect to reduce costs by up to 60% and shorten the configuration period by up to 80% compared with conventional methods.

*Yukio Ito is a board member of the Open Networking Foundation and Senior Vice President of Service Infrastructure at NTT Communications Corporation (NTT-Com) in Tokyo, a subsidiary of NTT, one of the largest telecommunications companies in the world.

Virtually Networked: The State of SDN

We have all heard about the hectic activity surrounding several network virtualization initiatives. The potpourri of terms in this space (SDN/OpenFlow/OpenDaylight, etc.) is enough to make one’s head spin. This article will try to lay out the landscape as of the time of writing and explain how some of these technologies are relevant to independent broadband service providers.

In the author’s view – Software Defined Networking (SDN) evolved with the aim of freeing the network operator from dependence on networking equipment vendors for developing new and innovative services and was intended to make networking services simpler to implement and manage.

Software Defined Networking decouples the control and data planes – thereby abstracting the physical architecture from the applications running over it. Network intelligence is centralized and separated from the forwarding of packets.

SDN is the term used for a set of technologies that enable the management of services over computer networks without worrying about the lower level functionality – which is now abstracted away. This theoretically should allow the network operator to develop new services at the control plane without touching the data plane since they are now decoupled.

Network operators can control and manage network traffic via a software controller – mostly without having to physically touch switches and routers. While the physical IP network still exists – the software controller is the “brains” of SDN that drives the IP based forwarding plane. Centralizing this controller functionality allows the operator to programmatically configure and manage this abstracted network topology rather than having to hand configure every node in their network.

SDN provides a set of APIs to configure common network services (such as routing, traffic management and security).

OpenFlow is one standard protocol that defines the communication between such an abstracted control and data plane. OpenFlow was defined by the Open Networking Foundation – and allows direct manipulation of physical and virtual devices. OpenFlow would need to be implemented at both sides of the SDN controller software as well as the SDN-capable network infrastructure devices.
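The match/action model that OpenFlow standardizes can be illustrated with a toy flow table. This simplified sketch omits the priorities, counters and timeouts that the real protocol defines; the rule structure and field names are reduced to the bare idea.

```python
# Minimal sketch of the OpenFlow idea: a controller installs match/action
# rules in a switch's flow table, and the switch forwards packets by table
# lookup. Rules are checked in insertion order; real OpenFlow adds
# priorities, counters and timeouts, which are omitted here.

class FlowTable:
    def __init__(self):
        self.rules = []  # list of (match_dict, action)

    def install(self, match, action):
        """Called by the control plane to program the data plane."""
        self.rules.append((match, action))

    def lookup(self, packet):
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"  # table miss: punt to the controller

table = FlowTable()
# Controller (control plane) programs the switch (data plane):
table.install({"dst_ip": "10.0.0.5"}, "output:port2")
table.install({"eth_type": 0x0806}, "flood")  # e.g. flood ARP frames
```

The table-miss fallback is the key coupling point: packets the switch cannot classify are sent up to the centralized controller, which can then install a new rule, and subsequent packets are handled entirely in the data plane.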

How would SDN impact independent broadband service providers? If SDN lives up to its promise, it could provide the flexibility in networking that telcos have needed for a long time. From a network operations perspective, it has the potential to revolutionize how networks are controlled and managed today – making it a very simple task to manage physical and virtual devices without ever having to change anything in the physical network.

However – these are still early days in the SDN space. Several vendors have implemented software controllers, and the OpenFlow specification appears to be stabilizing. OpenDaylight is an open platform for network programmability to enable SDN. It has just shipped its first release – Hydrogen – which can be downloaded as open source software today. But this is not the only approach to SDN – there are vendor-specific approaches that this author will not cover in this article.

For independent broadband service providers wishing to learn more about SDN – it would be a great idea to download the Hydrogen release of OpenDaylight and play with it – but don’t expect it to provide any production ready functionality. Like the first release of any piece of software there are wrinkles to be ironed out and important features to be written. It would be a great time to get involved if one wants to contribute to the open source community.

For the independent broadband service providers wanting to deploy SDN – it’s not prime-time ready yet – but it’s an exciting and enticing idea that is fast becoming real. Keep a close ear to the ground – SDN might make our lives easier fairly soon.

[Editor’s Note: For more great insight from Kshitij about SDN and other topics, please go to his website at http://www.kshitijkumar.com/]