2014 Hot Interconnects Semiconductor Session Highlights & Takeaways- Part I.

Introduction:

With the Software Defined Networking (SDN), Storage and Data Center movements firmly entrenched, one might believe there’s not much opportunity for innovation in dedicated hardware implemented in silicon.  Several sessions at the 2014 Hot Interconnects conference, especially one from ARM Ltd., indicated that was not the case at all.

With the strong push for open networks, chips have to be much more flexible and agile, as well as more powerful, fast and functionally dense. Of course, there are well known players for specific types of silicon. For example: Broadcom for switch/routers; ARM for CPU cores (also Intel and MIPS/Imagination Technologies); many vendors for Systems on a Chip (SoCs), which include one or more CPU cores, mostly from ARM (Qualcomm, Nvidia, Freescale, etc.); Network Processors (Cavium, LSI-Avago/Intel, PMC-Sierra, EZchip, Netronome, Marvell, etc.); and bus interconnect fabrics (Arteris, Mellanox, PLX/Avago, etc.).

What’s not known is how these types of components, especially SoCs, will evolve to support open networking and software defined networking in telecom equipment (i.e. SDN/NFV).  Some suggestions were made during presentations and a panel session at this year’s excellent Hot Interconnects conference.

We summarize three invited Hot Interconnects presentations related to network silicon in this article. Our follow on Part II article will cover network hardware for SDN/NFV based on an Infonetics presentation and service provider market survey.

  1. Data & Control Plane Interconnect Solutions for SDN & NFV Networks, by Raghu Kondapalli, Director of Strategic Planning at LSI/Avago (Invited Talk)

Open networking, such as SDN (Software Defined Networking) and NFV (Network Function Virtualization), provides software control of many network functions.  NFV enables virtualization of entire classes of network element functions such that they become modular building blocks that may be connected, or chained, together to create a variety of communication services.
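The chaining idea can be sketched in a few lines: treat each virtualized network function (VNF) as a transform on a packet, and a service chain as an ordered composition of those transforms. The function names and packet fields below are illustrative, not from any vendor API.

```python
# Minimal sketch of NFV-style service chaining: each virtualized network
# function (VNF) is a transform on a packet, and a service chain is an
# ordered composition of those transforms. Function names and packet
# fields are illustrative, not from any vendor API.

def firewall(packet):
    """Drop (return None) packets to a blocked port, e.g. telnet."""
    return None if packet["dst_port"] == 23 else packet

def nat(packet):
    """Rewrite a private source address to a public one."""
    return dict(packet, src_ip="203.0.113.10")

def chain(packet, vnfs):
    """Pass a packet through an ordered list of VNFs; stop if dropped."""
    for vnf in vnfs:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet

service_chain = [firewall, nat]
result = chain({"src_ip": "10.0.0.5", "dst_port": 443}, service_chain)
```

Swapping, adding or reordering entries in `service_chain` is all it takes to compose a different service, which is the modularity the NFV model is after.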

Software defined and functionally disaggregated network elements rely heavily on deterministic and secure data and control plane communication within and across the network elements. In these environments the scalability, reliability and performance of the whole network rely heavily on the deterministic behavior of this interconnect.  Increasing network agility and lower equipment prices are causing severe disruption in the networking industry.

A key SDN/NFV implementation issue is how to disaggregate network functions in a given network element (equipment type).  With such functions modularized, they could be implemented in different types of equipment along with dedicated functions (e.g. PHYs to connect to wire-line or wireless networks).  The equipment designer needs to: disaggregate, virtualize, interconnect, orchestrate and manage such network functions.

“Functional coordination and control plane acceleration are the keys to successful SDN deployments,” he said.  Not coincidentally, the LSI/Avago Axxia multicore communication processor family (using an ARM CPU core) is being positioned for SDN and NFV acceleration, according to the company’s website. Other important points made by Raghu:

  • Scale poses many challenges for state management and traffic engineering
  • Traffic Management and Load Balancing are important functions
  • SDN/NFV backbone network components are needed
  • Disaggregated architectures will prevail.
  • Circuit board interconnection (backplane) should consider the traditional passive backplane vs. an active switch fabric.

The Axxia 5516 16-core communications processor was suggested as the SoC to use for an SDN/NFV backbone network interface.  Functions identified included:  Ethernet switching, protocol pre-processing, packet classification (QoS), traffic rate shaping, encryption, security, Precision Time Protocol (IEEE 1588) to synchronize distributed clocks, etc.

Axxia’s multi-core SoCs were said to contain various programmable function accelerators to offer a scalable data and control plane solution.
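Traffic rate shaping, one of the functions listed above, is classically implemented with a token bucket. Below is a minimal sketch (not the Axxia implementation, whose details are proprietary): tokens refill at the configured rate up to a burst size, and a packet conforms only if enough tokens are available to cover its length.

```python
# Minimal token-bucket sketch of traffic rate shaping (not the Axxia
# implementation, whose details are proprietary): tokens refill at the
# configured rate up to a burst size, and a packet conforms only if
# enough tokens are available to cover its length.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes/second
        self.burst = burst_bytes          # bucket depth (max tokens)
        self.tokens = float(burst_bytes)  # start full
        self.last = 0.0                   # time of last refill

    def allow(self, size_bytes, now):
        """True if a packet of size_bytes conforms at time now (seconds)."""
        elapsed = now - self.last
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True    # conforming: forward immediately
        return False       # non-conforming: queue, mark, or drop

shaper = TokenBucket(rate_bps=8000, burst_bytes=1500)  # 8 kbit/s, one MTU of burst
```

A hardware accelerator does the same bookkeeping per queue at line rate; the burst parameter is what lets shaped traffic absorb short packet trains without drops.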

Note:  Avago recently acquired semiconductor companies LSI Corp. and PLX Technology, but has now sold its Axxia Networking Business (originally from LSI, which had acquired Agere in 2007 for $4 billion) to Intel for only $650 million in cash.  Agere Systems (formerly AT&T Microelectronics, at one time the largest captive semiconductor maker in the U.S.) had a market capitalization of about $48 billion when it was spun off from Lucent Technologies in Dec. 2000.

  2. Applicability of Open Flow based connectivity in NFV Enabled Networks, by Srinivasa Addepalli, Fellow and Chief Software Architect, Freescale (Invited Talk)

Mr. Addepalli’s presentation addressed the performance challenges in VMMs (Virtual Machine Monitors) and the opportunities to offload VMM packet processing using SoCs like those from Freescale (another ARM core based SoC).   The VMM layer enables virtualization of networking hardware and exposes each virtual hardware element to VMs.

“Virtualization of network elements reduces operation and capital expenses and provides the ability for operators to offer new network services faster and to scale those services based on demand. Throughput, connection rate, low latency and low jitter are few important challenges in virtualization world. If not designed well, processing power requirements go up, thereby reducing the cost benefits,” according to Addepalli.

He positioned Open Flow as a communication protocol between control/offload layers, rather than the ONF’s API/protocol between the control and data planes (residing in the same or different equipment, respectively).  A new role for Open Flow in VMM and vNF (Virtual Network Function) offloads was described and illustrated.

The applicability of OpenFlow to NFV (see Note 1 below) faces two challenges, according to Mr. Addepalli:

  1. VMM networking
  2. Virtual network data path to VMs
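The offload role described for OpenFlow rests on its basic match/action model, which a few lines can illustrate: a flow table is a priority-ordered list of entries, and the highest-priority entry whose match fields all hold decides the action. Field names here are simplified stand-ins, not actual OpenFlow 1.x match fields.

```python
# Hedged sketch of the OpenFlow match/action model: a flow table is a
# priority-ordered list of entries, and the highest-priority entry whose
# match fields all hold decides the action. Field names are simplified
# stand-ins, not actual OpenFlow 1.x match fields.

def lookup(flow_table, packet):
    """Return the action of the highest-priority matching entry."""
    for priority, match, action in sorted(flow_table,
                                          key=lambda e: e[0], reverse=True):
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "CONTROLLER"   # table miss: punt to the control layer

flow_table = [
    (200, {"dst_ip": "10.0.0.2"}, "OUTPUT:port2"),
    (100, {"eth_type": 0x0800},   "OUTPUT:port1"),
]
```

In the offload scenario Mr. Addepalli described, the VMM would push such entries down to SoC hardware so that established flows bypass the hypervisor's software switch entirely.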

Note 1.  The ETSI NFV Industry Specification Group (ISG) is not considering the use of ONF’s OpenFlow, or any other protocol, for NFV at this time.  Its work scope includes reference architectures and functional requirements, but not protocol/interface specifications.  The ETSI NFV ISG will reach the end of Phase 1 by December 2014, with the publication of the remaining sixteen deliverables.

“To be successful, NFV must address performance challenges, which can best be achieved with silicon solutions,” Srinivasa concluded.   [The problem with that statement is that the protocols/interfaces to be used for fully standardized NFV have not been specified by ETSI or any standards body.  Hence, no one knows the exact combination of NFV functions that have to perform well.]

  3. The Impact of ARM in Cloud and Networking Infrastructure, by Bob Monkman, Networking Segment Marketing Manager at ARM Ltd.

Bob revealed that ARM is innovating way beyond the CPU core it’s been licensing for years.  There are hardware accelerators, a cache coherent network and various types of network interconnects that have been combined into a single silicon block, as shown in the figure below:

Image courtesy of ARM - innovating beyond the core.

Bob said something I thought was quite profound and dispels the notion that ARM is just a low power CPU core producer: “It’s not just about a low power processor – it’s what you put around it.”  As a result, ARM cores are being included in SoC vendor silicon for both networking and storage components. Those SoC companies, including LSI/Avago (Axxia) and Freescale (see above), can leverage their existing IP by adding their own cell designs for specialized networking hardware functions (identified at the end of this article in the Addendum).

Bob noted that the ARM ecosystem is conducive to the disruption now being experienced in the IT industry, with software control of so many types of equipment.  The evolving network infrastructure – SDN, NFV and other Open Networking – is all about reducing total cost of ownership and enabling new services with smart and adaptable building blocks.  That’s depicted in the following illustration:

Evolving infrastructure is reducing costs and enabling new services.
Image courtesy of ARM.

Bob stated that one SoC size does not fit all.  For example, one type of SoC can contain: a high performance CPU, power management, premises networking, and storage & I/O building blocks.  One for SDN/NFV might instead include: a high performance CPU, power management, I/O including wide area networking interfaces, and specialized hardware networking functions.

Monkman articulated very well what most already know: that Networking and Server equipment are often being combined in a single box (they’re “colliding,” he said).  [In many cases, compute servers are running network virtualization (e.g. VMware), acceleration, packet pre-processing, and/or control plane software (SDN model).]  Flexible intelligence is required on an end-to-end basis for this to work out well.  The ARM business model was said to enable innovation and differentiation, especially since the ARM CPU core has reached the 64 bit “inflection point.”

ARM is working closely with the Linaro Networking and Enterprise Groups. Linaro is a non-profit industry group creating open source software that runs on ARM CPU cores.  Member companies fund Linaro and provide half of its engineering resources as assignees who work full time on Linaro projects. These assignees combined with over 100 of Linaro’s own engineers create a team of over 200 software developers.

Bob said that Linaro is creating an optimized, open-source platform software for scalable infrastructure (server, network & storage).  It coordinates and multiplies members’ efforts, while accelerating product time to market (TTM).  Linaro open source software enables ARM partners (licensees of ARM cores) to focus on innovation and differentiated value-add functionality in their SoC offerings.

Author’s Note:  The Linaro Networking Group (LNG) is an autonomous segment focused group that is responsible for engineering development in the networking space. The current mix of LNG engineering activities includes:

  • Virtualization support with considerations for real-time performance, I/O optimization, robustness and heterogeneous operating environments on multi-core SoCs.
  • Real-time operations and the Linux kernel optimizations for the control and data plane
  • Packet processing optimizations that maximize performance and minimize latency in data flows through the network.
  • Dealing with legacy software and mixed-endian issues prevalent in the networking space
  • Power Management
  • Data Plane Programming APIs

For more information: https://wiki.linaro.org/LNG
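A recurring theme behind the packet-processing optimizations in the LNG list above is batching: amortizing a fixed per-call overhead (interrupt, system call, descriptor-ring access) across a burst of packets. The sketch below is a generic illustration of that arithmetic, not ODP or DPDK code, and all numbers are invented.

```python
# Generic illustration (not the ODP or DPDK API) of why data-plane code
# batches packets: a fixed per-call overhead (interrupt, system call,
# descriptor-ring access) is paid once per burst rather than per packet.
# All numbers are invented for illustration.

def per_packet_cost(fixed_overhead_ns, work_ns, burst_size):
    """Average per-packet cost when overhead is amortized over a burst."""
    return work_ns + fixed_overhead_ns / burst_size

single = per_packet_cost(fixed_overhead_ns=1000, work_ns=100, burst_size=1)
batched = per_packet_cost(fixed_overhead_ns=1000, work_ns=100, burst_size=32)
# Batching 32 packets cuts the average from 1100 ns to about 131 ns per packet.
```

This is why poll-mode, burst-oriented receive loops dominate data-plane frameworks even though they burn a core spinning.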


OpenDataPlane (ODP) http://www.opendataplane.org/ was described by Bob as a “truly cross-platform, truly open-source and open contribution interface.” From the ODP website:

ODP embraces and extends existing proprietary, optimized vendor-specific hardware blocks and software libraries to provide inter-operability with minimal overhead. Initially defined by members of the Linaro Networking Group (LNG), this project is open to contributions from all individuals and companies who share an interest in promoting a standard set of APIs to be used across the full range of network processor architectures available.

Author’s Note:   There’s a similar project from Intel called DPDK, the Data Plane Development Kit, which an audience member referenced during Q&A.  We wonder if those APIs are viable alternatives to, or can be used in conjunction with, the ONF’s OpenFlow API.


Next Generation Virtual Network Software Platforms, along with network operator benefits, are illustrated in the following graphic:

An image depicting the Next-Gen virtualized network software platforms.
Image courtesy of ARM.

Bob Monkman’s Summary:

  • Driven by total cost of ownership, the data center workload shift is leading to  more optimized and diverse silicon solutions
  • Network infrastructure is also well suited for the same highly integrated, optimized and scalable solutions ARM’s SoC partners understand and are positioned to deliver
  • Collaborative business model supports “one size does not fit all approach,” rapid pace of innovation, choice and diversity
  • Software ecosystem (e.g. Linaro open source) is developing quickly to support target markets
  • ARM ecosystem is leveraging standards and open source software to accelerate deployment readiness

Addendum:

In a post conference email exchange, I suggested several specific networking hardware functions that might be implemented in a SoC (with one or more ARM CPU cores).  Those include:  encryption, packet classification, deep packet inspection, security functions, intra-chip or inter-card interface/fabric, fault & performance monitoring, and error counters.

Bob replied: “Yes, security acceleration such as SSL operations; counters of various sorts -yes; less common on the fault notification and performance monitoring. A recent example is found in the Mingoa acquisition, see: http://www.microsemi.com/company/acquisitions ”

…………………………………………………………………….

End NOTE:  Stay tuned for Part II which will cover Infonetics’ Michael Howard’s presentation on Hardware and market trends for SDN/NFV.

2014 Hot Interconnects Highlight: Achieving Scale & Programmability in Google's Software Defined Data Center WAN

Introduction:

Amin Vahdat, PhD, Distinguished Engineer and Lead Network Architect at Google, delivered the opening keynote at 2014 Hot Interconnects, held August 26-27 in Mountain View, CA. His talk presented an overview of the design and architectural requirements for bringing Google’s shared infrastructure services to external customers with the Google Cloud Platform.

The wide area network underpins storage, distributed computing, and security in the Cloud. The Cloud itself is appealing for a variety of reasons:

  • On demand access to compute servers and storage
  • Easier operational model than premises based networks
  • Much greater up-time, i.e. five 9’s reliability; fast failure recovery without human intervention, etc
  • State of the art infrastructure services, e.g. DDoS prevention, load balancing, storage, complex event & stream processing, specialised data aggregation, etc
  • Different programming models unavailable elsewhere, e.g. low latency, massive IOPS, etc
  • New capabilities; not just delivering old/legacy applications cheaper

Andromeda- more than a galaxy in space:

Andromeda – Google’s code name for its managed virtual network infrastructure – is the enabler of Google’s cloud platform, which provides many services to simultaneous end users. Andromeda provides Google’s customers/end users with robust performance, low latency and security services that are as good as or better than private, premises based networks. Google has long focused on shared infrastructure among multiple internal customers and services, and on delivering scalable, highly efficient services to a global population.

An image of Google's Andromeda Controller diagram.
Image courtesy of Google.

“Google’s (network) infrastructure services run on a shared network,” Vahdat said. “They provide the illusion of individual customers/end users running their own network, with high-speed interconnections, their own IP address space and Virtual Machines (VMs),” he added.  [Google has been running shared infrastructure since at least 2002 and it has been the basis for many commonly used scalable open-source technologies.]

From Google’s blog:

“Andromeda’s goal is to expose the raw performance of the underlying network while simultaneously exposing network function virtualization (NFV). We expose the same in-network processing that enables our internal services to scale while remaining extensible and isolated to end users. This functionality includes distributed denial of service (DDoS) protection, transparent service load balancing, access control lists, and firewalls. We do this all while improving performance, with more enhancements coming.  Hence, Andromeda itself is not a Cloud Platform networking product; rather, it is the basis for delivering Cloud Platform networking services with high performance, availability, isolation, and security.”

Google uses its own versions of SDN and NFV to orchestrate provisioning, high availability, and to meet or exceed application performance requirements for Andromeda. The technology must be distributed throughout the network, which is only as strong as its weakest link, according to Amin.  “SDN” (Software Defined Networking) is the underlying mechanism for Andromeda. “It controls the entire hardware/software stack, QoS, latency, fault tolerance, etc.”

“SDN’s” fundamental premise is the separation of the control plane from the data plane, Google and everyone else agrees on that. But not much else!  Amin said the role of “SDN” is overall co-ordination and orchestration of network functions. It permits independent evolution of the control and data planes. Functions identified under SDN supervision were the following:

  • High performance IT and network elements: NICs, packet processors, fabric switches, top of rack switches, software, storage, etc.
  • Audit correctness (of all network and compute functions performed)
  • Provisioning with end to end QoS and SLA’s
  • Ensuring high availability (and reliability)

“SDN” in Andromeda–Observations and Explanations:

“A logically centralized hierarchical control plane beats peer-to-peer (control plane) every time,” Amin said. Packet/frame forwarding in the data plane can run at network link speed, while the control plane can be implemented in commodity hardware (servers or bare metal switches), with scaling as needed. The control plane requires 1% of the overhead of the entire network, he added.

As expected, Vahdat did not reveal any of the APIs, protocols, or interface specs that Google uses for its version of “SDN” – in particular, the API between the control and data planes (Google has never endorsed the ONF-specified OpenFlow v1.3). He also didn’t detail how the logically centralized, but likely geographically distributed, control plane works.

Amin said that Google was making “extensive use of NFV (Network Function Virtualization) to virtualize SDN.” Andromeda NFV functions, illustrated in the above block diagram, include: Load balancing, DoS, ACLs, and VPN. New challenges for NFV include: fault isolation, security, DoS, virtual IP networks, mapping external services into name spaces and balanced virtual systems.
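Of the NFV functions listed, transparent load balancing is the easiest to sketch: hash a flow's 5-tuple so that every packet of a flow lands on the same backend VM. The hashing scheme below is a generic illustration, not Andromeda's (Google has not published its internals), and the backend names are invented.

```python
# Generic sketch of a transparent load-balancing NFV function: hash a
# flow's 5-tuple so every packet of a flow lands on the same backend VM.
# This is an illustration, not Andromeda's scheme (which is unpublished);
# backend names are invented.
import hashlib

def pick_backend(flow, backends):
    """Deterministically map a 5-tuple flow to one backend."""
    key = "|".join(str(flow[k]) for k in
                   ("src_ip", "src_port", "dst_ip", "dst_port", "proto"))
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return backends[digest % len(backends)]

backends = ["vm-a", "vm-b", "vm-c"]
flow = {"src_ip": "10.1.1.1", "src_port": 5555,
        "dst_ip": "10.2.2.2", "dst_port": 80, "proto": "tcp"}
```

Determinism is the point: because the mapping is a pure function of the flow key, any instance of the balancer makes the same choice, so the function can run in-network without per-flow state replication.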

Managing the Andromeda infrastructure requires new tools and skills, Vahdat noted. “It turns out that running a hundred or a thousand servers is a very difficult operation. You can’t hire people out of college who know how to operate a hundred or a thousand servers,” Amin said. Tools are often designed for homogeneous environments and individual systems. Human reaction time is too slow to deliver “five nines” of uptime, maintenance outages are unacceptable, and the network becomes a bottleneck and source of outages.

Power and cooling are the major costs of a global data center and networking infrastructure like Google’s. “That’s true of even your laptop at home if you’re running it 24/7. At Google’s mammoth scale, that’s very apparent,” Vahdat said.

Applications require real-time high performance and low-latency communications to virtual machines. Google delivers those capabilities via its own Content Delivery Network (CDN).  Google uses the term “cluster networking” to describe huge switch/routers which are purpose-built out of cost efficient building blocks.

In addition to high performance and low latency, users may also require service chaining and load-balancing, along with extensibility (the capability to increase or reduce the number of servers available to applications as demand requires). Security is also a huge requirement. “Large companies are constantly under attack. It’s not a question of whether you’re under attack but how big is the attack,” Vahdat said.

[“Security will never be the same again. It’s a losing battle,” said Martin Casado, PhD during his Cloud Innovation Summit keynote on March 27, 2014]

Google has a global infrastructure, with data centers and points of presence worldwide to provide low-latency access to services locally, rather than requiring customers to access a single point of presence. Google’s software defined WAN (backbone private network) was one of the first networks to use “SDN.” In operation for almost three years, it is larger and growing faster than Google’s customer-facing Internet connectivity; the traffic it carries between Google’s cloud resident data centers is comparable to the data traffic within a premises based data center, according to Vahdat.

Note 1.   Please refer to this article: Google’s largest internal network interconnects its Data Centers using Software Defined Network (SDN) in the WAN

“SDN” opportunities and challenges include:

  • Logically centralized network management- a shift from fully decentralized, box to box communications
  • High performance and reliable distributed control
  • Eliminate one-off protocols (not explained)
  • Definition of an API that will deliver NFV as a service

Cloud Caveats:

While Vahdat believes in the potential and power of cloud computing, he says that moving to the cloud (from premises based data centers) still poses all the challenges of running an IT infrastructure. “Most cloud customers, if you poll them, say the operational overhead of running on the cloud is as hard or harder today than running on your own infrastructure,” Vahdat said.

“In the future, cloud computing will require high bandwidth, low latency pipes.” Amin cited a “law” this author had never heard of: “1M bit/sec of I/O is required for every 1MHz of CPU processing (computations).” In addition, the cloud must provide rapid network provisioning and very high availability, he added.
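Taking the cited rule of thumb at face value, a couple of lines show what it implies for a modern server; the server configuration below is a hypothetical example, not a Google machine.

```python
# Sanity check of the cited balanced-system rule of thumb: 1 Mbit/s of
# network I/O per 1 MHz of CPU. The server configuration is a
# hypothetical example, not a Google machine.

def balanced_io_gbps(cores, ghz_per_core):
    """Network bandwidth (Gbit/s) a balanced server needs under the rule."""
    total_mhz = cores * ghz_per_core * 1000   # aggregate clock in MHz
    return total_mhz * 1e6 / 1e9              # 1 Mbit/s per MHz -> Gbit/s

io_needed = balanced_io_gbps(cores=16, ghz_per_core=2.5)
# A 16-core, 2.5 GHz server would call for about 40 Gbit/s of I/O.
```

That a commodity server would need tens of Gbit/s helps explain Vahdat's point, echoed in the Addendum below, that most distributed computations today under-provision I/O.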

Network switch silicon and NPUs should focus on:

  • Hardware/software support for more efficient read/write of switch state
  • Increasing port density
  • Higher bandwidth per chip
  • NPUs must provide much greater than twice the performance for the same functionality as general purpose microprocessors and switch silicon.

Note: Two case studies were presented which are beyond the scope of this article to review.  Please refer to a related article on 2014 Hot Interconnects Death of the God Box

Vahdat’s Summary:

Google is leveraging its decade plus experience in delivering high performance shared IT infrastructure in its Andromeda network.  Logically centralized “SDN” is used to control and orchestrate all network and computing elements, including: VMs, virtual (soft) switches, NICs, switch fabrics, packet processors, cluster routers, etc.  Elements of NFV are also being used with more expected in the future.

References:

http://googlecloudplatform.blogspot.com/2014/04/enter-andromeda-zone-google-cloud-platforms-latest-networking-stack.html

https://www.youtube.com/watch?v=wpin6GKpDm8

http://gigaom.com/2014/04/02/google-launches-andromeda-a-software-defined-network-underlying-its-cloud/

http://virtualizationreview.com/articles/2014/04/03/google-andromeda.aspx

http://community.comsoc.org/blogs/alanweissberger/martin-casado-how-hypervisor-can-become-horizontal-security-layer-data-center

http://www.convergedigest.com/2014/03/ons-2014-google-keynote-software.html

https://www.youtube.com/watch?v=n4gOZrUwWmc

http://cseweb.ucsd.edu/~vahdat/

Addendum:  Amdahl’s Law

In a post conference email to this author, Amin wrote:

Here are a couple of references for Amdahl’s “law” on balanced system design:

Both essentially argue that for modern parallel computation, we need a fair amount of network I/O to keep the CPU busy (rather than stalled waiting for I/O to complete).
Most distributed computations today substantially under provision IO, largely because of significant inefficiency in the network software stack (RPC, TCP, IP, etc.) as well as the expense/complexity of building high performance network interconnects.  Cloud infrastructure has the potential to deliver balanced system infrastructure even for large-scale distributed computation.

Thanks, Amin

NTT Com Leads all Network Providers in Deployment of SDN/OpenFlow; NFV Coming Soon

Introduction:

Yukio Ito*, Senior Vice President for Service Infrastructure at NTT Communications

While AT&T has gotten a lot of press for its announced plans to use Software Defined Networking (SDN) to revamp its core network, another large global carrier has been quietly deploying SDN/OpenFlow for almost two years and soon plans to launch Network Function Virtualization (NFV) into its WAN.

NTT Communications (NTT-Com) is using an “SDN overlay” to connect 12 of its cloud data centers (including ones in China and Germany scheduled for launch this year) located on three different continents.  This summer, the global network operator plans to deploy NFV in its WAN, based on virtualization technology from its Virtela acquisition last year.

ONS Presentation and Interview:

At a March 4, 2014 Open Networking Summit (ONS) plenary session, Yukio Ito*, Senior Vice President for Service Infrastructure at NTT Communications, described NTT-Com’s use of SDN to reduce management complexity, capex and opex, while reducing time to market for new customers and services.

The SDN overlay inter-connects the data centers used in NTT-Com’s “Enterprise Cloud.”

Diagram of how NTT Com is helping customer Yamaha Motor reduce ICT costs via cloud migration.

Started in June 2012, it was the first private cloud in the world to adopt virtualized network technology.  Enterprise Cloud became available on a global basis in February 2013.  In July 2013, NTT-Com launched the world’s first SDN-based cloud migration service- On-premises Connection.  The service facilitates smooth, flexible transitions to the cloud by connecting customer on-premises systems with NTT Com’s Enterprise Cloud via an IP-MPLS VPN.  Changes in the interconnected cloud data centers create changes in NTT-Com’s IP-MPLS VPN (which connects NTT-Com’s enterprise customers to cloud resident data centers).

NTT-Com’s Enterprise Cloud currently uses SDN/OpenFlow within and between 10 cloud resident data centers in 8 countries, and will launch two additional locations (Germany and China) within 2014.  The company’s worldwide infrastructure now reaches 196 countries/regions.

NTT-Com chose SDN for faster network provisioning and configuration than manual/semi-automated proprietary systems provided. “In our enterprise cloud, we eliminated cost structures and human error due to manual processes,” Ito-san said.  The OpenFlow protocol has proved useful in helping customers configure VPNs, according to Mr. Ito. “It might just be a small part of the whole network (5 to 10%), but it is an important step in making our network more efficient,” he added.

SDN technology enables NTT-Com’s customers to make changes promptly and flexibly, such as adjusting bandwidth to transfer large data in off-peak hours.  On-demand use helps to minimize the cost of cloud migration because payment for the service, including gateway equipment, is on a per-day basis.

Automated tools are another benefit made possible by SDN and can be leveraged by both NTT-Com and its customers.  One example is the ability to let a customer running a data backup storage service crank up its bandwidth, then throttle back down when the backup is complete and the higher bandwidth is no longer needed. Furthermore, SDN also allows customers to retain their existing IP addresses when migrating from their own data centers to NTT-Com’s clouds.
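A toy calculation shows why the per-day model matters for the backup scenario above; the prices and bandwidth figures are invented placeholders, not NTT Com tariffs.

```python
# Toy cost comparison for the on-demand, per-day model described above:
# bump bandwidth only for backup days instead of provisioning for peak
# all month. Prices and bandwidth figures are invented placeholders,
# not NTT Com tariffs.

def monthly_cost(base_mbps, burst_mbps, burst_days, price_per_mbps_day, days=30):
    """Cost when paying per Mbps per day, bursting only on burst_days."""
    base = base_mbps * days * price_per_mbps_day
    burst_extra = (burst_mbps - base_mbps) * burst_days * price_per_mbps_day
    return base + burst_extra

on_demand = monthly_cost(100, 1000, burst_days=4, price_per_mbps_day=0.5)
flat_peak = monthly_cost(1000, 1000, burst_days=0, price_per_mbps_day=0.5)
```

With these placeholder numbers, bursting to 1 Gbps for only four backup days costs a fraction of provisioning 1 Gbps flat for the month, which is the economic argument for on-demand bandwidth.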

In addition to faster provisioning/reconfiguration and CAPEX/OPEX savings, NTT-Com’s SDN deployment enables the carrier to partner with multiple vendors for networking, avoid redundant deployment, simplify system cooperation, and shorten time-to-market, Ito-san said. NTT-Com is currently using SDN Controllers (with OpenFlow and BGP protocols) and Data Forwarding (AKA Packet Forwarding) equipment made by NEC Corp.

The global carrier plans to use SDN throughout its WAN. A new SDN Controller platform is under study with an open API. “The SDN Controller will look over the entire network, including packet transport and optical networks. It will orchestrate end-to-end connectivity.” Ito-san said.  The SDN-WAN migration will involve several steps, including interconnection with various other networks and equipment that are purpose built to deliver specific services (e.g. CDN, VNO/MVNO, VoIP, VPN, public Internet, etc).

NTT-Com plans to extend SDN to control its entire WAN, including Cloud as depicted in the illustration

NFV Deployment Planned:

NTT Com is further enhancing its network and cloud services with SDN related technology, such as NFV and overlay networks.  In the very near future, the company is looking to deploy NFV to improve network efficiency and utilization. This will be through technology from Virtela, which was acquired in October 2013.

The acquisition of cloud-based network services provider Virtela has enhanced NTT’s portfolio of cloud services and expanded coverage to 196 countries. The carrier plans to add Virtela’s NFV technology to its cloud-based network services this summer to enhance its virtualization capabilities.

“Many of our customers and partners request total ICT solutions. Leveraging NTT Com’s broad service portfolio together with Virtela’s asset-light networking, we will now be able to offer more choices and a single source for all their cloud computing, data networking, security and voice service requirements,” said Virtela President Ron Haigh. “Together, our advanced global infrastructure enables rapid innovation and value for more customers around the world while strengthening our leadership in cloud-based networking services.”

High value added network functions can be effectively realized with NFV, according to Ito-san, especially for network appliances. Ito-san wrote in an email to this author:

“In the case of NFV, telecom companies such as BT, France Telecom/Orange, Telefonica, etc. are thinking about deploying SDN on their networks combined with NFV. They have an interesting evolution of computer network technologies. In their cloud data centers, they have common x86-based hardware. And meanwhile, they have dedicated hardware special-function networking devices using similar technologies that cost more to maintain and are not uniform. I agree with the purpose of an NFV initiative that helps transform those special-function systems to run on common x86-based hardware.  In the carrier markets, the giants need some kind of differentiation. I feel that they can create their own advantage by adding virtualized network functions. Combined with their existing transport, core router infrastructure and multiple data center locations, they can use NFV to create an advantage against competitors.”

NTT’s ONS Demos - Booth #403:

NTT-Com demonstrated three SDN-like technologies at its ONS booth, which I visited:

  1. A Multiple southbound interface control Platform and Portal system or AMPP, a configurable system architecture that accommodates both OpenFlow switches and command line interface (CLI)-based network devices;
  2. Lagopus Switch, a scalable, high-performance and elastic software-based OpenFlow switch that leverages multi-core CPUs and network I/O to achieve 10 Gbps-level flow processing; and
  3. The Versatile OpenFlow ValiDator or VOLT, a first of a kind system that can validate flow entries and analyze network failures in OpenFlow environments.  I found such a simulation tool to be very worthwhile for network operators deploying SDN/Open Flow. An AT&T representative involved in that company’s SDN migration strategy also spoke highly of this tool.
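The flow-validation idea behind VOLT can be illustrated with a toy example (this is this author's sketch, not NTT's implementation): flag flow entries whose match fields overlap at the same priority but prescribe different actions, since such a table is ambiguous to the switch.

```python
# Hypothetical sketch of the kind of check a flow validator such as VOLT
# might perform: find flow entries whose matches overlap at equal
# priority but specify different actions (an ambiguous flow table).

def overlaps(match_a, match_b):
    """Two matches overlap if every field they both specify agrees."""
    shared = set(match_a) & set(match_b)
    return all(match_a[f] == match_b[f] for f in shared)

def find_conflicts(flow_table):
    """Return pairs of entries that overlap at the same priority
    yet prescribe different actions."""
    conflicts = []
    for i, a in enumerate(flow_table):
        for b in flow_table[i + 1:]:
            if (a["priority"] == b["priority"]
                    and overlaps(a["match"], b["match"])
                    and a["actions"] != b["actions"]):
                conflicts.append((a, b))
    return conflicts

table = [
    {"priority": 10, "match": {"in_port": 1}, "actions": ["output:2"]},
    {"priority": 10, "match": {"in_port": 1, "vlan": 100}, "actions": ["drop"]},
    {"priority": 5,  "match": {}, "actions": ["controller"]},
]

print(len(find_conflicts(table)))  # 1: the first two entries conflict
```

A real validator must also model inter-switch topology to trace end-to-end paths, which is what makes a tool like VOLT valuable to operators.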

NEC, NTT, NTT Com, Fujitsu, Hitachi develop SDN technologies under the ‘Open Innovation Over Network Platforms’ (O3 Project):

During his ONS keynote, Mr. Ito described the mission of the O3 Project as “integrated design, operations and management.”  The O3 Project is the world’s first R&D project that seeks to make a variety of wide area network (WAN) elements compatible with SDN, including platforms for comprehensively integrating and managing multiple varieties of WAN infrastructure and applications. The project aims to achieve wide area SDN that will enable telecommunications carriers to reduce the time to design, construct and change networks by approximately 90% compared to conventional methods.  This will enable service providers to dramatically reduce the time needed to establish and withdraw services. In the future, enterprises will be able to use services such as big data applications, 8K HD video broadcasting and global enterprise intranets simply by installing the specialized application, while an optimum network for those services is provisioned promptly.

The O3 Project was launched in June 2013, based on research consigned by the Japan Ministry of Internal Affairs and Communications’ Research and Development of Network Virtualization Technology, and has been promoted jointly by the five companies. The five partners said the project defined unified expressions of network information and built a database for handling them, allowing network resources in lower layers such as optical networks to be handled at upper layers such as packet transport networks. This enables the provision of software that allows operation management and control of different types of networks based on common items. These technologies aim to enable telecoms operators to provide virtual networks that combine optical, packet, wireless and other features.

NTT-Com, NEC Corporation and IIGA Co. have jointly established the Okinawa Open Laboratory to develop SDN and cloud computing technologies.  The laboratory, which opened in May 2013, has invited engineers from private companies and academic organizations in Japan and other countries to work at the facility on the development of SDN and cloud-computing technologies and verification for commercial use.  Study results will be distributed widely to the public. Meanwhile, Ito-san invited all ONS attendees to visit that lab if they travel to Japan. That was a very gracious gesture, indeed!

Read more about this research partnership here:

Summary and Conclusion:

“NTT-Com is already providing SDN/Openflow-based services, but that is not where our efforts will end. We will continue to work on our development of an ideal SDN architecture and OpenFlow/SDN controller to offer unique and differentiated services with quick delivery. Examples of these services include: cloud migration, cloud-network automatic interconnection, virtualized network overlay function, NFV, and SDN applying to WAN,” said Mr. Ito. “Moreover, leveraging our position as a leader in SDN, NTT Com aims to spread the benefits of the technology through many communities,” he added.

Addendum:  Arcstar Universal One

NTT-Com this month is planning to launch its Arcstar Universal One Virtual Option service, which uses SDN virtual technology to create and control overlay networks via existing corporate networks or the Internet. Arcstar Universal One initially will be available in 21 countries including the U.S., Japan, Singapore, the U.K., Hong Kong, Germany, and Australia. The number of countries served will eventually expand to 30. NTT-Com says it is the first company to offer such a service.

Arcstar Universal One Virtual Option clients can create flexible, secure, low-cost, on-demand networks simply by installing an app on a PC, smart phone or similar device, or by using an adapter. Integrated management and operation of newly created virtual networks will be possible using the NTT-Com Business Portal, which greatly reduces the time to add or change network configurations.  Studies from NTT-Com show clients can expect to reduce costs by up to 60% and shorten the configuration period by up to 80% compared with conventional methods.


*Yukio Ito is a board member of the Open Networking Foundation and Senior Vice President of Service Infrastructure at NTT Communications Corporation (NTT-Com) in Tokyo, a subsidiary of NTT, one of the largest telecommunications companies in the world.

Virtually Networked: The State of SDN

We have all heard about the hectic activity surrounding several network virtualization initiatives. The potpourri of terms in this space (SDN/OpenFlow/OpenDaylight etc.) is enough to make one’s head spin. This article will try to lay out the landscape as of the time of writing and explain how some of these technologies are relevant to independent broadband service providers.

In the author’s view, Software Defined Networking (SDN) evolved with the aim of freeing the network operator from dependence on networking equipment vendors for developing new and innovative services, and was intended to make networking services simpler to implement and manage.

Software Defined Networking decouples the control and data planes, thereby abstracting the physical architecture from the applications running over it. Network intelligence is centralized and separated from the forwarding of packets.

SDN is the term used for a set of technologies that enable the management of services over computer networks without worrying about the lower level functionality – which is now abstracted away. This theoretically should allow the network operator to develop new services at the control plane without touching the data plane since they are now decoupled.

Network operators can control and manage network traffic via a software controller – mostly without having to physically touch switches and routers. While the physical IP network still exists – the software controller is the “brains” of SDN that drives the IP based forwarding plane. Centralizing this controller functionality allows the operator to programmatically configure and manage this abstracted network topology rather than having to hand configure every node in their network.

SDN provides a set of APIs to configure the common network services (such as routing, traffic management and security).

OpenFlow is one standard protocol that defines the communication between such an abstracted control and data plane. OpenFlow was defined by the Open Networking Foundation and allows direct manipulation of physical and virtual devices. OpenFlow would need to be implemented on both sides: in the SDN controller software as well as in the SDN-capable network infrastructure devices.
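To make the controller/switch split concrete, here is a minimal toy model in Python (invented classes, not a real OpenFlow library or controller API): the controller holds the network-wide policy and pushes match-action rules down to each device's flow table.

```python
# Toy model of the SDN split described above: a centralized controller
# programs simple match-action flow tables on every switch it manages.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []          # data plane: ordered match-action rules

    def install_flow(self, match, actions, priority=0):
        self.flow_table.append({"match": match, "actions": actions,
                                "priority": priority})
        self.flow_table.sort(key=lambda r: -r["priority"])

    def forward(self, packet):
        for rule in self.flow_table:  # highest-priority matching rule wins
            if all(packet.get(k) == v for k, v in rule["match"].items()):
                return rule["actions"]
        return ["send-to-controller"]  # table miss

class Controller:
    """The centralized 'brains': one place to program the whole network."""
    def __init__(self, switches):
        self.switches = switches

    def block_host(self, ip):
        for sw in self.switches:      # one API call, network-wide effect
            sw.install_flow({"src_ip": ip}, ["drop"], priority=100)

s1, s2 = Switch("s1"), Switch("s2")
ctl = Controller([s1, s2])
ctl.block_host("10.0.0.9")
print(s1.forward({"src_ip": "10.0.0.9"}))   # ['drop']
print(s2.forward({"src_ip": "10.0.0.5"}))   # ['send-to-controller']
```

The point of the sketch is the operational model: one programmatic call at the controller changes behavior on every switch, instead of hand-configuring each node.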

How would SDN impact an independent broadband service provider? If SDN lives up to its promise, it could provide the flexibility in networking that telcos have needed for a long time. From a network operations perspective, it has the potential to revolutionize how networks are controlled and managed today, making it a very simple task to manage physical and virtual devices without ever having to change anything in the physical network.

However, these are still early days in the SDN space. Several vendors have implemented software controllers, and the OpenFlow specification appears to be stabilizing. OpenDaylight is an open platform for network programmability to enable SDN. OpenDaylight has just issued its first software release, Hydrogen, which can be downloaded as open source software today. But this is not the only approach to SDN; there are vendor-specific approaches that this author will not cover in this article.

For independent broadband service providers wishing to learn more about SDN – it would be a great idea to download the Hydrogen release of OpenDaylight and play with it – but don’t expect it to provide any production ready functionality. Like the first release of any piece of software there are wrinkles to be ironed out and important features to be written. It would be a great time to get involved if one wants to contribute to the open source community.

For the independent broadband service providers wanting to deploy SDN – it’s not prime-time ready yet – but it’s an exciting and enticing idea that is fast becoming real. Keep a close ear to the ground – SDN might make our lives easier fairly soon.

[Editor’s Note: For more great insight from Kshitij about SDN and other topics, please go to his website at http://www.kshitijkumar.com/]

Viodi View – 09/06/13

Click here to view stories of the heartland

The telecommunications traffic that traverses the fiber optic networks of rural telecom providers is secondary to the real value of these locally-owned telecom operators. The larger value these entities bring is the positive impact they have on their respective communities. Great bandwidth is a big part of that impact and is the part that gets measured in Washington, D.C.

It is the intangible (intangible from afar, that is), that makes the difference to the long-term health of the rural communities served by these entities.  This intangible value comes from the owners, managers and employees living in the community, serving on local boards, organizing programs for their youth and doing whatever it takes to improve their hometowns. They are in it for the long-haul, which isn’t typical of today’s Internet-driven, fad-driven online world we inhabit (see Stories of the Heartland for examples).


Cooperation Needed Across the Video Food Chain

Ed Holleran of Atlantic Broadband is interviewed at the Indy Show.
Click to view

“It comes down to the consumer,” said Edward T. Holleran, President and Chief Executive Officer of Atlantic Broadband. He suggested a laser-like focus on meeting the needs of the customer is more important than ever, given changes in tastes, technology and competition.

He makes the point that broadband has subsidized operators’ video product for some time and that all parts of the content food chain will need to cooperate to prevent an impasse, such as the many battles over retransmission consent over the past several years.

Click here to view and read more.


A Canary in the Pay TV Coal Mine

Ken Pyle interviews Bryan Rader at BBC 2013.
Click to view

“They care more about broadband capacity than choice of TV providers,” said Bryan Rader of Bandwidth Consulting. Rader was referring to apartment dwellers who are leading the way in cutting the cord. In a sense, this echoes what happened a few years back when these same customers were among the first to adopt over-the-top, VoIP services. In this interview, he talks about the best way property owners can take advantage of this trend, as well as what this trend means for the overall market.

Click to view.


A Global Reach of People and Machines

Erik Kling of Vodafone is interviewed at Connections 2013.
Click to view

What may not be so obvious in this story, however, is that, through its global operations, Vodafone has a presence in the U.S. beyond Verizon. In the above interview, Erik Kling, Vice President of New Business Development at Vodafone Global Enterprise,  discusses how Vodafone helps connect disparate global organizations. And those connections aren’t just about connecting people, but connecting machines to machines.

Click to view.


Identifying Power Usage and Taking Action

Ken Pyle interviews Roderick Morris of Opower at the 2013 Smart Energy Summit.
Click to view

Today’s Wall Street Journal quotes Opower in a column about different ways utilities are engaging customers to be more proactive about energy management. Earlier this year, we caught up with Roderick Morris of Opower in this interview at Parks Associates’ 2013 Smart Energy Summit. Opower works with utilities of various sizes to help consumers improve the efficiency of energy usage. Morris provides insight into techniques for helping people be more efficient, marketing it and using big data to help identify what changes make a difference.

Click to view.


Outstanding Sessions at 2013 Hot Interconnects Conference by Alan Weissberger

Image depicts Incremental Adoption of Network as a Service.
Click to read more

In its 21st year in Silicon Valley, the Hot Interconnects Conference addresses the state of the art and future technologies used in Data-Center Networking. The high performance communications aspects are also of interest to the Supercomputing community. This article provides an overview of the presentation on the Open Compute Project (OCP), a thriving consumer-led community dedicated to promoting more openness and a greater focus on scale, efficiency, and sustainability in the development of data center infrastructure technologies. Additionally, an overview of the presentation on NaaS (Network as a Service) is given. The research in this area has the potential for a big impact on ISPs in the coming years.

Click here to read more.


Disposable Phone Numbers

Describing itself as the “Snapchat for calls,” RingMeMaybe combines old school phone numbers with the latest in destructible identities. RingMeMaybe provides a phone number that self-destructs after five days. Applications for this no-cost iOS download (an Android version is in the works) include dating and classified advertising; really, anything where a person wants to keep their personal number private. It seems like illicit activities would be a use-case for this sort of app, although it could have some interesting applications in direct marketing. Phone numbers can be tagged, so it is possible to associate meta-data with an incoming call (e.g. the jerk I met at a bar).

Another app, Burner, provides similar functionality, and both apps provide initial credits to get one started at no cost. Via its public relations agency, the founder of RingMeMaybe indicates, “RingMeMaybe has a deal with a telecom operator and respects FCC regulations.”  Further, they follow the industry standard for recycling numbers, waiting five weeks with zero inbound communications before a number is declared recyclable.
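As a hypothetical illustration of that recycling rule (the five-week quiet period comes from the article; the data model and function names are invented), eligibility might be checked like this:

```python
# Sketch of the number-recycling rule: a retired number may be reissued
# only after five weeks with no inbound calls or texts.
from datetime import datetime, timedelta

QUIET_PERIOD = timedelta(weeks=5)

def recyclable(number, now):
    """True once the quiet period has elapsed since the later of
    retirement and the last inbound communication."""
    last_inbound = number.get("last_inbound")   # None if never contacted
    if last_inbound is None:
        reference = number["retired_at"]
    else:
        reference = max(last_inbound, number["retired_at"])
    return now - reference >= QUIET_PERIOD

now = datetime(2013, 9, 6)
quiet = {"retired_at": datetime(2013, 7, 1), "last_inbound": None}
active = {"retired_at": datetime(2013, 7, 1),
          "last_inbound": datetime(2013, 8, 20)}
print(recyclable(quiet, now), recyclable(active, now))  # True False
```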

This sort of disposable privacy app seems like a good one for independent broadband operators to offer, as it would be complementary to other offerings.


The Korner – A Challenge Tying It All Together

An image from the 2013 Challenger Jamboree held at Moreland Little League.
Click to view.

It is a struggle to keep a community together. It is easy to get caught up in the noise and lose sight of the bigger picture. The more difficult thing is to set aside differences, find common ground and move forward. This has been one take-away from my two years as president of Moreland Little League.

Part of the reason I took this role is that I wanted to get an idea of the challenges my friends in the heartland face, as they are often on multiple boards, are volunteer firefighters and are leaders in their respective communities.

Like independent rural telecom providers, one value that Little League provides is it gives a bunch of otherwise disconnected citizens something in common and is the thread that can stitch together community. In urban areas especially, its system of boundaries forces a tight geographic community, which is unlike the direction of youth sports today where the community is increasingly based around performance. With Little League, everyone plays, regardless of income or ability.

Like the aforementioned rural telecommunications providers, the primary benefit is not obvious. Baseball is just the vehicle to impart positive values to our youth, and, in the process, it helps adults if they are open to learning. It has certainly taught me a great deal. Everything that is good about Little League is manifested in its Challenger program.

It was an honor to be part of the 2013 Challenger Jamboree, which featured teams from south Silicon Valley to San Francisco to the north. This truly was an extraordinary event that wouldn’t have been possible without the hard work by so many volunteers and the generous donations by the many sponsors.

It is one thing to read and understand the impact this program has on the children and their families, but it doesn’t compare to the tangibility of being there. The next best thing to being there is this video, which provides a flavor for the day’s events and what this great program is all about.

Click here to view.

Outstanding Sessions at 2013 Hot Interconnects Conference

Introduction:

In its 21st year in Silicon Valley, the Hot Interconnects Conference addresses the state of the art and future technologies used in Data-Center Networking. The high performance communications aspects are also of interest to the Supercomputing community. The 2013 Hot Interconnects Conference Program can be found here.

We regret that we missed the first keynote on August 21st: Scale and Programmability in Google’s Software Defined Data Center WAN by Amin Vahdat of UCSD/Google. Attendees I spoke with were quite impressed. This article summarizes two excellent Hot Interconnects sessions this author did attend:

  1. Overview and Next Steps for the Open Compute Project by John Kenevey, Facebook and the Open Compute Project
  2. Networking as a Service (NaaS) by Tom Anderson, PhD and Professor at University of Washington (Tom received the prestigious IEEE 2013 Kobayashi Award after his talk)

1. Open Compute Project (OCP):

Mr. Kenevey provided an overview of the Open Compute Project (OCP), a thriving consumer-led community dedicated to promoting more openness and a greater focus on scale, efficiency, and sustainability in the development of data center infrastructure technologies. A brief history of the project was presented as well as its vision for the future.

The Open Compute Project (OCP) goal is to develop technologies for servers and data centers that are referred to as “open hardware,” because they adhere to the model traditionally associated with open source software projects.  Read more here

The most intriguing new OCP project mentioned was one to develop an open network switch, using silicon photonics from Intel and data center switching silicon from Broadcom. The open network switch was described as “disaggregated” in that its functionality is distributed amongst “off the shelf” modules and/or silicon- in this case, from Intel and Broadcom. The Networking project will focus on developing a specification and a reference box for an open, OS-neutral, top-of-rack switch for use in the data center.

Intel was said to have been working on silicon photonics for 10 years (After the conference, Victor Krutul, Director, External Marketing for Intel’s Silicon Photonics Operation confirmed that via email.  Intel submitted a paper to Nature magazine in Dec 2003 titled, “A high-speed silicon optical modulator based on a metal–oxide–semiconductor capacitor“).   Facebook, which builds its own data center switching gear, has been working with Intel for 9 months on silicon photonics, which is NOT yet an announced product.

Intel’s Research web site states: “Intel and Facebook are collaborating on a new disaggregated, rack-scale server architecture that enables independent upgrading of compute, network and storage subsystems that will define the future of mega-datacenter designs for the next decade. The disaggregated rack architecture includes Intel’s new photonic architecture, based on high-bandwidth, 100Gbps Intel® Silicon Photonics Technology, that enables fewer cables, increased bandwidth, farther reach and extreme power efficiency compared to today’s copper based interconnects.” Read the news release here. Intel announced an Open Networking Platform this year, which may also be of interest to readers. It’s described here.


SiPh Addendum from Victor Krutul of Intel:

“Silicon Photonics is a whole new science and many technical challenges had to be solved, many which other scientists called impossible.  For example the 1G modulator referenced in the Nature paper was deemed impossible because the world record at the time was 20Mhz.  BTW, we announced 10G a year later and 40G 2 years after that.  Our esteemed Research team has been granted over 200 patents with 50-10 more in process.  As you can see a lot of heavy lifting.”


The OCP Networking group is also working with Broadcom to get them to contribute their chip design spec as “open source hardware.” The Networking group hopes to have something implementable by year-end 2013, Kenevey said. During the Q and A, John stated that the open network switch technology could be used for optical interconnects as well as storage area networking. That would be in addition to traditional data center switch-to-switch and switch-to-compute server connectivity.

In response to a question from this author, John said the OCP networking group focus is connectivity within the data center, and not interconnecting multiple data centers – which would be a traditional “greenfield” deployment. Currently, the OCP Networking group has no schedule for taking their specifications in this area to an official standards body (like IEEE or IETF). They think that would be very “resource draining” and hence slow their forward progress.


2.  Networking as a Service (NaaS):

a] Overview

Professor Anderson’s main thesis was that there are many current Internet problems that ISPs can’t solve, because they only control a small piece of the Internet. “Today, an ISP offers a service that depends on trustworthiness of every other ISP on the planet,” he said.  A large majority of Internet traffic terminates on a different ISP network and transits several carrier networks on the way there.

Quality of service, resilience against denial of service attacks, route control, very high end-to-end reliability, etc. are just a few items on a long list of features that customers want from the Internet but can’t have, except at enormous expense.  Many Internet problems result in outages, during which the customer has no Internet access (even if the problem is not with their own ISP).

The figure below shows that 10% of Internet outages account for 40% of the downtime experienced on the Internet.

A slide that characterizes Internet outages.
Slide courtesy of Tom Anderson, University of Washington

Taking advantage of high performance network edge packet processing software, ISPs will be able to offer advanced services to both local and remote customers, according to Tom. This is similar to how data center processing and storage are sold to remote users today: as a service. The benefit will be to untie the knot limiting the availability of advanced end-to-end network services. In this new model, each ISP will only need to promise what it can reliably provide over its own network resources. The end-to-end data properties are proposed to be achieved by the end-point customer (or end ISP) stitching together services from a sequence of networks along the end-to-end path.

b] Drilling down into NaaS motivation, architecture and functionality:

1. Motivation: ISPs are dependent on other ISPs to deliver the majority of Internet traffic to remote destinations:

  • To coordinate application of route updates
  • To not misconfigure routers
  • To not hijack prefixes
  • To squelch DDoS attacks

2. Several problems with the Internet often result in outages and poor performance.  Diagnosis is complicated by lack of ISP visibility into the entire WAN (i.e. end to end path). Internet problems are mainly due to:

  • Pathological routing policies
  • Route convergence delays
  • Misconfigured ISPs
  • Prefix hijacking
  • Malicious route injection
  • Router software and firmware bugs
  • Distributed denial of service

Yet, there are known technical solutions to all of these issues! A trustworthy network requires fixes to all of the above, according to Professor Anderson.

3. NaaS as a Solution:

NaaS (Network as a Service) is a way of constructing a network where ISPs only promise what they can directly deliver through their own networks. This has the potential to provide much better security, reliability and worst-case performance, and to be capable of incremental adoption (as opposed to today’s Internet).

Value added services (e.g. multicast, content-centric networking) might also be offered at lower costs under NaaS.

4. In the NaaS scenario, either the destination enterprise customer or end ISP would:

  • Stitch together end- to- end paths from the hops along the way
  • Based on advertised resources from each ISP
  • Portions of path may use plain old Internet
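A toy sketch of that stitching step (all PoP names, ISP labels and bandwidth figures are invented): treat each per-ISP advertisement as an edge and search for a sequence of segments that meets the customer's bandwidth floor.

```python
# Sketch of NaaS path stitching: given per-ISP advertisements of
# PoP-to-PoP segments, each with its own guaranteed bandwidth, the end
# customer (or end ISP) composes an end-to-end path.
from collections import deque

# advertisements: (from_pop, to_pop, isp, guaranteed_mbps)
ads = [
    ("SEA", "CHI", "isp-a", 500),
    ("CHI", "NYC", "isp-b", 200),
    ("SEA", "DEN", "isp-c", 1000),
    ("DEN", "NYC", "isp-c", 1000),
]

def stitch(src, dst, min_mbps):
    """Breadth-first search over advertised segments meeting the floor."""
    usable = [a for a in ads if a[3] >= min_mbps]
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        pop, path = queue.popleft()
        if pop == dst:
            return path
        for f, t, isp, bw in usable:
            if f == pop and t not in seen:
                seen.add(t)
                queue.append((t, path + [(f, t, isp)]))
    return None  # no qualifying NaaS path; fall back to the plain old Internet

print(stitch("SEA", "NYC", 400))
# [('SEA', 'DEN', 'isp-c'), ('DEN', 'NYC', 'isp-c')]
```

Note how the 200 Mbps CHI-NYC segment is skipped: each ISP only promised what its own segment guarantees, and the customer routes around the one that falls short.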

5. Why now for NaaS?

  • Distributing topology updates on a global scale is now practical. There is no longer an engineering need to do localized topology management.
  • In addition, high performance packet processing at the network edge (10 Gbps per core with minimum sized packets) is now possible, which makes the NaaS schema realizable today.
  • Finally, “ISPs have made considerable progress at improving reliability of their own internal operations, which is often, two orders of magnitude more reliable than the global Internet,” according to Tom.
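A quick back-of-envelope check of the 10 Gbps-per-core figure above: at 10 GbE line rate, a minimum-sized 64-byte Ethernet frame plus its 8-byte preamble and 12-byte inter-frame gap occupies 84 bytes on the wire, which works out to roughly 14.9 million packets per second per core.

```python
# Back-of-envelope for "10 Gbps per core at minimum-sized packets":
# each 64-byte frame costs 84 bytes of wire time once the preamble
# and inter-frame gap are included.
LINE_RATE = 10e9                      # bits per second
WIRE_BYTES = 64 + 8 + 12              # frame + preamble + inter-frame gap

pps = LINE_RATE / (WIRE_BYTES * 8)
ns_per_packet = 1e9 / pps
print(f"{pps/1e6:.2f} Mpps, {ns_per_packet:.0f} ns per packet")
# 14.88 Mpps, 67 ns per packet
```

At roughly 67 ns per packet, there is little budget per packet, which is why this throughput on a single core only became plausible with modern high-performance packet I/O at the network edge.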

6. NaaS Design Principles:

a) Agile and reliable ISPs

  • Flexible deployment of new functionality at the edge (key NaaS principle)

b) Each ISP promises only what it can guarantee through its own network

  • Packet delivery, QoS from PoP (Point of Presence) to PoP

c) Incentives for incremental adoption (please refer to the figure below)

Image depicts Incremental Adoption of Network as a Service.
Image Courtesy of Tom Anderson, University of Washington
  • Each ISP charges for its added services, without waiting for its neighbors to adopt NaaS principles

d) Security through minimal information exposure

  • Simpler protocols result in a smaller attack surface and hence better security threat mitigation

7. Proposed ISP network architecture and functionality:

  • Software processing at the edge, hardware switching in the core
  • Software packet processing: 10 Gbps per core on modern servers (min-sized packets) could be extended to ISP network edge processing
  • Fault tolerant control plane layer with setup/teardown circuits, install filters, etc.
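The three architectural elements above can be sketched together in a toy model (all names and structures are this author's invention, not from Tom's talk): the control plane sets up and tears down circuits and installs filters, while the software edge classifies packets and labels them so the hardware core can switch on labels alone.

```python
# Illustrative sketch of the proposed edge/core split: software edge
# processing maps packets onto pre-installed core circuits; the control
# plane only manages circuits and filters.

class ControlPlane:
    def __init__(self):
        self.circuits = {}            # circuit_id -> (ingress_pop, egress_pop)
        self.filters = []             # source prefixes dropped at the edge
        self.next_id = 1

    def setup_circuit(self, ingress, egress):
        cid = self.next_id
        self.circuits[cid] = (ingress, egress)
        self.next_id += 1
        return cid

    def teardown_circuit(self, cid):
        self.circuits.pop(cid, None)

    def install_filter(self, src_prefix):
        self.filters.append(src_prefix)

def edge_process(packet, cp, circuit_for_dst):
    """Software edge: filter first, then label for core hardware switching."""
    if any(packet["src"].startswith(p) for p in cp.filters):
        return None                    # dropped at the edge (e.g. a DDoS filter)
    cid = circuit_for_dst[packet["dst_pop"]]
    return {"circuit": cid, "payload": packet}   # core switches on the label

cp = ControlPlane()
cid = cp.setup_circuit("SEA-PoP", "NYC-PoP")
cp.install_filter("203.0.113.")
table = {"NYC-PoP": cid}
print(edge_process({"src": "198.51.100.7", "dst_pop": "NYC-PoP"}, cp, table))
print(edge_process({"src": "203.0.113.9", "dst_pop": "NYC-PoP"}, cp, table))  # None
```

The design point is that all per-packet intelligence lives at the edge in software, leaving the core to do nothing but fast label switching in hardware.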

Closing Comment and Follow Up Interview:

We found it quite refreshing that Professor Anderson didn’t mention the need for SDN, NFV, NV or other open networking buzzwords in NaaS! Instead, the software networking aspect is simply to do much more transit packet processing at the ISP edge and hardware switching in the core. That seems like a better model for delivering Internet and WAN services to both business and residential customers.

In a follow up telephone interview after the conference, Tom disclosed that the NSF is “largely funding NaaS as a research project under its Future Internet Architecture (FIA) Program.”  Cisco and Google are also financing this research (as per the title page of Tom’s superb presentation).  The FIA program is called NEBULA and described here.

Four faculty members (from the University of Washington and UC Berkeley) along with several students are involved in the NaaS research project. The NaaS team has received “generally positive feedback from other researchers and ISPs,” Tom said during our phone call.

The NaaS research group plans to build and test a working prototype that would look like a small ISP network that encompasses the NaaS functionality described above. That work is expected to be completed and evaluated within two years. After that, the NaaS concept could be extended to inter-domain ISP traffic. We wish him luck and hope NaaS succeeds!

References:

Other Hot Interconnect Sessions:

Video on Day 1 highlights of the Conference:

Infonetics Survey: Network Operators reveal where they plan to first deploy SDN and NFV

Introduction:

Top 5 network locations operators expect to deploy SDN and NFV by 2014
Image courtesy of Infonetics

There’s been a lot of hype and even more uncertainty related to “Carrier SDN,” and in particular the use of the OpenFlow protocol in carrier networks between a centralized control plane entity and data plane entities residing in “packet forwarding” engines built from commodity silicon with minimal software intelligence.  Many carriers are interested in the ETSI NFV work, which will NOT produce any standard or specifications.  This author has been contacted by several network operators to assess their NFV plans (please note that such consulting is not free of charge).  As ETSI NFV will make contributions to ITU-T SG13 work on future networks, it may be several years before any implementable standard (ITU Recommendation) is produced.

For its just-released SDN and NFV Strategies survey, Infonetics Research interviewed network operators around the globe, which together represent ~53% of the world’s telecom capex and operating revenue.  The objective of the survey was to determine the timing and priority of the many use cases for their software-defined network (SDN) and network function virtualization (NFV) projects.

SDN And NFV Strategies Survey Highlights:

  • Virtually all major operators are either evaluating SDNs now or plan to do so within the next 3 years
  • SDN and NFV evaluation and deployments are being driven by carriers’ desire for service agility resulting in quicker time to revenue and operational efficiency
  • The top 5 network domains named by operators when asked where they plan to deploy SDNs and NFV by 2014: Within data centers, between data centers, operations and management, content delivery networks (CDNs), and cloud services
  • 86% of operators are confident they will deploy SDN and NFV technology in their optical transport networks as well, at some point once standards are finalized
  • Study participants rated Content Delivery Networks (CDNs), IP multimedia subsystems (IMS), and virtual routers/security gateways as the top applications for NFV

“For the most part, carriers are starting small with their SDN and NFV deployments, focusing on only parts of their network, what we call ‘contained domains,’ to ensure they can get the technology to work as intended,” explains Michael Howard, co-founder and principal analyst for carrier networks at Infonetics Research.

“But momentum for more widespread use of SDN and NFV is strong, as evidenced by the vast majority of operators participating in our study who plan to deploy the technologies in key parts of their networks, from the core to aggregation to customer access,” Howard adds. “Even so, we believe it’ll be many years before we see bigger parts or a whole network controlled by SDNs.”

About The Survey:

Infonetics’ July 2013 27-page SDN and NFV survey is based on interviews with purchase-decision makers at 21 incumbent, competitive and independent wireless operators from EMEA (Europe, Middle East, Africa), Asia Pacific and North America that have evaluated SDN projects or plan to do so. Infonetics asked operators about their strategies and timing for SDN and NFV, including deployment drivers and barriers, target domains and use cases, and suppliers. The carriers participating in the study represent more than half of the world’s telecom revenue and capex.

To learn more about the report, contact Infonetics:

References:

  1. Video interview with Infonetics’ co-founder Michael Howard on What’s really driving demand for SDN/NFV
  2. SDN and NFV: Survey of Articles Comparing and Contrasting
  3. Move Over SDN – NFV Taking the Spotlight – Cisco Blog
  4. Subtle SDN/NFV Data Points
  5. “Service Provider SDN” Network Virtualization and the ETSI NFV ISG
  6. The Impact on Your IT Department of Software Defined Networking (SDN) and Network Functions Virtualization (NFV)
  7. SDNs and NFV: Why Operators Are Investing Now (archived webinar)

2013 TiECon- Part 3: Software Defined Infrastructure Presentations & Panels (Continued)

Introduction:

In this third and final article on the information packed 2013 TiECon, we summarize key messages from the second half of the SDI Track on May 17th, including the afternoon keynote and two panel sessions. The first article covered all of the TiECon opening keynotes. The second article summarized the invited SDI presentations from the morning of May 17th (but not the panel sessions covered in this post).


PM Keynote: Software Defined Infrastructure – The Coming Wave of Datacenter Disruption

by Steve Herrod, PhD – General Catalyst Partners

Large Data Center (DC) operators are under great pressure as they attempt to learn how to bring the benefits of public clouds to their DCs.  Those benefits of the cloud include: agility, flexibility, scalability, and reduction in total cost of ownership. The DC operators recognize they have to rebuild their DC infrastructure to realize those goals.  Compute server virtualization was the first step – it’s been widely deployed and has been hugely successful in improving efficiency and lowering compute system cost.

In the Software Defined Data Center (SD-DC), all infrastructure (compute, storage, network, management) is virtualized and delivered as a service. Control of the DC is then entirely driven by software.

Software Defined Storage is emerging together with a new class of applications. The objectives here include:

  • Move compute (servers) closer to storage
  • Make use of local disk and flash memory
  • Get better utilization of flash memory
  • Take advantage of economies of scale (e.g. get good pricing on memory)
  • Auto provisioning
  • Common automated management across all tiers

Today’s network can be a barrier to effective cloud computing. Network virtualization will help break that barrier. It provides a separate logical view of the network to services and applications, which is independent of how the physical network is laid out (topology) and implemented. Automated management should be an adjunct to Network Virtualization (logically off to the side or residing on top of it).

How to bring efficiency and automation to high level services in the SD-DC? Steve recommended the following:

  • Migrate entire DC (and beyond) to SDI.
  • Traffic management and security policies should be part of SDI.
  • Bring firewall and intrusion detection close to workloads, which results in a tighter security policy.
  • Provide “Instant” provisioning of the network (L2-L3) as well as L4-L7 services.
  • Provide Disaster Recovery (DR) without the need for pinging.
  • Management should wrap all of the software-defined services together.
  • Treat SDI management as a “big data problem.”

The figure below is a candidate solution for the SD-DC.  It depicts a Software Defined Network with Open Flow protocol used to communicate between the Physical Network (Data Plane) and centralized SDN Controller (Control plane):

Software Defined Network is shown with OpenFlow control.
Image courtesy of IBM
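The split the figure describes can be made concrete with a toy flow table: in OpenFlow, the centralized controller installs match/action entries over the southbound protocol, and the switch's data plane only performs lookups. The sketch below is purely illustrative; the class and field names are our own invention, not any vendor's or the ONF's API:

```python
# Toy illustration of the OpenFlow split: a controller (control plane)
# installs match/action entries, and a switch (data plane) only does lookups.
# All class and field names here are hypothetical, for illustration only.

class FlowTable:
    """Minimal match/action table, as maintained in a switch's data plane."""
    def __init__(self):
        self.entries = []  # list of (match_dict, action) in priority order

    def install(self, match, action):
        """Invoked on behalf of the controller (e.g. via an OpenFlow FlowMod)."""
        self.entries.append((match, action))

    def lookup(self, packet):
        """Pure data-plane operation: first matching entry wins."""
        for match, action in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"  # table miss -> punt to control plane

table = FlowTable()
# The centralized controller programs forwarding behavior remotely:
table.install({"dst_ip": "10.0.0.2"}, "output:port2")
table.install({"dst_ip": "10.0.0.3"}, "drop")

print(table.lookup({"dst_ip": "10.0.0.2"}))  # output:port2
print(table.lookup({"dst_ip": "10.9.9.9"}))  # send_to_controller
```

Note that a table miss is punted back to the controller, which is precisely what makes the centralized control plane the single point of forwarding policy in this model.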

Panel Session: Software Defined Networks (SDN) Adoption — Crossing the chasm from Early Adopters to Mainstream:   Google, NEC-America, Orange-Silicon Valley

Key points made during this panel session:

Google:

There’s a steep growth in number of network users, need for more bandwidth, different types of information (e.g. video) transmitted, and a variety of services delivered over broadband networks. But the network “economies of scale are now terrible.” SDN provides better management, orchestration and control of network resources.

Global knowledge of the network topology facilitates quickly moving functionality from one (Google) DC location to another. Note, Google’s first SDN implementation was in an internal network which interconnects all their DCs.

“The industry needs more clarity in terminology, e.g. SDN, Open Flow, NFV, NV, etc.”

NEC-America:

Commonality is needed across all parts of the IT infrastructure (compute, storage, network, management), in particular for auto-provisioning, service automation, and interactions with the rest of the system. More network interactions with many more devices, appliances, load balancers, etc. call for a new, software-based approach to networking. SDN may be used in Content Delivery Networks (CDNs) (no explanation or justification was provided).

There are still a lot of gaps in SDN before it can become mainstream. These include: QoS control, scalability, details on orchestration/provisioning, and network resiliency (protection and/or path restoration on failure). SDN also needs more business drivers to create a mass market.

Orange-Silicon Valley:

How do we innovate while ensuring openness for the Data Plane? At a minimum, the Data Plane needs to access flow tables and semantics (from the Control plane) in a standardized way (ONF is standardizing Open Flow for that very purpose).  Standardized communication between Control and Data Plane will lead to reduced CAPEX and (more importantly) OPEX for network operators. It will also facilitate the creation of new revenue generating services for network service providers, like Orange. Telcos should think of the network as a service (NaaS), with greater agility and lower OPEX than current networks offer.

ETSI NFV is adjacent to SDN. There are mobile network use cases for both. How SDN and NFV are related is an open question. Look at the mobile packet core (Evolved Packet Core =EPC for LTE) as a good place to virtualize the network.

We need a full implementation of SDN (not just Open Flow) as well as truly open APIs to have a programmable network. Network as a Service is not quite here yet. We need to personalize the network first, i.e. the user “owns” the network for a particular session.


Panel: Architecting a Scalable Software Defined Network (SDN) Solution: Juniper, Cisco, Big Switch

Key points made during this panel session:

  • Cloud computing, mobile data & connections, and social networking are all putting pressure on the existing network infrastructure (both wireless and wireline).
  • Users want more control of the network and more automation.
  • Need a level of abstraction that we don’t have today.
  • Juniper: SDN can provide more control of the network while being decoupled from network equipment and devices.
  • Big Switch: Industry needs to bring good software design into network architecture. This includes: loose coupling, modularity, APIs in between layers.
  • Cisco: SDI solutions for network service providers will be different than those for enterprise or Data Center customers.
  • Cisco: IT organizations are not set up to take advantage of SDN and this represents a barrier to adoption. There are also perceived risks with any new technology (such as SDN). IT skill sets need to change to take advantage of what SDN can offer.
  • Big Switch: Corporate culture and skill set gap is an issue for SDN deployments. “It will take people a while before they can wrap their minds around SDN.”
  • Juniper: SDN will simplify the network and lower barriers to adoption. If using SDN makes a company more efficient, it will realize more revenue and be able to do more with less people, e.g. network administrators.
  • Juniper: What is the correct abstraction for networks? He doesn’t know if SDN is the answer and doesn’t believe in standardized APIs that enable users to “program the network.”
  • Big Switch: Open Daylight consortium (open source software) will permit everyone to leverage a commodity SDN controller.
  • Cisco: Industry needs to bring eco-system together to avoid market fragmentation (this author thinks that this is the #1 stumbling block to SDI).
  • Big Switch: Customers (users) care about benefits of the application(s). They don’t care about implementation details of SDN.
  • Juniper: DC scale is huge. We have an agile compute infrastructure now (through server virtualization), but neither the DC network nor storage is agile or efficient. That’s the big opportunity for SDI. He recommends configuring the DC network to handle storage information exchanges, as well as packet forwarding for compute tasks. Note, that’s a huge change from the current DC environment, where there are separate networks for storage (Fibre Channel based SANs) and compute (Ethernet).
  • Big Switch: As everyone is working on SDN for DC networks, start-ups should look for less crowded SDN opportunities. Those might include: campus LAN, enterprise networks, or mobile networks (campus and wireless telco). Advice: be focused and nimble (they are opposites of one another), be prepared to go from plan A to plan B, carefully pick the ecosystem to work in and not do anything else.
  • Cisco: A great start-up opportunity is SDN professional services and support. Start-ups should “go where no one else is.” He says, “Plexxi stuff is awesome.” Pick something to work on that gives you a sustainable advantage over the competition.
  • Juniper: Start-ups should focus on their value add when making the key decision of what to do and not do in the SDI space. “Think about how you are going to make your first dollar and who is on your leadership team.”
  • Cisco: SDN is a tool that may solve customers’ problems. (No explanation or justification provided)
  • Juniper: Customers say “I want SDN. What is It?” That indicates potential SDN users are confused by all the hype and vendor claims.

A Service Provider view of SDN– Victa McClelland of Ericsson

Victa talked about Ericsson’s SDN trial with Telstra. Due to time and space limitations, we cannot cover it in this article, but refer the reader to this Ericsson presentation from the 2013 Open Network Summit: SERVICE PROVIDER SDN MEETS OPERATOR CHALLENGES

Panel: How do you manage the Software Defined Infrastructure (SDI) – Use Cases and Technology: AppDynamics, BMC, VMWare

Again, due to time and space limitations, we cannot cover this panel session in this article. However, we emphasize that management will be a crucial component of SDI. There are no standards or open specifications for SDI management at this time, so it will be vendor specific for quite some time.

http://tiecon.org/content/how-do-you-manage-software-defined-infrastructure-sdi-use-cases-and-technology

Closing Comment:

We hope readers enjoyed this three part series on 2013 TiECon, which highlighted the SDI Track sessions. We will be covering more of that topic in future posts, including the SDN related sessions and discussions at last week’s Global Press & Analyst Summit at the Computer History Museum in Mt View, CA.  Bob Metcalfe’s closing keynote is at:

http://community.comsoc.org/blogs/alanweissberger/bob-metcalfes-closing-keynote-ethernet-innovation-summit-may-23-2013-chm-mt-vi

2013 TiECon- Part 2: Software Defined Infrastructure Presentations

Introduction:

Software Defined Infrastructure (SDI) applies to compute, storage and the network within a data center and in the cloud.  This market segment is experiencing tremendous growth and innovation.  It is facilitating increased agility, flexibility and operational cost savings for enterprises and service providers.  The first step in SDI was compute server virtualization and that’s now mainstream.  Network and Storage virtualization are the current target areas.

While Software Defined Networking (SDN) is the new hot topic, that term is being used as an umbrella by networking vendors and service providers.  The only “standardized” version of SDN is coming out of the Open Networking Foundation (ONF is NOT a standards body).  It is based on centralized control and management, with a strict separation of Control and Data planes using the Open Flow protocol (the “Southbound API”) to communicate between them.  Network equipment vendors and Service Providers claiming they are “SDN Compatible” have some level of programmable interfaces on their network equipment, but are usually NOT compliant with the ONF architecture and Open Flow protocol (the “Southbound API”). HP products are an exception; they do seem to be compatible with the ONF architecture and Open Flow specification (see AM Keynote below).

This article summarizes the morning keynote and invited presentations at 2013 TiECon.  The third article in this series will cover the afternoon  SDI keynote and panel sessions.  Please refer to the TiECon SDI Track Agenda:  http://tiecon.org/sdi for program details.

AM Keynote: Prepare for Software Defined Networking by Dave Larson of HP

HP is a leader in deploying SDN-Open Flow switches with a claim of, “over 40 SDN switches and 20M Open Flow enabled ports shipped.”

In the context of SDN, the company views the network as a single logical fabric with a vendor specific “Northbound API” (from Control Plane module to Application entities) enabling applications to program the underlying network.  Those applications communicate with HP’s Virtual Applications Network SDN Controller, which  “delivers complete agility; enables cloud service centric management and orchestration through the Management layer,” according to Mr. Larson.

A fact sheet on this key SDN product is at: http://www.hp.com/hpinfo/newsroom/press_kits/2012/convergedcloud2012/FS_VAN.pdf

Image of SDN architecture courtesy of HP. Note, original text associated with Infrastructure block said, "29 Switches – over 15 million ports." This was replaced with the text, "HP Switches with Open Flow to/from SDN Controller."
Base Image Courtesy of HP

HP’s SDN architecture  is illustrated in the figure above.

Four examples of SDN applications using HP SDN products were briefly described by David Larson:

1.  Virtual Cloud Network– Enables scalable network automation for public cloud service providers.  Permits an enterprise to securely connect to the cloud and apply their own ‘identity’ to their cloud environment.

2.  Sentinel Security (developed with HBO)- Provides automated, real-time network security and threat detection in enterprise and cloud networks.  Deployed in Australia public schools.

3.  Load Balancing (developed with CERN researchers)- Traffic orchestration using SDN. Goal is to improve network utilization in a high performance computing environment.

4.  Unified Communications & Computing (for Lync)- Automated policy for business applications running over an enterprise campus-wide network. This application provides: simplified policy deployment, dynamic prioritization, and an enhanced user experience.

HP’s SDN vision is to provide end-to-end solutions for campus and branch offices, WANs, multi-tenant data centers and cloud.  For the WAN,  SDN capabilities include: traffic engineering, improved quality of user experience, service automation, and quick provisioning of dynamic VPN services.

The following SDN time-line was presented by Mr. Larson:

  • 1H14:  Deploy SDN controller, Sentinel and Virtual Cloud Network apps.
  • 2015:  Deploy new SDN applications using “RESTful APIs”  (Note: there is no standard for the Northbound API, so HP is suggesting the use of Representational State Transfer (REST) web services and APIs.)
  • 2016: Deploy SDN enterprise wide
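Since there is no standard Northbound API, an application talking to an SDN controller over REST might look roughly like the sketch below. The controller address, resource path, token, and JSON fields are all hypothetical, invented for illustration; they are NOT HP's actual product API:

```python
# Hypothetical sketch of a REST-style "Northbound API" call, in which an
# application asks an SDN controller to install a forwarding policy. The URL,
# path, and JSON fields below are invented for illustration only.
import json
import urllib.request

CONTROLLER = "http://sdn-controller.example.com:8443"  # hypothetical address

def install_policy(flow_match, action, token="demo-token"):
    """Build a POST request carrying a policy for the controller."""
    body = json.dumps({"match": flow_match, "action": action}).encode()
    req = urllib.request.Request(
        CONTROLLER + "/api/v1/policies",      # hypothetical resource path
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + token},
        method="POST",
    )
    # A real application would send the request and check the response:
    #   with urllib.request.urlopen(req) as resp:
    #       return json.load(resp)
    return req  # returned un-sent here so the sketch runs without a controller

req = install_policy({"dst_ip": "10.0.0.2", "tcp_port": 443}, "prioritize")
print(req.get_method(), req.full_url)
```

The appeal of REST here is exactly what HP's timeline implies: any application that can issue HTTP requests can program the network, with no controller-specific client library required.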

Introduction to SDI:  Guru Parulkar, PhD- Stanford & Open Network Research Center

Guru is one of the few SDN speakers that clearly tells you what he believes.  There is no hype, dancing around the issue, or talking out of both sides of his mouth.  Guru says that (pure) SDN is the best opportunity to come around in the last 20 years for the networking  industry.  Here’s why: we need a new network infrastructure to accommodate the current computing environment which has changed drastically in the last few years.

Compute servers are now mostly virtualized and with the huge move to cloud computing and storage, it is extremely difficult to support a virtual network infrastructure based on existing network equipment (which is closed,  vertically integrated, complex, and bloated).  SDN is that new network infrastructure, according to Guru.

SDN will bring a simpler data forwarding plane.  It will permit application builders to control functions such as traffic engineering, routing algorithms for path selection, and mobility policies. The resulting benefits to service providers, data center operators and enterprises include: reduction of CAPEX and OPEX, capability to deploy infrastructure on-demand, and enable innovation at many levels.

A diagram depicting software based infrastructure.

The figure to the right illustrates SDI used to control a cloud service provider’s data center (DC) and core network. Cloud Orchestration software interacts with both cloud resident DC Orchestration and SDN Control (of the core network) to deliver cloud services to customers. Such a core network would be purpose-built for this task and is NOT the public Internet. The cloud resident DC network uses SDN control over the physical DC network which interconnects servers and virtual machines.

…………………………………………………………………………………….

A multi-tenant Cloud Data Center with SDN Virtualization, shown below, was presented by Guru.  Each tenant has its own set of higher layer functions that reside above the Network OS.

Image of a cloud data center with SDN virtualization.

Guru is adamant that SDN overlay models will not yield the benefits of pure SDN and therefore should NOT be pursued.   He emphatically stated, “Everything should be redone to make use of the new SDN/ SDI infrastructure.  Warning to enterprises: Don’t try to maintain your legacy network.”

Guru concluded by saying that “SDI represents a major disruption- one that comes along only once in 20 years. It’s an opportunity for innovation and entrepreneurship.  SDI will be developed across (protocol) layers, technologies and domains.  The IT industry is now just at the beginning of a huge change brought about by SDI.”  And that is as clear a message as one can give!


SDN Use Case:   Albert Greenberg -Microsoft Cloud Services

Albert leads cloud networking services for Windows Azure (Microsoft’s cloud IaaS and PaaS offering).  He said that start-ups could benefit from the huge scale and elasticity of Azure, rather than use in house computing facilities or other public cloud offerings.

“The pace of data center innovation and growth is amazing.  We need software control across the protocol stack to manage the ongoing changes,”  he said. The Northbound API (from the control plane to application or management plane) is critically important for IT resource management.  The physical network used by Azure (internally) is flatter, higher speed (10G) and optimized for cloud services.  Consistent performance is realized and outages are largely prevented as a result.

The increased amount of storage in the data center puts greater pressure on the network, as there is much more data now to exchange and deliver to customers.  “Software is the only solution to manage growth and scale of cloud computing.”  As a result, Albert believes there’ll be plenty of innovation opportunities for SDI.  He would like to see greater progress on some fronts, especially specifications for federated control and IP address management.

While Greenberg said he likes the Open Flow concept and simplicity, Microsoft has instead used its own version of SDN (it’s actually network virtualization) in Windows Azure.  That implementation is based on home-grown “SDN” controllers and a network overlay using NVGRE (Network Virtualization using Generic Routing Encapsulation).  However, Microsoft plans to participate in the OpenDaylight consortium (http://www.opendaylight.org/) – a vendor-driven, Linux Foundation open source software project for SDN -Open Flow platforms.
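For readers curious what that NVGRE overlay involves: each tenant Ethernet frame is wrapped in a GRE header whose key field carries a 24-bit Virtual Subnet ID (VSID), letting many virtual networks share one physical underlay. A minimal sketch of the encapsulation step (header layout per the NVGRE spec, RFC 7637; the helper names are our own):

```python
# Sketch of NVGRE encapsulation (RFC 7637): the inner Ethernet frame is
# wrapped in a GRE header whose key field carries a 24-bit Virtual Subnet
# ID (VSID) plus an 8-bit FlowID. Helper names here are ours, not Microsoft's.
import struct

def nvgre_encap(inner_frame: bytes, vsid: int, flow_id: int = 0) -> bytes:
    """Prepend the 8-byte NVGRE/GRE header to a tenant Ethernet frame."""
    assert 0 <= vsid < (1 << 24), "VSID is 24 bits"
    flags = 0x2000            # Key Present bit set; no checksum/sequence
    proto = 0x6558            # Transparent Ethernet Bridging
    key = (vsid << 8) | flow_id
    header = struct.pack("!HHI", flags, proto, key)
    return header + inner_frame  # outer IP/Ethernet headers added separately

pkt = nvgre_encap(b"\xff" * 14, vsid=0x00ABCD, flow_id=7)
vsid = int.from_bytes(pkt[4:7], "big")
print(hex(vsid))  # 0xabcd
```

Because the tenant identity travels in the encapsulation header rather than in VLAN tags, the physical network needs no per-tenant configuration, which is what makes this overlay approach attractive at Azure's scale.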


Lightning Round SDN (start-up) Winners – I:

One Convergence,  Pertino, Plexxi

http://tiecon.org/content/sdi-lightning-round-winners-i

Lightning Round SDN Winners – II

Elastic Box, Cloud Velocity, Lyatiss

http://tiecon.org/content/sdi-lightning-round-winners-ii


Closing Comment:

One of the great things about the TiECon SDI sessions was that there were no sales pitches, vendor demos, or misleading claims of “SDN support.”  The depth of content, quality of speakers, commercial-free format, and clear, candid remarks by both speakers and panelists made for one of the best conferences on this topic in the last couple of years.  We commend the TiECon team that organized the SDI Track sessions!


Next Up:  Stay tuned for 2013 TiECon Part 3 in this series which will feature the PM keynote on “The coming wave of Data Center Disruption brought about by SDI.”  We’ll also summarize the key points made during several SDI panel sessions and touch on Service Provider views of SDN (Ericsson presenting results of their joint SDN project with Telstra in Australia).

2013 Cloud Connect Part III: Cloud as IT Disrupter; SDN as a New Virtual Network Infrastructure

Introduction:

One consistent theme during Cloud Connect 2013 was the cloud as a disrupter of IT organizations.  During the Cloud Executive Summit workshop on April 2nd, Avery Lyford of LEAP Commerce said that there were three huge areas of disruption: the mobile cloud, Big Data (analytics) and Software Defined Networking (SDN).  Each of these areas was then explored as a disrupter by three excellent speakers.  We were especially impressed with the presentation by Andre Kindness of Forrester Research, who candidly stated that SDN is an evolution, not a revolution, and that it will take 5 to 7 years for the technology to mature.

PLUMGrid’s SDN presentation on  April 5th was also very enlightening.  It’s described later in this Cloud Connect wrap-up article.

While the majority of Cloud Connect 2013 sessions focused on building private or hybrid clouds, McKinsey & Company consultants Will Forrest and Kara Sprague proposed a very different, and extremely disruptive scenario for cloud adoption.  Like IDC, McKinsey sees the future of IT (“New IT”) in  public cloud computing.  But McKinsey goes a lot further.  The prestigious market consulting firm thinks public cloud operations may be managed by a separate IT organization, created specifically to reside outside of the existing “Old IT” shop.

Leading-Edge Cloud Research and Industry Analyst View from McKinsey & Company:

“Current IT, as we know it, is no longer a game-changer,” said Mr. Forrest of McKinsey.  In fact, “spending on IT is not a differentiator anymore and it doesn’t correlate with business success,” he added.  Much of the available improvement made possible by traditional IT has already been achieved.  And IT use cases have reached diminishing marginal returns: significant increases in productivity or financial savings are unlikely for most.  Probably the greatest contribution IT can make today is to trim budgets to the minimum levels within a given market segment.  According to McKinsey, the highest IT priority for most companies should be to move IT spend to the industry average (rather than overspend on IT).

As a result, thought leaders in the technology world are advocating a rethink of enterprise and corporate IT.  Cloud is seen as a key lever to decrease IT costs and reach the industry average.  McKinsey’s emphasis on using the cloud for cost reduction is in sharp contrast to the results of Everest Group’s Enterprise Cloud Adoption survey, which found that flexibility and agility were much more important (see Cloud Connect Part II article).

McKinsey sees significant disruption in many business models.  They say that CEOs recognize that future revenue growth will come from new business models.  Furthermore, economic conditions are changing, demanding business model transformation.

“New IT” is rising to fill the place of “Current IT,” according to McKinsey.  The “New IT” drives business model transformation, team and corporate productivity growth and digital-only products.

Examples of companies pursuing the “New IT” are:  Amazon transforming e-retail by driving customer preference and share of wallet gains (Amazon is the market leader among online retailers in average order size, driven by “push” sales), Deloitte teams using Yammer to collaborate and Google offering digital products (AdWords and AdSense deliver data-driven, custom advertisements, resulting in $36B of annual revenues for Google).

Image suggesting that CEOs are hoping to see improvements due to the cloud beyond just those received by having more efficient IT.
Image courtesy of McKinsey & Company

CEOs are hoping to see improvements from cloud beyond current IT cost reductions, such as increased business flexibility and the ability for IT to scale up (or shrink) to meet business needs.  These expectations for cloud computing are shown in the figure above.

CEOs really don’t believe their current IT organizations can implement the “New IT.” They’re suggesting public cloud computing for the “New IT” infrastructure and may create a separate, but parallel IT organization to manage public cloud operations.

In summary, Forrest said that “Old IT” expects cloud computing to achieve incremental cost reductions within the context of established business practices, while CEOs are looking at public cloud to create new business offerings that are flexible, agile, and scalable.

McKinsey’s Kara Sprague stated that a survey will soon be launched to determine the effect of cloud computing on SMB customers.  “Hardware OEMs are increasingly turning to service partners to access the customers, at the same time that independent software vendors are using the SaaS model to go to the customer directly. This is bad news for VARs, integrators and distributors, many of whom are trying to either become cloud service providers themselves or move into a cloud brokerage model,” said Ms. Sprague.


In a panel titled, “Disruptive Tools and Technologies,” Scott Bils of Everest Group and Randy Bias, CTO of Cloud Scaling detailed a laundry list of disruptions brought on by cloud computing.  Those included:

  • Public cloud is creating a “shadow IT” organization focused on achieving business agility, flexibility and dramatic time-to-market compression.  “Business users stand to gain significantly by evaluating public cloud options for ‘spiky’ workloads, such as development/test environments, or for non mission-critical workloads,” said Mr. Bils.
  • Open Source Software is causing a redesign of cloud resident data centers (e.g. using OpenStack or CloudStack). It enables an organization to move faster, reduces vendor lock-in and risk, and eliminates licensing fees.  But it dramatically increases reliance upon the community maintaining or improving the open source code.
  • Innovation in Hardware Design, e.g. ARM processors and solid state drives in cloud resident servers, Taiwanese Original Design Manufacturers (ODMs) selling direct to IT enterprise customers.
  • Building a private or hybrid cloud requires building a “net new infrastructure,” according to Mr. Bias.  It should be able to scale up or down, based on workload demand.
  • Software Defined Networking (SDN) is a huge potential disruptor, especially in data center network architecture.  However there are several important questions that have not been answered:  What is it really?  Why is it important? And is it ready for prime time?
  • It was agreed that existing network infrastructure (e.g. IP-MPLS VPNs or private line) “is not going to disappear,” especially for cloud access.  That’s due to its ability to achieve: QoS, bandwidth guarantees, low latency, multi-cast, stability and connectivity.  Therefore, SDN will need to work with that existing network architecture, perhaps as an overlay or adjunct.

In a session titled, “SDN is Here to Stay- Now What?”  PLUMGrid CTO Pere Monclus talked about SDN as a new virtual network infrastructure.  “As a way of simplifying operations and enabling a solution view of the networking space, SDN brings the additional value needed in cloud and datacenter environments to complement current hardware trends,” he said.  PLUMGrid believes that SDN, rather than traditional switches and routers, is the glue that will hold the new network together.

SDN is the layer that decouples virtual data centers from physical data centers.  It must be extensible, in both the data and control planes, as a platform to deliver better network functionality.  Those functions include: multi-tenancy, self service, virtual topologies, faster provisioning, and “Network as a Service.”  When deployed, SDN will result in operational simplicity, capital efficiency, and an elastic, on-demand, self service network.  However, there are many real problems to be solved before that vision can be realized.

Image depicting architecture gridlock to platform ecosystem.
Image Courtesy of PLUMgrid

The functional SDN block diagram on the right was said to transform the current network architecture “gridlock” into an “SDN Platform ecosystem,” while facilitating innovation in both the control and data planes.

……………………………………………………………………………………………………………

On that note, we conclude our three part coverage of the information packed Cloud Connect 2013 conference.  Next week we’ll be attending the Open Networking Summit – the happening of the year for SDN techies and aficionados (this author is NOT one of them). We will be reporting on what we learn to Viodi View readers.

Till next time…….

References:

http://www.cio.com/article/731525/What_Cloud_Computing_Means_For_the_Future_of_IT_Organizations?page=1&taxonomyId=3024

http://www.crn.com/news/cloud/240152247/cloud-connect-the-cloud-threatens-the-smb-channel.htm