IPv6 – Is There a Better Way?

Editor’s Note:

Cyber threats, the Internet of Things, privacy and Internet freedom are often front-page news and at the forefront of public consciousness. At the same time, IPv6, begun almost 20 years ago and promoted as the answer to these very issues, began to gain traction in 2014 (e.g., IPv6 traffic to Google doubled from 2.5% to 5% of total traffic). But will IPv6 live up to its promise, and is it even necessary?

This is the question that Abraham Chen, MIT graduate and Avinta CTO, asked late last year after observing parallels between seemingly disparate technologies. His query led to several months of research, refinement and peer evaluation of an idea for extending the existing IPv4 protocol to accommodate the explosion of “things” in the so-called Internet of Things. The following is his abstract of a longer paper that delves into the question.


Preface:

This paper proposes tweaks to the existing IPv4 protocol to achieve the same goals as IPv6 with less costly infrastructure upgrades and less burden on IT staff, while providing a simpler approach to privacy and to supporting the explosion of devices enabled by the Internet of Things. The study also uncovered certain philosophical disparities between the Internet and telephony industries. It appears that Internet performance could be significantly improved if some of the latter’s experience were applied.

The following is an excerpt of the report:

Abstract

As soon as the Internet became popular, concerns spread that its assignable IPv4 address pool (2^32, about 4.3 billion addresses) would be exhausted before long. Even with two companion technologies, NAT (Network Address Translation) and DHCP (Dynamic Host Configuration Protocol), the pressure continued to build. IPv6 was thus developed and put into use. It turns out that IPv6 is neither a superset of IPv4 nor capable of encapsulating it, so the two systems have run side by side.

The main motivation for IPv6 commonly conveyed to the public is to create an address pool big enough for the upcoming IoT (Internet of Things), which will exceed IPv4’s capacity. The publicly available literature, however, has not been clear about the number of IoT devices. A recent Cisco online paper provides the most up-to-date forecast: by 2020 the worldwide population will be 7.6 billion, while 50 billion IoT devices will be in use, an average of 6.58 per person. These figures provide a good baseline for quantitative analysis.

Mimicking the way a PABX (Private Automatic Branch eXchange) extends the PSTN (Public Switched Telephone Network) numbering plan, a scheme is proposed that reclaims part of the well-known, re-usable private network address block 192.168.0.0/16 to relieve the IPv4 pool shortage. By redefining the boundary between public and private in the address space, the assignable public IPv4 addresses may be extended (by a multiplication factor of 256) to cover the projected IoT devices. In fact, such an extended pool is so large (roughly 1,100 billion addresses) that only 1/16th of the original IPv4 public address space is sufficient to start with, freeing up the remaining 15/16ths of the pool for future applications.
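
The arithmetic behind these figures can be checked directly. Below is a minimal Python sketch, assuming (as the abstract states) that each public IPv4 address gains an 8-bit extension reclaimed from the 192.168.0.0/16 block:

    # Back-of-the-envelope check of the pool sizes quoted above.
    IPV4_POOL = 2 ** 32                 # ~4.3 billion total IPv4 addresses
    EXTENSION_BITS = 8                  # the "multiplication factor of 256"
    EXTENDED_POOL = IPV4_POOL * 2 ** EXTENSION_BITS

    print(f"IPv4 pool:       {IPV4_POOL:,}")        # 4,294,967,296
    print(f"Extended pool:   {EXTENDED_POOL:,}")    # ~1.1 trillion

    # Only 1/16th of the original public space, once extended, already
    # exceeds Cisco's 50-billion-IoT forecast for 2020:
    one_sixteenth_extended = (IPV4_POOL // 16) * 2 ** EXTENSION_BITS
    print(f"1/16th extended: {one_sixteenth_extended:,}")   # ~68.7 billion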

The figure below depicts the proposed ExIP address assignment architecture:

A diagram of what it would take to extend IPv4, as an alternative to IPv6.
Image courtesy of Abraham Chen, Avinta.

Implementing this Extended IPv4 (ExIP) address scheme consists of:

  1. Adding a new layer of simple (Semi-Public) routers to extend the Internet routing. These routers could be co-located with the existing Internet edge routers, or even be absorbed into them through software enhancement.

  2. As to encoding the proposed ExIP information in IP packets, a recent IETF (Internet Engineering Task Force) draft document called EnIP (Enhanced IPv4) utilizes the existing IP Options field to carry two IPv4 addresses (64 bits in total) in the IP Header. In comparison, the ExIP format needs only 40 bits to fully identify a public entity on the Internet (see the sketch after this list).

  3. On each customer premises, the capacity demand on the RG (Residential Gateway) will be correspondingly reduced, while the DMZ (De-Militarized Zone) may be used together with NAT to accomplish optional, selective end-to-end connectivity. This is analogous to the AA (Auto Attendant) capability of a PABX.
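
To make the 40-bit identifier of item 2 concrete, here is a minimal Python sketch. The field layout (public IPv4 address in the high 32 bits, 8-bit extension in the low bits) is our illustrative assumption; the abstract only states that 40 bits suffice to identify a public entity:

    import ipaddress

    def pack_exip(public_ipv4: str, extension: int) -> int:
        """Pack a public IPv4 address plus an 8-bit extension into a 40-bit value.
        The layout (public address in the high 32 bits) is an illustrative
        assumption, not a format defined by the ExIP proposal."""
        if not 0 <= extension <= 0xFF:
            raise ValueError("extension must fit in 8 bits")
        return (int(ipaddress.IPv4Address(public_ipv4)) << 8) | extension

    def unpack_exip(exip: int) -> tuple[str, int]:
        """Recover the public IPv4 address and the 8-bit extension."""
        return str(ipaddress.IPv4Address(exip >> 8)), exip & 0xFF

    value = pack_exip("203.0.113.7", 42)
    print(hex(value))            # 40-bit identifier
    print(unpack_exip(value))    # ('203.0.113.7', 42)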

Although IPv6’s direct end-to-end connectivity is enticing, it removes the basic buffer against intruders offered by IPv4-based practices. A close analogy may be drawn between telephony’s CENTREX (CENTRal office EXchange) and the PABX. A telephone station on the former is directly reachable from any PSTN telephone and thus has no defense against unwanted/telemarketer calls. The latter is slower in setting up an incoming call because of the AA process, but allows only welcome callers to get through.

Once the above analogies between Internet and PSTN are established, several subtle issues become evident through the parallelism between the two:

A. IP address assignment practice is counterproductive to the advertised Internet intention.

Contrary to common perception, PSTN numbers are not controlled by a few regulated telephone operating companies, but by the respective governmental agencies. Internet IP addresses, on the other hand, are assigned by ISPs (Internet Service Providers). The latter approach ties IP addresses to many unregulated business entities, and consumers have no place to report the frequent unpleasant experiences that result. This will become an even more serious issue with the extensive use of IPv6, because to benefit from it, the assignment will be not only static but also permanent.

B. Locality information in device identification facilitates connection as well as locating perpetrator.

PSTN phone numbers carry significant locality information about the telephone equipment in use, enabling the switching system not only to establish a connection efficiently but also to pinpoint the origin of a call promptly to within a finite area. IP addresses, on the other hand, being grouped under respective ISPs, carry hardly any locality information, making routing less efficient. Compounded by the extensive use of DHCP, locating an Internet hacker becomes a real challenge. If IP address assignment followed the same practice as the PSTN, locating an Internet hacker would be a finite task. Even if the hacker used spoofed addresses, the governing backbone routers would spot the exception immediately, preventing the associated packets from entering the Internet.
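
The spoofing argument above amounts to ingress filtering: if a region's backbone router knows which prefixes it legitimately originates, any packet whose source address falls outside those prefixes can be dropped at the edge (cf. BCP 38). A minimal Python sketch, with prefixes invented purely for illustration:

    import ipaddress

    # Hypothetical prefixes assigned to one region's ingress router.
    REGION_PREFIXES = [
        ipaddress.ip_network("198.51.100.0/24"),
        ipaddress.ip_network("203.0.113.0/24"),
    ]

    def accept_packet(source_ip: str) -> bool:
        """Admit a packet only if its source address belongs to a prefix
        that this ingress router is authorized to originate."""
        src = ipaddress.ip_address(source_ip)
        return any(src in prefix for prefix in REGION_PREFIXES)

    print(accept_packet("203.0.113.25"))   # True  - legitimate local source
    print(accept_packet("192.0.2.99"))     # False - spoofed / out-of-region source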

C. Direct addressing invades personal privacy, while exposing terminal devices to attacks.

The Extended IPv4 addressing scheme, using NAT and the DMZ to achieve end-to-end connectivity, maintains a buffer mechanism that gives shared proxy security devices the chance to work. It is not clear why IPv6, which requires individualized security software in every IoT device, should perform better.

D. Divide and Conquer is the fundamental rule of a large system.

Both the existing and the Extended IPv4 addressing schemes shield private-network IoT devices from the public Internet. They conform to the same demarcation-line concept that has served well for all four existing utilities (water, gas, electricity and telephony). By encompassing all IoT devices within the publicly addressable space for the sake of end-to-end connectivity, IPv6 will make the entire Internet less robust, more difficult to troubleshoot and harder to defend against intrusion, simply because the system becomes overly complex through the presence of a huge number of IoT devices that have nothing to do with the system’s performance except to introduce distractions. Why should the demarcation concept not apply to the Internet?

E. Root Cause vs. Manifestations

In summary, we believe that taking a hard look beneath the many symptomatic issues of the Internet to get to their root causes is what is required at this stage of its development. We also strongly believe that lessons learned from over a century of experience in PSTN can be gainfully applied to assist in laying the foundation for a robust Internet.

For detailed analysis, please see a full document at

http://www.avinta.com/phoenix-1/home/IPv6Myth&InternetVsPSTN.pdf

Abraham Y. Chen

V.P. Engineering

Avinta Communications, Inc.

Milpitas, CA 95035-4401 USA

SDN and NFV Takeaways from Light Reading's Network Components conference in Santa Clara

Introduction:

For years, we’ve been reading and hearing about the never-ending boom in data traffic, the need for fast provisioning, agile networks, service velocity (quicker time to market for new services) for telcos, etc.  It’s been like a non-stop siren call to battle for network operators.  Yet little has been done to date to remedy the situation.

At the Nov 6, 2014 Light Reading Next Gen Network Components conference in Santa Clara, CA, Heavy Reading analyst Simon Stanley echoed a familiar solution. He said that SDN and NFV will permit network equipment vendors and telecom carriers/ISPs to keep up with rising traffic demand, reduce OPEX, and create more flexible networks.

Standard platforms form the foundation for network hardware, Simon maintains. “Above that you’ve got a virtualization layer that essentially applies virtual resources,” Stanley said. “Instead of accessing real compute, storage and networking, these resources have become virtualized+. That gives you significant flexibility,” he added.

+ Note: The above remark assumes that the version of SDN chosen is the “overlay model,” which virtualizes or overlays the physical network.  The “classical version of SDN,” adopted by the Open Networking Foundation (ONF), doesn’t do that at all. Instead, it proposes a centralized SDN Controller that implements the Control Plane (i.e., calculates end-to-end paths) for many Data Planes, often called “data/packet/frame forwarding engines,” which run on commodity hardware called “bare metal switches.” There is no overlay or network virtualization in classical SDN.
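
To make the classical-SDN division of labor concrete, here is a minimal Python sketch (topology and rule format invented for illustration): the centralized controller computes an end-to-end path over a small graph and emits per-switch match/action entries, while the data-plane switches themselves only match and forward:

    from collections import deque

    # Toy topology: switch -> set of neighboring switches (invented for illustration).
    TOPOLOGY = {
        "s1": {"s2", "s3"},
        "s2": {"s1", "s4"},
        "s3": {"s1", "s4"},
        "s4": {"s2", "s3"},
    }

    def compute_path(src: str, dst: str) -> list[str]:
        """Controller-side shortest path (BFS) - the 'control plane' work."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in TOPOLOGY[path[-1]] - seen:
                seen.add(nxt)
                queue.append(path + [nxt])
        raise ValueError("no path")

    def install_flow_rules(path: list[str], match: dict) -> dict:
        """Per-switch match/action entries the controller would push down.
        Each data-plane switch just matches and forwards; it computes nothing."""
        return {hop: {"match": match, "action": f"output:{nxt}"}
                for hop, nxt in zip(path, path[1:])}

    path = compute_path("s1", "s4")
    rules = install_flow_rules(path, {"ipv4_dst": "10.0.0.4"})
    for switch, rule in rules.items():
        print(switch, rule)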


A representative from Advantech said that on 100 Gbps ports, virtual network functions do not scale well on commodity hardware, resulting in cost inefficiencies, wasted data center or central office space, and excessive energy consumption.  Since no single server can efficiently handle the millions of flows embedded in a single 100GbE pipe, distributing network traffic will become an issue in a virtual infrastructure.

At the conference, Advantech announced a new 100GigE hub blade, which switches traffic between two external 100GigE CFP2 ports, up to eighteen external 10GigE SFP+ ports and twelve 40G node slots on an ATCA backplane.

An Expert’s View of SDN and NFV:

Here’s how Orange’s Christos Kolias, PhD defines SDN and NFV:

  • SDN: Abstraction of the control plane from the data plane. Key benefits: separation of control from data forwarding functionality, network programmability, network virtualization, and intelligent flow management.
  • NFV: Abstraction of network functions from (dedicated) hardware. Key benefits: elasticity, agility, scalability, versatility, and savings on CAPEX/OPEX.  The NFV concept, according to Christos, is illustrated in the figure below.

 

The NFV concept. Image from Christos Kolias’ presentation.

NFV Management and Orchestration:

“NFV is all about how you can manage and orchestrate all these new virtualized appliances,” Kolias said at the conference. Yet the ETSI NFV Industry Specifications Group (ISG), which Kolias co-founded and participates in, hasn’t yet specified what that management and orchestration should be or the APIs that interface to such software entities.  Christos says, “It is important for the NFV community to agree on some Management & Orchestration (MANO) specification with emphasis on the interfaces and the APIs.”  Christos thinks “OpenStack is a good open-source alternative for the MANO.”


SIDEBAR:  In a presentation at an IETF meeting, Mehmet Ersue, ETSI NFV MANO WG Co-chair, provided the following examples of Virtual Network Functions (VNFs) that might require MANO:

  • Switching: Broadband Network Gateway (BNG), Carrier Grade-Network Address Translation (NAT), IP routers.
  • Mobile network nodes: HLR/HSS, MME, SGSN, GGSN/PDN-Gateway, RNC.
  • Home routers and set top boxes.
  • Tunneling gateway elements (e.g. VxLAN).
  • Traffic analysis: Deep Packet Inspection (DPI).
  • Signalling: Session Border Controllers (SBCs), IP Multimedia Subsystem (IMS).
  • Network-wide functions: AAA servers, Policy control.
  • Application-level optimization: CDNs, Load Balancers.
  • Security functions: Firewalls, intrusion detection systems

Importance of NFV Service Chaining:

Many industry analysts think that service chaining* (the sequencing of multiple virtual appliance-based services) is the key to automating an NFV-based network. How will that be achieved?  So far, there’s no standard or specification for that functionality.  Christos doesn’t think OpenStack is a solution for that.  What is?

* Note: Christos prefers the term “service composition and insertion,” which seems a more accurate description than “service chaining.”
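
Whatever the eventual standard, the idea behind service composition is easy to state in code. Below is a minimal Python sketch with the VNFs reduced to placeholder callables; the point is that the chain is just data, so an orchestrator can reorder or insert functions without touching the VNFs themselves:

    from typing import Callable, Iterable

    Packet = dict                      # placeholder packet representation
    VNF = Callable[[Packet], Packet]

    def firewall(pkt: Packet) -> Packet:
        pkt.setdefault("tags", []).append("firewall-ok")
        return pkt

    def dpi(pkt: Packet) -> Packet:
        pkt.setdefault("tags", []).append("dpi-ok")
        return pkt

    def load_balancer(pkt: Packet) -> Packet:
        pkt["backend"] = "vm-2"        # pretend backend selection
        return pkt

    def apply_chain(pkt: Packet, chain: Iterable[VNF]) -> Packet:
        """Steer a packet through an ordered chain of virtual network functions."""
        for vnf in chain:
            pkt = vnf(pkt)
        return pkt

    service_chain = [firewall, dpi, load_balancer]
    print(apply_chain({"src": "10.0.0.1", "dst": "192.0.2.8"}, service_chain))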

What Type of Special Hardware is Needed for NFV?

Another key question is what new or different hardware is needed for NFV “virtual appliances,” which will run on generic compute servers (likely built by ODMs) with commoditized hardware. At the conference, Christos told me he thinks some type of hardware assist will be necessary, but he didn’t specify what functions would be implemented in hardware or where they might be located.  That was later clarified in an email discussion, summarized below.

Several industry participants (e.g. Microsoft) think the compute server’s NIC(s) should be augmented to include hardware assist functions like protocol encapsulation/de-encapsulation, encoding/decoding, deep packet inspection, protocol conversion (if necessary), and some security related functions.  Christos says these should not be too “compute intensive.”

“Off-loading processing to the NIC cards could include things related to packet processing (encapsulation, encoding/decoding and may be some security-related functions) – in general not compute intensive,” he added.

Kolias believes that “Data (packet forwarding) plane acceleration could be handled by hardware acceleration,” although exactly what that hardware actually does remains to be seen. It’s important to note that in SDN the Data plane is NOT implemented in a compute server, but rather in a “bare metal switch” built from commodity hardware/SoCs and other off-the-shelf silicon.

The drivers, challenges, and potential applications for NFV are illustrated below (from Kolias’ presentation):

NFV drivers, challenges, and potential applications. Image from Christos Kolias’ presentation.

In addition to the challenges listed in the above figure, another huge concern for NFV implementations will be security. When individual physical appliances are replaced by virtual appliances running on a compute server, the attack surface for threats and malware increases dramatically. How will that be dealt with, and how will security functions be partitioned between software and hardware?  No one seems to be worried about this now, despite increased cyber attacks in recent months and years.

The Myth of NFV Compliance:

This author and Christos wonder how ANY vendor can claim to be “NFV compliant” when there are no NFV standards/specifications in place and no testing/interoperability facility to provide the certification of compliance.   Yet those false claims have been the norm for over two years!

This author believes that without solid NFV standards/specifications AND multiple vendors passing some certification/compliance test there will be no interoperability, which defeats the purpose of all the work the ETSI NFV ISG has done to date in producing architecture reference models, functional requirements, proof of concepts, etc.

An Open Platform for NFV:

Perhaps a step in the right direction is the formation of the Open Platform for NFV (OPNFV), a Linux Foundation collaborative project. Here are the stated OPNFV Project Goals:

  • Develop an integrated and tested open source platform that can be used to build NFV functionality, accelerating the introduction of new products and services.
  • Include participation of leading end users to validate that OPNFV meets the needs of the user community.
  • Contribute to and participate in relevant open source projects that will be leveraged in the OPNFV platform; ensure consistency, performance and interoperability among open source components.
  • Establish an ecosystem for NFV solutions based on open standards and software.
  • Promote OPNFV as the preferred open reference platform.

We’ll be watching this industry initiative closely and reveal what we learn in subsequent articles covering NFV.

Yet we wonder where innovation will come from if the new network paradigm is to use open source software running on commoditized/open hardware. Where’s the value add or competitive differentiation between vendors and their “open” SDN/NFV products?

Intersection of SDN and NFV: 

The figure below depicts how Christos believes SDN and NFV might work together to achieve maximum benefit for the network operator and its customers.

The intersection of SDN and NFV. Image from Christos Kolias’ presentation.

In summary, Christos says that both NFV and SDN enable the “softwarization” of the network. Like so many others, he says that software is king and is eating everything else.  Acknowledging security threats, he cautions: “beware of bugs and hackers!”

AT&T's "SDN-WAN" as the Network to Access & Deliver Cloud Services

Introduction:

For several years, we’ve wondered why there were so many alternative WANs used and proposed to access and deliver cloud computing and storage services (IaaS, PaaS, SaaS, etc.) for public, private, and hybrid clouds. The list includes: the public Internet (best effort), IP MPLS VPN, other types of IP VPNs, Carrier Ethernet for Private Cloud (MEF spec), dedicated private lines to a Cloud Service Provider (CSP) data center/platform/point of presence, etc.

AT&T is attempting to position its “SDN WAN”-enhanced IP-MPLS VPN as the unified WAN solution for cloud services provided by its partners.  At IT Roadmap in San José, CA on Sept 17, 2014, Randall Davis announced that AT&T is partnering with several CSPs to use its enhanced IP-MPLS VPN WAN to enable end users to access a variety of cloud services. The impressive list of CSPs includes Microsoft (Windows Azure), HP, IBM, Salesforce.com, Box, and CSC, which bestows credibility and confidence in AT&T’s cloud networking approach.

Network Enabled Cloud Solutions via AT&T NetBond:

Mr. Davis stated that AT&T spends ~$20B per year on CAPEX/OPEX to maintain and improve its wireless and wire-line networks. Instead of discrete network functions and equipment associated with individual services running on disparate subnetworks, AT&T’s goal is to consolidate all services to be delivered to customers onto a software based, programmable, cloud like “SDN WAN” which uses their own intellectual property (see below).

AT&T’s vision of a network enabled cloud. Image courtesy of AT&T.

“The User Defined Network Cloud is AT&T’s vision for the network of the future,” Davis stated. “Our goal is to provide a set of services delivered from a single cloud-like network. AT&T is tapping into the latest technologies, open source projects and open network principles to make that happen,” he said.

“It’s a fundamentally new way to build a smart ‘cloud-like network’ that addresses the many concerns of end users about the network being the bottleneck in the delivery of cloud services.” Indeed, barriers to moving enterprise workloads to the cloud often involve the WAN. For example, how can the network address cloud integration complexity, a warehouse of telecom circuits, security, reliability/availability, and compliance issues?

AT&T’s “network enabled cloud,” called NetBond, allows customers to extend their existing MPLS Virtual Private Network (VPN) to the CSP’s platform for the delivery of business/enterprise applications through fast and highly secure connectivity.  AT&T says it is driving the network enabled ecosystem and working with leading CSPs such as Microsoft, Salesforce.com, HP, IBM, CSC and Equinix.

Positioned between the enterprise customer premises and the CSP’s platform/point of presence, AT&T’s NetBond provides a highly flexible and simple way for AT&T customers to use their enterprise VPNs to connect to a cloud computing or IT service environment in AT&T’s (growing) cloud partner ecosystem.  This solution bypasses the public Internet entirely, thereby providing secure and seamless access to the CSPs’ applications and data storage.

AT&T’s NetBond enables the end customer to integrate cloud services within its enterprise-wide IP-MPLS VPN (from AT&T, of course).  It does so by extending the MPLS VPN to the CSP’s compute/storage platform, thereby isolating traffic from other customers’ traffic and creating a private network connection. As a result, there’s no need for a separate IP VPN to/from the CSP.

The solution is designed around the following key areas:

  1. Flexibility. Network bandwidth is optimized for your workloads and fully scalable
  2. Network Security and isolation. Intelligent routing directs traffic to logically separated customer environments on shared physical infrastructure.
  3. Availability and performance. The solution is built on a stable, robust and scalable technology platform resulting in up to 50% lower latency and up to 3X availability.
  4. Automation and control. The solution uses automation and a self-service portal to activate service changes in hours versus weeks.

NetBond permits both the network and cloud infrastructure to scale or contract in tandem and on demand, rapidly accommodating workload changes. It seems to be well suited for customers who want to avoid exposure to the public Internet and the risk of DDoS attacks, as well as have a highly available, high-performance connection to their cloud resources. Davis said that “NetBond provides a scalable, highly secure, high performance, and integrated WAN solution” for access to the cloud.

Other benefits were identified:

  • Private IP address space avoids DDoS attacks
  • API controlled pre-integrated survivable network infrastructure
  • Elasticity with dynamic traffic bursting (above some pre-defined threshold)
  • AT&T sells baseline units of traffic capacity with most bursting covered
  • Bursting overages at the “95th percentile” incur an extra charge (see the sketch after this list)
  • Any to any instant on connectivity (zero provisioning time to reach CSP that’s partnered with AT&T)
  • Improved legacy application performance and increased throughput
  • Privacy from separation of data and control planes
  • Better availability due to simplicity of operations
  • Bursting capability eliminates gaps and gluts
  • Cost model aligns with cloud usage
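
As a rough illustration of how 95th-percentile overage billing is conventionally computed (the sample data and the 100 Mbps commitment below are invented; AT&T’s actual billing rules are not public at this level of detail), here is a minimal Python sketch:

    import math
    import random

    # Five-minute utilization samples for a month, in Mbps (invented data).
    random.seed(1)
    samples = [random.uniform(40, 120) for _ in range(30 * 24 * 12)]

    def percentile_95(values):
        """Conventional '95th percentile' billing: sort the samples and read the
        value below which 95% of them fall; the top 5% of bursts are ignored."""
        ordered = sorted(values)
        index = math.ceil(0.95 * len(ordered)) - 1
        return ordered[index]

    committed_mbps = 100                      # hypothetical baseline commitment
    billable = percentile_95(samples)
    overage = max(0.0, billable - committed_mbps)
    print(f"95th percentile: {billable:.1f} Mbps, overage: {overage:.1f} Mbps")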

AT&T NetBond Backgrounder:

AT&T’s website states: NetBond provides benefits of a private network with the flexibility of cloud. With NetBond, security, performance and control are no longer barriers to moving enterprise applications and data to the cloud.

“NetBond uses patented technology that uses Software Defined Network (SDN) capabilities, providing traffic routing flexibility and integration of VPN to cloud service providers. With AT&T NetBond, customers can expect up to 50% lower latency and up to 3x availability. In addition, network connectivity can be scaled up or down with the cloud resources resulting in bursting of up to 10 times your contracted commitment. From a security perspective, AT&T NetBond isolates traffic from the Internet and from other cloud traffic reducing exposure to risks and attacks such as DDoS.”

“AT&T VPN customers can create highly-secure, private and reliable connectivity to cloud services in minutes without additional infrastructure investments and long-term contract commitments. We also enable end to end integration with cloud service providers resulting in a common customer experience regardless of the cloud platform.”

“Because it can reduce over-provisioning, AT&T NetBond can result in savings of as much as 60% on networking costs compared to internet based alternatives. Also, customers experience true flexibility in that they only pay for what they have ordered and are able to change their billing plan at any time to reflect usage.”

For more on the technology used for AT&T’s IP MPLS VPN see this white paper:

What’s the Control Mechanism for NetBond?

AT&T uses its own version of SDN WAN with “APIs to expose control mechanisms used to order (provision) and manipulate network services.” AT&T’s SDN WAN is based on proprietary intellectual property the company refers to as the “Intelligent Route Service Control Processor (IRSCP).” That technology is used to dynamically change the routing (end-to-end paths) in the network in response to operational changes, new customers, and more or less traffic, and to automatically recover from failed network nodes or links. Davis said that AT&T’s suppliers are using the company’s version of SDN WAN in “novel ways.” AT&T is also using open source software wherever possible, he said (we assume that means in its suppliers’ equipment and possibly in its network management/OSS software).
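
AT&T has not published the API itself, so the sketch below is entirely hypothetical: the endpoint, fields and authentication scheme are invented solely to show the general shape of an API-driven service change of the kind Davis described (ordering a new bandwidth profile instead of waiting on a manual provisioning cycle):

    import json
    from urllib import request

    # Entirely hypothetical endpoint and payload - NOT AT&T's actual NetBond API.
    PROVISIONING_URL = "https://api.example.net/v1/vpn-connections/12345/bandwidth"

    def request_bandwidth_change(mbps: int, token: str) -> None:
        """Sketch of an API-driven service change: instead of a manual circuit
        order, a portal or automation script submits a new bandwidth profile."""
        body = json.dumps({"committed_mbps": mbps}).encode()
        req = request.Request(
            PROVISIONING_URL,
            data=body,
            method="PUT",
            headers={"Content-Type": "application/json",
                     "Authorization": f"Bearer {token}"},
        )
        with request.urlopen(req) as resp:      # would only succeed against a real endpoint
            print(resp.status, resp.read().decode())

    # request_bandwidth_change(500, token="...")   # activate changes in hours, not weeks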

A quick web search indicates that AT&T has at least one patent on IRSCP. In 2006, AT&T Labs researchers published a paper titled, “Dynamic Connectivity Management with an Intelligent Route Service Control Point” in the Proceedings of the 2006 SIGCOMM Workshop on Internet Network Management.

Mobile Integration into Cloud Applications is Needed:

With more and more mobile apps on smartphones and tablets accessing cloud-based applications, it’s essential to provide a wireless network that solves both security and performance problems. Davis hinted that AT&T’s NetBond may be extended to include wireless access in the near future. The following benefits of doing so were enumerated:

  • Faster time to market for new mobile apps
  • Access to easier solutions which can be quickly configured (no explanation provided)
  • Simpler compliance
  • Improved performance
  • Better security

Author’s Notes:

  1. Mr. Davis referred to “Project Gamma” as an early example of AT&T’s Domain 2.0 architecture. It was said to be an example of “User Defined Network Cloud (UDNC)” in that it virtualizes Ethernet connectivity and routing to automate services delivered to AT&T customers. [No reference was given or could be found for Project Gamma.]
  2. On Sept 17, 2014 (the date of Mr. Davis’ IT Roadmap-SJ presentation), Light Reading reported that AT&T will bring its User-Defined Network to Austin businesses by the end of this year.

“This is really focused on wireline services, specifically, we’re starting with Ethernet… I would expect that we’ll look at wireless too,” says Josh Goodell, VP of Network on Demand at AT&T.

Businesses with the Network on Demand Ethernet service will be able to change some network services and modify upload and download speeds via a self-service portal. This will mean that services will be changed almost instantaneously, “rather than the previous method of modifying, installing or replacing hardware to make network changes,” AT&T notes.

Addendum:

On Sept 18, 2014, AT&T and IBM announced a strategic partnership: “AT&T Teams with IBM Cloud to Extend Highly Secure Private Network to Clients.”

AT&T NetBond services will be extended to IBM’s SoftLayer platform for stronger security and performance. This extension of the IBM and AT&T alliance will allow businesses to easily create hybrid-computing solutions. AT&T Virtual Private Network (VPN) customers can use AT&T NetBond to connect their IT infrastructure to IBM’s SoftLayer private network and cloud services. The service allows customers to benefit from highly secure connections with high reliability and performance as an alternative to relying on public Internet access.

“AT&T NetBond gives customers a broader range of options as they explore how to best leverage a hybrid cloud,” said Jim Comfort, general manager of IBM Cloud Services. “Customers can easily move workloads to and from SoftLayer as if it were part of their local area network. This added flexibility helps optimize workload performance while allowing customers to scale IT resources in a way that makes sense.”

“Businesses look to AT&T and IBM to deliver best in class solutions to meet their most demanding needs— especially when it comes to cloud,” said Jon Summers, senior vice president growth platforms, AT&T Business Solutions. “Together, we’re making the network as flexible as the cloud and giving enterprises confidence they can migrate their business systems to the cloud and still meet their security, scalability and performance requirements.”

End NOTE:  We will update this article if and when we receive a figure from AT&T that illustrates NetBond.  Stay tuned!

 

2014 Hot Interconnects Semiconductor Session Highlights & Takeaways- Part I.

Introduction:

With the Software Defined Networking (SDN), Storage and Data Center movements firmly entrenched, one might believe there’s not much opportunity for innovation in dedicated hardware implemented in silicon.  Several sessions at the 2014 Hot Interconnects conference, especially one from ARM Ltd., indicated that was not the case at all.

With the strong push for open networks, chips have to be much more flexible and agile, as well as more powerful, faster and functionally denser. Of course, there are well-known players for specific types of silicon. For example: Broadcom for switch/router silicon; ARM for CPU cores (along with Intel and MIPS/Imagination Technologies); many vendors for Systems on a Chip (SoCs), which include one or more CPU cores, mostly from ARM (Qualcomm, Nvidia, Freescale, etc.); Network Processors (Cavium, LSI-Avago/Intel, PMC-Sierra, EZchip, Netronome, Marvell, etc.); and bus interconnect fabrics (Arteris, Mellanox, PLX/Avago, etc.).

What’s not known is how these types of components, especially SoCs, will evolve to support open networking and software defined networking in telecom equipment (i.e., SDN/NFV).  Some suggestions were made during presentations and a panel session at this year’s excellent Hot Interconnects conference.

We summarize three invited Hot Interconnects presentations related to network silicon in this article. Our follow on Part II article will cover network hardware for SDN/NFV based on an Infonetics presentation and service provider market survey.

  1. Data & Control Plane Interconnect Solutions for SDN & NFV Networks, by Raghu Kondapalli, Director of Strategic Planning at LSI/Avago (Invited Talk)

Open networking, such as SDN (Software Defined Networking) and NFV (Network Function Virtualization), provides software control of many network functions.  NFV enables virtualization of entire classes of network element functions such that they become modular building blocks that may be connected, or chained, together to create a variety of communication services.

Software defined and functionally disaggregated network elements rely heavily on deterministic and secure data and control plane communication within and across the network elements. In these environments the scalability, reliability and performance of the whole network rely heavily on the deterministic behavior of this interconnect.  Increasing network agility and lower equipment prices are causing severe disruption in the networking industry.

A key SDN/NFV implementation issue is how to disaggregate network functions in a given network element (equipment type).  With such functions modularized, they could be implemented in different types of equipment along with dedicated functions (e.g., PHYs to connect to wire-line or wireless networks).  The equipment designer needs to disaggregate, virtualize, interconnect, orchestrate and manage such network functions.

“Functional coordination and control plane acceleration are the keys to successful SDN deployments,” he said.  Not coincidentally, the LSI/Avago Axxia multicore communication processor family (using an ARM CPU core) is being positioned for SDN and NFV acceleration, according to the company’s website. Other important points made by Raghu:

  • Scale poses many challenges for state management and traffic engineering
  • Traffic Management and Load Balancing are important functions
  • SDN/NFV backbone network components are needed
  • Disaggregated architectures will prevail.
  • Circuit board interconnection (backplane) should consider the traditional passive backplane vs. an active switch fabric.

The Axxia 5516 16-core communications processor was suggested as the SoC to use for an SDN/NFV backbone network interface.  Functions identified included: Ethernet switching, protocol pre-processing, packet classification (QoS), traffic rate shaping, encryption, security, Precision Time Protocol (IEEE 1588) to synchronize distributed clocks, etc.

Axxia’s multi-core SoCs were said to contain various programmable function accelerators to offer a scalable data and control plane solution.

Note:  Avago recently acquired semiconductor companies LSI Corp. and PLX Technology, but has now sold its Axxia Networking Business (originally from LSI, which had acquired Agere in 2007 for $4 billion) to Intel for only $650 million in cash.  Agere Systems (formerly AT&T Microelectronics, at one time the largest captive semiconductor maker in the U.S.) had a market capitalization of about $48 billion when it was spun off from Lucent Technologies in December 2000.

  2. Applicability of Open Flow based connectivity in NFV Enabled Networks, by Srinivasa Addepalli, Fellow and Chief Software Architect, Freescale (Invited Talk)

Mr. Addepalli’s presentation addressed the performance challenges in VMMs (Virtual Machine Monitors) and the opportunities to offload VMM packet processing using SoCs like those from Freescale (another ARM core-based SoC).  The VMM layer enables virtualization of networking hardware and exposes each virtual hardware element to VMs.

“Virtualization of network elements reduces operating and capital expenses and provides the ability for operators to offer new network services faster and to scale those services based on demand. Throughput, connection rate, low latency and low jitter are a few important challenges in the virtualization world. If not designed well, processing power requirements go up, thereby reducing the cost benefits,” according to Addepalli.

He positioned OpenFlow as a communication protocol between the control and offload layers, rather than the ONF’s API/protocol between the control and data planes (residing in the same or different equipment, respectively).  A new role for OpenFlow in VMM and vNF (Virtual Network Function) offloads was described and illustrated.
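
The offload role described above can be pictured with a small sketch: flow entries (written here as plain Python dictionaries, not any particular OpenFlow binding) tell the hardware which flows to take over, while unmatched traffic stays on the VMM’s software path:

    # Illustrative only: flow entries as plain dicts, not a real OpenFlow binding.
    OFFLOAD_TABLE = [
        {"match": {"ip_proto": 6, "tcp_dst": 443}, "action": "offload-to-nic"},
        {"match": {"ip_proto": 17, "udp_dst": 4789}, "action": "offload-vxlan-decap"},
    ]

    def classify(packet: dict) -> str:
        """Return the action for the first matching entry; default to the
        slower VMM software path when no offload rule matches."""
        for entry in OFFLOAD_TABLE:
            if all(packet.get(field) == value for field, value in entry["match"].items()):
                return entry["action"]
        return "vmm-software-path"

    print(classify({"ip_proto": 6, "tcp_dst": 443}))    # offload-to-nic
    print(classify({"ip_proto": 6, "tcp_dst": 22}))     # vmm-software-path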

The applicability of OpenFlow to NFV1 faces two challenges, according to Mr. Addepalli:

  1. VMM networking
  2. Virtual network data path to VMs

Note 1.  The ETSI NFV Industry Specification Group (ISG) is not considering the use of the ONF’s OpenFlow, or any other protocol, for NFV at this time.  Its work scope includes reference architectures and functional requirements, but not protocol/interface specifications.  The ETSI NFV ISG will reach the end of Phase 1 by December 2014, with the publication of the remaining sixteen deliverables.

“To be successful, NFV must address performance challenges, which can best be achieved with silicon solutions,” Srinivasa concluded.  [The problem with that statement is that the protocols/interfaces for fully standardized NFV have not been specified by ETSI or any other standards body.  Hence, no one knows the exact combination of NFV functions that have to perform well.]

  3. The Impact of ARM in Cloud and Networking Infrastructure, by Bob Monkman, Networking Segment Marketing Manager at ARM Ltd.

Bob revealed that ARM is innovating well beyond the CPU core it has been licensing for years.  There are hardware accelerators, a cache-coherent network and various types of network interconnects that have been combined into a single silicon block, shown in the figure below:

ARM is innovating beyond the core. Image courtesy of ARM.

Bob said something I thought was quite profound, and it dispels the notion that ARM is just a producer of low-power CPU core cells: “It’s not just about a low power processor – it’s what you put around it.”  As a result, ARM cores are being included in SoC vendors’ silicon for both networking and storage components. Those SoC companies, including LSI/Avago (Axxia) and Freescale (see above), can leverage their existing IP by adding their own cell designs for specialized networking hardware functions (identified at the end of this article in the Addendum).

Bob noted that the ARM ecosystem is conducive to the disruption now being experienced in the IT industry with software control of so many types of equipment.  The evolving network infrastructure (SDN, NFV and other open networking) is all about reducing total cost of ownership and enabling new services with smart and adaptable building blocks.  That’s depicted in the following illustration:

Evolving infrastructure is reducing costs and enabling new services.
Image courtesy of ARM.

Bob stated that one SoC size does not fit all.  For example, one type of SoC might contain a high-performance CPU, power management, premises networking, and storage & I/O building blocks, while one for SDN/NFV might include a high-performance CPU, power management, I/O including wide area networking interfaces, and specialized hardware networking functions.

Monkman articulated very well what most already know: networking and server equipment are often being combined in a single box (they’re “colliding,” he said).  [In many cases, compute servers are running network virtualization (e.g., VMware), acceleration, packet pre-processing, and/or control plane software (the SDN model).]  Flexible intelligence is required on an end-to-end basis for this to work out well.  The ARM business model was said to enable innovation and differentiation, especially since the ARM CPU core has reached the 64-bit “inflection point.”

ARM is working closely with the Linaro Networking and Enterprise Groups. Linaro is a non-profit industry group creating open source software that runs on ARM CPU cores.  Member companies fund Linaro and provide half of its engineering resources as assignees who work full time on Linaro projects. These assignees combined with over 100 of Linaro’s own engineers create a team of over 200 software developers.

Bob said that Linaro is creating optimized, open-source platform software for scalable infrastructure (server, network & storage).  It coordinates and multiplies members’ efforts while accelerating product time to market (TTM).  Linaro open source software enables ARM partners (licensees of ARM cores) to focus on innovation and differentiated value-add functionality in their SoC offerings.

Author’s Note:  The Linaro Networking Group (LNG) is an autonomous segment focused group that is responsible for engineering development in the networking space. The current mix of LNG engineering activities includes:

  • Virtualization support with considerations for real-time performance, I/O optimization, robustness and heterogeneous operating environments on multi-core SoCs.
  • Real-time operations and the Linux kernel optimizations for the control and data plane
  • Packet processing optimizations that maximize performance and minimize latency in data flows through the network.
  • Dealing with legacy software and mixed-endian issues prevalent in the networking space
  • Power Management
  • Data Plane Programming API

For more information: https://wiki.linaro.org/LNG


OpenDataPlane (ODP) http://www.opendataplane.org/ was described by Bob as a “truly cross-platform, truly open-source and open contribution interface.” From the ODP website:

ODP embraces and extends existing proprietary, optimized vendor-specific hardware blocks and software libraries to provide inter-operability with minimal overhead. Initially defined by members of the Linaro Networking Group (LNG), this project is open to contributions from all individuals and companies who share an interest in promoting a standard set of APIs to be used across the full range of network processor architectures available.

Author’s Note:  There’s a similar project from Intel called DPDK (Data Plane Development Kit) that an audience member referenced during Q&A. We wonder whether those APIs are viable alternatives to, or can be used in conjunction with, the ONF’s OpenFlow API.


Next Generation Virtual Network Software Platforms, along with network operator benefits, are illustrated in the following graphic:

An image depicting the Next-Gen virtualized network software platforms.
Image courtesy of ARM.

Bob Monkman’s Summary:

  • Driven by total cost of ownership, the data center workload shift is leading to  more optimized and diverse silicon solutions
  • Network infrastructure is also well suited for the same highly integrated, optimized and scalable solutions ARM’s SoC partners understand and are positioned to deliver
  • Collaborative business model supports “one size does not fit all approach,” rapid pace of innovation, choice and diversity
  • Software ecosystem (e.g. Linaro open source) is developing quickly to support target markets
  • ARM ecosystem is leveraging standards and open source software to accelerate deployment readiness

Addendum:

In a post-conference email exchange, I suggested several specific networking hardware functions that might be implemented in an SoC (with one or more ARM CPU cores).  Those include: encryption, packet classification, deep packet inspection, security functions, intra-chip or inter-card interfaces/fabrics, and fault & performance monitoring/error counters.

Bob replied: “Yes, security acceleration such as SSL operations; counters of various sorts -yes; less common on the fault notification and performance monitoring. A recent example is found in the Mingoa acquisition, see: http://www.microsemi.com/company/acquisitions ”

…………………………………………………………………….



End NOTE:  Stay tuned for Part II which will cover Infonetics’ Michael Howard’s presentation on Hardware and market trends for SDN/NFV.

2014 Hot Interconnects Highlight: Achieving Scale & Programmability in Google's Software Defined Data Center WAN

Introduction:

Amin Vahdat, PhD, Distinguished Engineer and Lead Network Architect at Google, delivered the opening keynote at 2014 Hot Interconnects, held August 26-27 in Mountain View, CA. His talk presented an overview of the design and architectural requirements for bringing Google’s shared infrastructure services to external customers with the Google Cloud Platform.

The wide area network underpins storage, distributed computing, and security in the Cloud, which is appealing for a variety of reasons:

  • On demand access to compute servers and storage
  • Easier operational model than premises based networks
  • Much greater up-time, i.e. five 9’s reliability; fast failure recovery without human intervention, etc
  • State of the art infrastructure services, e.g. DDoS prevention, load balancing, storage, complex event & stream processing, specialised data aggregation, etc
  • Different programming models unavailable elsewhere, e.g. low latency, massive IOPS, etc
  • New capabilities; not just delivering old/legacy applications cheaper

Andromeda- more than a galaxy in space:

Andromeda, Google’s code name for its managed virtual network infrastructure, is the enabler of Google’s cloud platform, which provides many services to simultaneous end users. Andromeda provides Google’s customers/end users with robust performance, low latency and security services that are as good as or better than those of private, premises-based networks. Google has long focused on shared infrastructure among multiple internal customers and services, and on delivering scalable, highly efficient services to a global population.

Google’s Andromeda Controller diagram. Image courtesy of Google.

“Google’s (network) infrastructure services run on a shared network,” Vahdat said. “They provide the illusion of individual customers/end users running their own network, with high-speed interconnections, their own IP address space and Virtual Machines (VMs),” he added.  [Google has been running shared infrastructure since at least 2002, and it has been the basis for many commonly used scalable open-source technologies.]

From Google’s blog:

“Andromeda’s goal is to expose the raw performance of the underlying network while simultaneously exposing network function virtualization (NFV). We expose the same in-network processing that enables our internal services to scale while remaining extensible and isolated to end users. This functionality includes distributed denial of service (DDoS) protection, transparent service load balancing, access control lists, and firewalls. We do this all while improving performance, with more enhancements coming.  Hence, Andromeda itself is not a Cloud Platform networking product; rather, it is the basis for delivering Cloud Platform networking services with high performance, availability, isolation, and security.”

Google uses its own versions of SDN and NFV to orchestrate provisioning, high availability, and to meet or exceed application performance requirements for Andromeda. The technology must be distributed throughout the network, which is only as strong as its weakest link, according to Amin.  “SDN” (Software Defined Networking) is the underlying mechanism for Andromeda. “It controls the entire hardware/software stack, QoS, latency, fault tolerance, etc.”

“SDN’s” fundamental premise is the separation of the control plane from the data plane; Google and everyone else agree on that, but not much else!  Amin said the role of “SDN” is overall coordination and orchestration of network functions. It permits independent evolution of the control and data planes. Functions identified as being under SDN supervision were the following:

  • High performance IT and network elements: NICs, packet processors, fabric switches, top of rack switches, software, storage, etc.
  • Audit correctness (of all network and compute functions performed)
  • Provisioning with end to end QoS and SLA’s
  • Ensuring high availability (and reliability)

“SDN” in Andromeda–Observations and Explanations:

“A logically centralized hierarchical control plane beats peer-to-peer (control plane) every time,” Amin said. Packet/frame forwarding in the data plane can run at network link speed, while the control plane can be implemented in commodity hardware (servers or bare metal switches), with scaling as needed. The control plane requires 1% of the overhead of the entire network, he added.

As expected, Vahdat did not reveal any of the APIs/protocols/interface specs that Google uses for its version of “SDN,” in particular the API between the control and data planes (Google has never endorsed the ONF-specified OpenFlow v1.3). He also didn’t detail how the logically centralized, but likely geographically distributed, control plane works.

Amin said that Google was making “extensive use of NFV (Network Function Virtualization) to virtualize SDN.” Andromeda NFV functions, illustrated in the above block diagram, include: Load balancing, DoS, ACLs, and VPN. New challenges for NFV include: fault isolation, security, DoS, virtual IP networks, mapping external services into name spaces and balanced virtual systems.

Managing the Andromeda infrastructure requires new tools and skills, Vahdat noted. “It turns out that running a hundred or a thousand servers is a very difficult operation. You can’t hire people out of college who know how to operate a hundred or a thousand servers,” Amin said. Tools are often designed for homogeneous environments and individual systems. Human reaction time is too slow to deliver “five nines” of uptime, maintenance outages are unacceptable, and the network becomes a bottleneck and source of outages.

Power and cooling are the major costs of a global data center and networking infrastructure like Google’s. “That’s true of even your laptop at home if you’re running it 24/7. At Google’s mammoth scale, that’s very apparent,” Vahdat said.

Applications require real-time high performance and low-latency communications to virtual machines. Google delivers those capabilities via its own Content Delivery Network (CDN).  Google uses the term “cluster networking” to describe huge switch/routers which are purpose-built out of cost efficient building blocks.

In addition to high performance and low latency, users may also require service chaining and load-balancing, along with extensibility (the capability to increase or reduce the number of servers available to applications as demand requires). Security is also a huge requirement. “Large companies are constantly under attack. It’s not a question of whether you’re under attack but how big is the attack,” Vahdat said.

[“Security will never be the same again. It’s a losing battle,” said Martin Casado, PhD during his Cloud Innovation Summit keynote on March 27, 2014]

Google has a global infrastructure, with data centers and points of presence worldwide to provide low-latency access to services locally, rather than requiring customers to access a single point of presence. Google’s software defined WAN (its private backbone network) was one of the first networks to use “SDN”. In operation for almost three years, it interconnects Google’s cloud-resident data centers; it is larger and growing faster than Google’s customer-facing Internet connectivity, and its traffic is comparable to the data traffic within a premises-based data center, according to Vahdat.

Note 1.   Please refer to this article: Google’s largest internal network interconnects its Data Centers using Software Defined Network (SDN) in the WAN

“SDN” opportunities and challenges include:

  • Logically centralized network management- a shift from fully decentralized, box to box communications
  • High performance and reliable distributed control
  • Eliminate one-off protocols (not explained)
  • Definition of an API that will deliver NFV as a service

Cloud Caveats:

While Vahdat believes in the potential and power of cloud computing, he says that moving to the cloud (from premises based data centers) still poses all the challenges of running an IT infrastructure. “Most cloud customers, if you poll them, say the operational overhead of running on the cloud is as hard or harder today than running on your own infrastructure,” Vahdat said.

“In the future, cloud computing will require high-bandwidth, low-latency pipes.” Amin cited a “law” this author had never heard of: “1 Mbit/sec of I/O is required for every 1 MHz of CPU processing (computation).” In addition, the cloud must provide rapid network provisioning and very high availability, he added.
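
Taken literally, the cited rule of thumb scales quickly. A minimal Python sketch, using a server specification invented only for illustration:

    # Amdahl's balanced-system rule of thumb as cited: ~1 Mbit/s of I/O
    # per 1 MHz of CPU. The server spec below is invented for illustration.
    cores = 32
    clock_ghz = 2.5

    total_mhz = cores * clock_ghz * 1000          # 80,000 MHz of aggregate CPU
    required_io_mbps = total_mhz * 1.0            # 1 Mbit/s per MHz
    print(f"Required I/O: {required_io_mbps:,.0f} Mbit/s "
          f"(~{required_io_mbps / 1000:.0f} Gbit/s)")   # ~80 Gbit/s per server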

Network switch silicon and NPUs should focus on:

  • Hardware/software support for more efficient read/write of switch state
  • Increasing port density
  • Higher bandwidth per chip
  • NPUs must provide much greater than twice the performance for the same functionality as general purpose microprocessors and switch silicon.

Note: Two case studies were presented which are beyond the scope of this article to review.  Please refer to the related article on 2014 Hot Interconnects, “Death of the God Box.”

Vahdat’s Summary:

Google is leveraging its decade plus experience in delivering high performance shared IT infrastructure in its Andromeda network.  Logically centralized “SDN” is used to control and orchestrate all network and computing elements, including: VMs, virtual (soft) switches, NICs, switch fabrics, packet processors, cluster routers, etc.  Elements of NFV are also being used with more expected in the future.

References:

http://googlecloudplatform.blogspot.com/2014/04/enter-andromeda-zone-google-cloud-platforms-latest-networking-stack.html

https://www.youtube.com/watch?v=wpin6GKpDm8

http://gigaom.com/2014/04/02/google-launches-andromeda-a-software-defined-network-underlying-its-cloud/

http://virtualizationreview.com/articles/2014/04/03/google-andromeda.aspx

http://community.comsoc.org/blogs/alanweissberger/martin-casado-how-hypervisor-can-become-horizontal-security-layer-data-center

http://www.convergedigest.com/2014/03/ons-2014-google-keynote-software.html

https://www.youtube.com/watch?v=n4gOZrUwWmc

http://cseweb.ucsd.edu/~vahdat/

Addendum:  Amdahl’s Law

In a post conference email to this author, Amin wrote:

Here are a couple of references for Amdahl’s “law” on balanced system design:

Both essentially argue that for modern parallel computation, we need a fair amount of network I/O to keep the CPU busy (rather than stalled waiting for I/O to complete).
Most distributed computations today substantially under provision IO, largely because of significant inefficiency in the network software stack (RPC, TCP, IP, etc.) as well as the expense/complexity of building high performance network interconnects.  Cloud infrastructure has the potential to deliver balanced system infrastructure even for large-scale distributed computation.

Thanks, Amin

AT&T Outlines SDN/NFV Focus Areas for Domain 2.0 Initiative

Introduction:  The White Paper

As previously reported*, AT&T’s future Domain 2.0 network infrastructure must be open, simple, scalable and secure, according to John Donovan, AT&T’s senior executive vice president of technology and network operations.

* AT&T’s John Donovan talks BIG GAME but doesn’t reveal Game Plan at ONS 2014  

But what does that really mean?  And what are the research initiatives that are guiding AT&T’s transition to SDN/NFV?

Let’s first examine AT&T’s Domain 2.0 white paper.

It specifically states the goal of moving to a virtualized, cloud-based SDN/NFV design built on off-the-shelf components and hardware (merchant silicon), rejecting the legacy of OSMINE compliance and traditional telecom standards for OSS/BSS.  Yet we could find no mention of the OpenFlow API/protocol.

“In a nutshell, Domain 2.0 seeks to transform AT&T’s networking businesses from their current state to a future state where they are provided in a manner very similar to cloud computing services, and to transform our infrastructure from the current state to a future state where common infrastructure is purchased and provisioned in a manner similar to the PODs used to support cloud data center services. The replacement technology consists of a substrate of networking capability, often called Network Function Virtualization Infrastructure (NFVI) or simply infrastructure that is capable of being directed with software and Software Defined Networking (SDN) protocols to perform a broad variety of network functions and services.”

“This infrastructure is expected to be comprised of several types of substrate. The most typical type of substrate being servers that support NFV, followed by packet forwarding capabilities based on merchant silicon, which we often call white boxes. However it’s envisioned that other specialized network technologies are also brought to bear when general purpose processors or merchant silicon are not appropriate.”

AT&T’s vision of a user-defined cloud experience.
Image courtesy of AT&T

“AT&T services will increasingly become cloud-centric workloads. Starting in data centers (DC) and at the network edges – networking services, capabilities, and business policies will be instantiated as needed over the aforementioned common infrastructure. This will be embodied by orchestrating software instances that can be composed to perform similar tasks at various scale and reliability using techniques typical of cloud software architecture.”

Interview with AT&T’s Soren Telfer:

As a follow up to John Donovan’s ONS Keynote on AT&T’s “user-defined network cloud” (AKA Domain 2.0), we spoke to Soren Telfer, Lead Member of Technical Staff at AT&T’s Palo Alto, CA Foundry. Our intent was to gain insight and perspective on the company’s SDN/NFV research focus areas and initiatives.

Mr. Telfer said that AT&T’s Palo Alto Foundry is examining technical issues that will solve important problems in AT&T’s network.  One of those is the transformation to SDN/NFV so that future services can be cloud based.  While Soren admitted there were many gaps in SDN/NFV standard interfaces and protocols, he said, “Over time the gaps will be filled.”

Soren said that AT&T was working within the Open Networking Lab (ON.LAB), which is part of the Stanford-UC Berkeley Open Network Research Community.  The ONRC mission, from their website: “As inventors of OpenFlow and SDN, we seek to ‘open up the Internet infrastructure for innovations’ and enable the larger network industry to build networks that offer increasingly sophisticated functionality yet are cheaper and simpler to manage than current networks.”  So, clearly, ON.LAB work is based on the OpenFlow API/protocol between the Control and Data Planes (residing in different equipment).

The ON.LAB community is made up of open source developers, organizations and users who all collaborate on SDN tools and platforms to open the Internet and Cloud up to innovation.  They are trying to use a Linux (OS) foundation for open source controllers, according to Soren.  Curiously, AT&T is not listed as an ON.LAB contributor at http://onlab.us/community.html

AT&T’s Foundry Research Focus Areas:

Soren identified four key themes that AT&T is examining in its journey to SDN/NFV:

1.  Looking at new network infrastructures as “distributed systems.”  What problems need to be solved?  Google’s B4 network architecture was cited as an example.

[From a Google authored research paper: http://cseweb.ucsd.edu/~vahdat/papers/b4-sigcomm13.pdf]

“B4 is a private WAN connecting Google’s data centers across the globe. It has a number of unique characteristics:  i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic  demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge.”

2.  Building diverse tools and environments for all future AT&T work on SDN/NFV/open networking. In particular, development, simulation and emulation of the network and its components/functional groupings in a consistent manner.  NTT Com’s VOLT (Versatile OpenFlow ValiDator) was cited as such a simulation tool for that carrier’s SDN based network.  For more on VOLT and NTT Com’s SDN/NFV please refer to: http://viodi.com/2014/03/15/ntt-com-leads-all-network-providers-in-deployment-of-sdnopenflow-nfv-coming-soon/

3.  Activities related to “what if” questions – in other words, out-of-the-box thinking about potentially using radically new network architecture(s) to deliver new services.  “Network as a social graph” was cited as an example.  The goal is to enable new experiences for AT&T’s customers via new services or additional capabilities to existing services.

Such a true “re-think+” initiative could be related to John Donovan’s reply to a question during his ONS keynote: “We will have new applications and new technology that will allow us to do policy and provisioning as a parallel process, rather than an overarching process that defines and inhibits everything we do.”

+ AT&T has been trying to change its tagline to “Re-think Possible” for some time now.  Yet many AT&T customers believe “Re-think” is impossible for AT&T, as it’s stuck in outdated methods, policies and procedures.  What’s your opinion?

According to Soren, AT&T is looking for the new network’s ability to “facilitate communication between people.”  Presumably, something more than is possible with today’s voice, video conferencing, email or social networks?  Functional or universal tests are being considered to validate such a new network capability.

4.  Overlaying computation on a heterogeneous network system [presumably for cloud computing/storage and control of the Internet of Things (IoT)]. Flexible run times for compute jobs would be an example attribute for cloud computing.  Organizing billions of devices and choosing among meaningful services would be an IoT objective.

What then is the principal role of SDN in all of these research initiatives?  Soren said:

“SDN will help us to organize and manage state.”  That includes correct configuration settings, meeting requested QoS, concurrency, etc.  Another goal is to virtualize many physical network elements (NEs): DNS servers, VoIP servers and other NEs could be deployed as Virtual Machines (VMs).

Soren noted that contemporary network protocols internalize state. For example, the routing database for the paths a router selects is stored internally in that router. An alternate “distributed systems” approach would be to externalize state so that it is not locked inside each network element.

However, NEs accessing external state would require new state organization and management tools.  He cited Amazon’s Dynamo and Google’s B4 as architectures AT&T was studying, but protocols that work with external state won’t be created and deployed soon.  “We’re looking to replace existing network protocols with those designed for more distributed systems in the next seven or eight years,” he added.
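To make the “externalized state” idea concrete, here is a minimal sketch of our own (not AT&T’s design) in which routing entries live in an external key-value store, in the spirit of a Dynamo-style service, rather than inside each router. All names (ExternalStateStore, Router, etc.) are invented for illustration.

```python
# Minimal sketch (illustrative only): routing state kept in an external
# key-value store instead of inside each network element (NE).
# A production system would use a replicated store (e.g. a Dynamo-style
# service); a plain dict stands in for it here.

class ExternalStateStore:
    """Stands in for a distributed key-value store holding network state."""
    def __init__(self):
        self._routes = {}          # prefix -> next hop

    def put_route(self, prefix, next_hop):
        self._routes[prefix] = next_hop

    def get_route(self, prefix):
        return self._routes.get(prefix)


class Router:
    """A network element that holds no routing state of its own."""
    def __init__(self, name, store):
        self.name = name
        self.store = store         # shared, external state

    def forward(self, prefix):
        next_hop = self.store.get_route(prefix)
        return f"{self.name}: forwarding {prefix} via {next_hop}"


store = ExternalStateStore()
store.put_route("192.0.2.0/24", "10.0.0.1")

# Two routers consult the same external state; updating the store once
# changes behavior everywhere, with no per-router reconfiguration.
r1, r2 = Router("edge-1", store), Router("edge-2", store)
print(r1.forward("192.0.2.0/24"))
print(r2.forward("192.0.2.0/24"))
```

The point of the sketch is the trade-off Soren alluded to: configuration becomes a single logical write, but the store itself now needs the organization and management tooling he mentioned.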

Summing up, Soren wrote in an email:

“AT&T is working to deliver the User Defined Network Cloud, through which AT&T will open, simplify, scale, and secure the network of the future.  That future network will first and foremost deliver new experiences to users and to businesses.

The User Defined Network Cloud and Domain 2.0, are bringing broad and sweeping organizational and technical changes to AT&T. The AT&T Foundry in Palo Alto is a piece of the broader story inside and outside of the company. At the Foundry, developers and engineers are prototyping potential pieces of the future network where AT&T sees gaps in the current ecosystem. These prototypes utilize the latest concepts from SDN and techniques from distributed computing to answer questions and to point paths towards the future network. In particular, the Foundry is exploring how to best apply SDN to the wide-area network to suit the needs of the User Defined Network Cloud.”

Comment and Analysis:

Soren’s remarks seem to imply AT&T is closely investigating Google’s use of SDN (and some version of OpenFlow or similar protocol) for interconnecting all of its data centers as one huge virtual cloud. It’s consistent with Mr. Donovan saying that AT&T would like to transform its 4,600 central offices into environments that support a virtual networking cloud environment.

After this year’s “beachhead projects,” Mr. Donovan said AT&T will start building out new network platforms in 2015 as part of its Domain 2.0 initiative.  But what Soren described is a much longer and broader network transformation.  Presumably, the platforms built in 2015 will be based on the results of the “beachhead projects” that Mr. Donovan mentioned during the Q&A portion of his ONS keynote speech.

Based on its previously referenced Domain 2.0 white paper, we expect the emphasis to be placed on NFV concepts and white boxes, rather than pure SDN/OpenFlow.  Here’s a relevant paragraph related to an “open networking router”:

“Often a variety of device sizes need to be purchased in order to support variances in workload from one location to another. In Domain 2.0, such a router is composed of NFV software modules, merchant silicon, and associated controllers. The software is written so that increasing workload consumes incremental resources from the common pool, and moreover so that it’s elastic: so the resources are only consumed when needed. Different locations are provisioned with appropriate amounts of network substrate, and all the routers, switches, edge caches, and middle-boxes are instantiated from the common resource pool. Such sharing of infrastructure across a broad set of uses makes planning and growing that infrastructure easier to manage.”
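As a rough illustration of the “elastic” behavior the quoted paragraph describes, the toy sketch below spins virtual router instances up and down against a shared resource pool as load changes. It is our own simplification; the class names, core counts and thresholds are assumptions for illustration and do not come from the Domain 2.0 white paper.

```python
# Toy sketch of elastic NFV provisioning from a common resource pool.
# Names and capacity figures are illustrative assumptions, not AT&T's design.

class ResourcePool:
    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.used_cores = 0

    def allocate(self, cores):
        if self.used_cores + cores > self.total_cores:
            raise RuntimeError("pool exhausted")
        self.used_cores += cores

    def release(self, cores):
        self.used_cores -= cores


class VirtualRouter:
    CORES_PER_INSTANCE = 2
    SESSIONS_PER_INSTANCE = 1000   # assumed capacity per software instance

    def __init__(self, pool):
        self.pool = pool
        self.instances = 0

    def scale_to(self, active_sessions):
        """Consume pool resources only in proportion to offered load."""
        needed = max(1, -(-active_sessions // self.SESSIONS_PER_INSTANCE))  # ceiling division
        while self.instances < needed:
            self.pool.allocate(self.CORES_PER_INSTANCE)
            self.instances += 1
        while self.instances > needed:
            self.pool.release(self.CORES_PER_INSTANCE)
            self.instances -= 1
        return self.instances


pool = ResourcePool(total_cores=64)
vrouter = VirtualRouter(pool)
for load in (500, 4200, 1200):            # workload rises, then falls
    n = vrouter.scale_to(load)
    print(f"sessions={load:5d} -> instances={n}, cores in use={pool.used_cores}")
```

The design point is the one the white paper makes: because routers, switches and middle-boxes all draw from the same pool, capacity planning is done once for the pool rather than per appliance.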

We will continue to follow SDN/NFV developments and deployments, particularly related to carriers such as AT&T, NTT, Verizon, Deutsche Telekom, Orange, etc.  Stay tuned…

Virtually Networked: The State of SDN

We have all heard about the hectic activity around several network virtualization initiatives. The potpourri of terms in this space (SDN, OpenFlow, OpenDaylight, etc.) is enough to make one’s head spin. This article will try to lay out the landscape as of the time of writing and explain how some of these technologies are relevant to independent broadband service providers.

In the author’s view – Software Defined Networking (SDN) evolved with the aim of freeing the network operator from dependence on networking equipment vendors for developing new and innovative services and was intended to make networking services simpler to implement and manage.

Software Defined Networking decouples the control and data planes, thereby abstracting the physical architecture from the applications running over it. Network intelligence is centralized and separated from the forwarding of packets.

SDN is the term used for a set of technologies that enable the management of services over computer networks without worrying about the lower level functionality – which is now abstracted away. This theoretically should allow the network operator to develop new services at the control plane without touching the data plane since they are now decoupled.

Network operators can control and manage network traffic via a software controller – mostly without having to physically touch switches and routers. While the physical IP network still exists – the software controller is the “brains” of SDN that drives the IP based forwarding plane. Centralizing this controller functionality allows the operator to programmatically configure and manage this abstracted network topology rather than having to hand configure every node in their network.

SDN provides a set of APIs to configure common network services (such as routing, traffic management and security).

OpenFlow is one standard protocol that defines the communication between such an abstracted control plane and data plane. OpenFlow was defined by the Open Networking Foundation and allows direct manipulation of physical and virtual devices. OpenFlow needs to be implemented on both sides: in the SDN controller software and in the SDN-capable network infrastructure devices.
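To picture the decoupling described above, here is a minimal, self-contained sketch of a centralized controller pushing match-action flow entries down to “dumb” switches. It only mimics the shape of an OpenFlow-style interaction; it does not use any real controller API, and every class and function name is invented for illustration.

```python
# Minimal sketch of the SDN split: a centralized controller programs
# simple match-action flow tables in switches that do no path computation
# of their own. This mimics the shape of an OpenFlow-style interaction;
# it is not a real controller or switch API.

class Switch:
    """Data plane: forwards packets purely by consulting its flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = []                 # list of (match_fn, action)

    def install_flow(self, match_fn, action):
        self.flow_table.append((match_fn, action))

    def handle_packet(self, pkt):
        for match_fn, action in self.flow_table:
            if match_fn(pkt):
                return f"{self.name}: {action} packet to {pkt['dst']}"
        return f"{self.name}: no match, punting {pkt['dst']} to controller"


class Controller:
    """Control plane: holds network-wide policy and programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, subnet_prefix, action):
        match = lambda pkt, p=subnet_prefix: pkt["dst"].startswith(p)
        for sw in self.switches:             # one policy, many devices
            sw.install_flow(match, action)


sw1, sw2 = Switch("sw1"), Switch("sw2")
ctrl = Controller([sw1, sw2])
ctrl.push_policy("10.1.", "forward")         # programmatic, network-wide change

print(sw1.handle_packet({"dst": "10.1.2.3"}))
print(sw2.handle_packet({"dst": "192.0.2.9"}))
```

The operational appeal is visible even in the toy: one call on the controller reprograms every device, instead of a hand-configured change on each node.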

How would SDN impact an independent broadband service provider? If SDN lives up to its promise, it could provide the flexibility in networking that telcos have needed for a long time. From a network operations perspective, it has the potential to revolutionize how networks are controlled and managed today – making it a simple task to manage physical and virtual devices without ever having to change anything in the physical network.

However, these are still early days in the SDN space. Several vendors have implemented software controllers, and the OpenFlow specification appears to be stabilizing. OpenDaylight is an open platform for network programmability to enable SDN. OpenDaylight has just shipped its first software release – Hydrogen – which can be downloaded as open source software today. But this is not the only approach to SDN; there are vendor-specific approaches that this author will not cover in this article.

For independent broadband service providers wishing to learn more about SDN, it would be a great idea to download the Hydrogen release of OpenDaylight and play with it – but don’t expect it to provide any production-ready functionality. Like the first release of any piece of software, there are wrinkles to be ironed out and important features still to be written. It is a great time to get involved if one wants to contribute to the open source community.

For independent broadband service providers wanting to deploy SDN, it’s not prime-time ready yet – but it’s an exciting and enticing idea that is fast becoming real. Keep a close ear to the ground; SDN might make our lives easier fairly soon.

[Editor’s Note: For more great insight from Kshitij about SDN and other topics, please go to his website at http://www.kshitijkumar.com/]

Infonetics Survey: Network Operators reveal where they plan to first deploy SDN and NFV

Introduction:

Top 5 network locations operators expect to deploy SDN and NFV by 2014
Image courtesy of Infonetics

There’s been a lot of hype and even more uncertainty related to “Carrier SDN,” and in particular the use of the OpenFlow protocol in carrier networks – between a centralized control plane entity and data plane entities residing in “packet forwarding” engines built from commodity silicon with minimal software intelligence.  Many carriers are interested in the ETSI NFV work, which will NOT produce any standard or specifications.  This author has been contacted by several network operators to assess their NFV plans (please note that such consulting is not free of charge).  As ETSI NFV will make contributions to ITU-T SG13 work on future networks, it may be several years before any implementable standard (ITU Recommendation) is produced.

For its just-released SDN and NFV Strategies survey, Infonetics Research interviewed network operators around the globe, which together represent ~53% of the world’s telecom capex and operating revenue.  The objective of the survey was to determine the timing and priority of the many use cases for their software-defined networking (SDN) and network functions virtualization (NFV) projects.

SDN And NFV Strategies Survey Highlights:

  • Virtually all major operators are either evaluating SDNs now or plan to do so within the next 3 years
  • SDN and NFV evaluation and deployments are being driven by carriers’ desire for service agility resulting in quicker time to revenue and operational efficiency
  • The top 5 network domains named by operators when asked where they plan to deploy SDNs and NFV by 2014: Within data centers, between data centers, operations and management, content delivery networks (CDNs), and cloud services
  • 86% of operators are confident they will deploy SDN and NFV technology in their optical transport networks as well, at some point, once standards are finalized
  • Study participants rated Content Delivery Networks (CDNs), IP multimedia subsystems (IMS), and virtual routers/security gateways as the top applications for NFV

“For the most part, carriers are starting small with their SDN and NFV deployments, focusing on only parts of their network, what we call ‘contained domains,’ to ensure they can get the technology to work as intended,” explains Michael Howard, co-founder and principal analyst for carrier networks at Infonetics Research.

“But momentum for more widespread use of SDN and NFV is strong, as evidenced by the vast majority of operators participating in our study who plan to deploy the technologies in key parts of their networks, from the core to aggregation to customer access,” Howard adds. “Even so, we believe it’ll be many years before we see bigger parts or a whole network controlled by SDNs.”

About The Survey:

Infonetics’ July 2013 27-page SDN and NFV survey is based on interviews with purchase-decision makers at 21 incumbent, competitive and independent wireless operators from EMEA (Europe, Middle East, Africa), Asia Pacific and North America that have evaluated SDN projects or plan to do so. Infonetics asked operators about their strategies and timing for SDN and NFV, including deployment drivers and barriers, target domains and use cases, and suppliers. The carriers participating in the study represent more than half of the world’s telecom revenue and capex.

To learn more about the report, contact Infonetics.

References:

  1. Video interview with Infonetics’ co-founder Michael Howard on What’s really driving demand for SDN/NFV
  2. SDN and NFV: Survey of Articles Comparing and Contrasting
  3. Move Over SDN – NFV Taking the Spotlight – Cisco Blog
  4. Subtle SDN/NFV Data Points
  5. “Service Provider SDN” Network Virtualization and the ETSI NFV ISG
  6. The Impact on Your IT Department of Software Defined Networking (SDN) and Network Functions Virtualization (NFV)
  7. SDNs and NFV: Why Operators Are Investing Now (archived webinar)

Analyst Opinions on Cisco's CRS-X Core Router & Its Impact on Competitors

Product Announcement:

The Cisco® CRS-X, which will be available this year, is a 400 Gigabit per second (Gbps) per slot core router system that can be expanded to nearly 1 petabit per second in a multi-chassis deployment. The CRS-X provides 10 times the capacity of the original CRS-1, which was introduced in 2004 as a new class of core routing system designed to scale network capacity to accommodate the proliferation in video, data and mobile traffic, which has taken place over the last decade.

With 400 Gbps per slot density, the CRS-X multichassis architecture gives network operators the ability to scale using a 400 Gbps line card with Cisco AnyPort™ technology.  That line card uses complementary metal oxide semiconductor (CMOS) photonic technology, called Cisco CPAK™, to reduce power consumption, reduce the cost of sparing, and increase deployment flexibility.

For example, each interface can be configured for single-port 100 Gigabit Ethernet, 2 x 40 GE, or 10 x 10 GE, and for short-, long-, or extended-reach optics, by selecting a specific CPAK transceiver. This flexibility simplifies network engineering and operations and helps ensure that service providers can meet the demand for 10 GE, 40 GE and 100 GE applications without replacing hardware.

Additionally, the CRS-X improves the simplicity and scale of IP and optical convergence. Service providers can now choose between deploying integrated optics or the new Cisco nV™ optical satellite. Both allow for a single IP and optical system that utilizes Cisco’s nLight™ technology for control plane automation. The nV optical satellite deployments operate as a single managed system with the Cisco CRS Family to reduce operational expense and deliver high-density 100 GE scaling.

More information is in the press release.


Since the first CRS router made its debut in 2004, Cisco has brought in a total of $8 billion in revenue from the product range, according to Stephen Liu, Cisco’s director of service provider marketing.  “The CRS-X is the innovation we need to cross the $10 billion barrier,” Mr. Liu told Reuters.

Cisco’s rivals in the core Internet router sector include Juniper Networks, Huawei, and Alcatel-Lucent.  Cisco was not the first vendor to offer 40 Gbps per slot in a core router – Juniper took that honor. It wasn’t the first to offer a 100 Gbps router either – Alcatel-Lucent, Huawei, and Juniper were all there first.  Moreover, Alcatel-Lucent and Huawei each beat Cisco with 400 Gbps products. However, with 54% of the global core router market, Cisco has proven that being first to market does not guarantee success.


Analyst Opinions:

Market research firm Current Analysis was quite positive about Cisco’s new CRS-X core router.  In a note to clients Current Analysis wrote:

“(We are) Very positive on Cisco’s launch of the CRS-X, because it provides existing CRS Series customers with an upgrade path to address growing scale and capacity requirements in their IP core networks. In addition to providing high-scale performance for high-density 10G, 40G and 100G-based services, the system incorporates Prime Management, nLight and new software to support network programmability in order to help service providers cope with unpredictable traffic patterns and to optimize network resources while improving time to service. The new ‘AnyPort’ technology helps reduce inventory costs by providing a common line card base card that can be flexibly configured. Closer integration between the IP and optical network is also provided, which improves resource utilization and provides a level of programmability to the transport network using the Cisco 15454 ONS platform as an extension shelf. The announcement also includes endorsements from SoftBank and Verizon, which confirmed the need for scale, resiliency and investment protection.”

UK-based Ovum wrote:

“With the introduction of the CRS-X, Cisco is sending a message to its carrier customers: your investment in CRS products is being protected. The role of the core router revolves around high-performance, high-capacity packet processing. Core router vendors have been challenged to increase the capacity of their products to meet the growth in network traffic without the operator having to do a complete forklift of their existing systems.”

“Rather than simply comparing feeds and speeds against competitors, Ovum believes the key to success for the CRS-X will be the differentiation provided by coupling the product to Cisco’s Elastic Core solution and nLight technology for control plane automation and IP and optical convergence. The nV Optical Satellite capability announced with the CRS-X is an example of this type of differentiation. The nV Optical Satellite provides a single integrated management interface for control over the CRS and remotely located 100G DWDM platforms to reduce opex.”

http://ovum.com/2013/06/13/cisco-crs-x-delivers-a-message-investment-protection/

Northland Capital Markets wrote that growing pressure on carriers from cloud computing usage may prompt them to upgrade to the CRS-X:

“We see carriers/cable operators/ content providers requiring core router refresh as result of an increase in traffic generated by Cloud services and machine-to-machine connectivity. We believe Cloud computing has redefined the way applications run on the network, exposing the underlying limitations of providers’ existing networks.”

Raymond James thinks Cisco’s new core router will prove to be a challenge for non-router vendors as well as traditional competitors Juniper, Huawei and Alcatel-Lucent.  Finisar, Ciena and Infinera were singled out in this report excerpt:

“CRS-X will use Cisco’s internally developed CPAK optical interface, which represents a headwind for Finisar. Cisco promotes its architecture for Converged Transport Routers and cites deficiencies in alternatives (“Hollow core” – leveraging OTN and optical like Ciena’s 5400 and “Lean core” – leveraging MPLS like Juniper’s PTX), and argues that its converged solution of optical, MPLS, and routing with Cisco Prime management bringing the layers together.  Similar to Cisco, Alcatel-Lucent combined its optical and routing units into a single organization, but it offers a two-box strategy (1830 and 7950). Optical integration matters, but we don’t know pricing. Cisco has offered IP over DWDM in the past, but high prices discouraged some carriers from using these interfaces, instead opting to plug the routers into long haul optical platforms; we suspect the CRS-X will go after this application more aggressively, which could pose a threat to long haul 100G competitors such as Alcatel-Lucent, Ciena, and Infinera.”


CRS-X Puts Pressure on Cisco’s Competitors:

Current Analysis wrote in a report to clients:

  • Alcatel-Lucent needs to keep up the pressure to move upcoming IP core refresh cycles its way. The 7950 XRS has obtained nine customer wins and multiple ongoing field trials since its launch, which shows that there is a definite interest in the metro IP core proposition as well as leveraging the platform for pure IP core applications. Alcatel-Lucent should also elevate its service provider SDN vision, as its competitors are doing.
  • Juniper should provide a roadmap for its two core network solutions, the PTX Series and the T Series, where it needs to close the current performance gap (the T Series delivers 240 Gbps per slot). The capacity race often follows a ‘leapfrog’ model, where one vendor’s refresh cycle trumps another’s for a period of time; Juniper needs to counter Cisco’s latest CRS-X move carefully. Juniper also should continue to make the case for a more agile and flexible network based on its four-step SDN roadmap.
  • Huawei needs to capitalize on its IP core momentum and announce (or, at least refer to) customers that are, or will be, using the 480 Gbps/slot capabilities announced for its NE5000E IP core router. Huawei also needs to sharpen and reaffirm its SDN message with respect to its network core architecture and integrate SDN into its SingleBackbone model.
  • ZTE needs to update its T8000 roadmap and hint as to when it will deploy higher-density 100G interfaces on the platform. ZTE needs to join the fray with an SDN message of its own that builds on its current management capabilities.

Ovum believes Juniper must respond: “When Cisco’s CRS-X becomes available, Juniper will become the only one of the top four core router vendors not delivering 400Gbps-per-slot capacity in its core router product, unless it announces a capacity upgrade to its core router in the next six months. Its largest capacity core router product, the T4000, delivers only 240Gbps bandwidth per slot. Juniper’s PTX product is ready to provide 480Gbps per slot, but line cards to take full advantage of the available capacity are not yet available, and the PTX is an MPLS-optimized core switch, not an IP core router.”

Raymond James thinks that Juniper and Alcatel-Lucent are now at a competitive disadvantage in the core router market:

“The new CRS-X can support 64 100 Gbps ports in a standard seven-foot rack, which compares to 80 for Alcatel-Lucent’s 7950 and Juniper’s 32 on its T4000. In a multishelf configuration, Cisco claims it can support 1152 slots or 922 Tbps.”


Closing Comment:

We find it quite interesting that despite the tremendous hype around SDN, it wasn’t mentioned at all in Cisco’s CRS-X product announcement.  Nor did any analysts offer SDN comments related to the CRS-X.

In a new online video, Cisco’s Lew Tucker talks about SDN in the context of OpenStack cloud software, but doesn’t mention the CRS-X product:  http://newsroom.cisco.com/video/1170801

2013 Ethernet Tech Summit – Market Research Panel, Carrier Ethernet & Unsung Heroes

Introduction:

“Ethernet Technology Summit attendance was up over 20% in 2013. Topics of special interest included software-defined networking (SDN), 40/100/400 GbE, venture opportunities, and market research.  Keynotes by Mellanox, Dell’Oro Group, Huawei, Ethernet Alliance, Cisco Systems, Big Switch Networks, Broadcom, and Dell all drew capacity audiences,” said Lance A. Leventhal, Program Chairperson.

The Market Research panel covered the prospects for Ethernet in the enterprise, among carriers (especially for cellular backhaul), and in the data center.  The session was chaired by Crystal Black, Channel Marketing Manager, APTARE.

Panelists:

  • Michael Howard, Infonetics Research
  • Casey Quillin, Dell’Oro Group
  • Sergis Mushell, Gartner
  • Jag Bolaria, Linley Group
  • Vladimir Kozlov, LightCounting

Discussion:

An image depicting small cell backhaul.
Image Courtesy of Infonetics

1. Michael Howard of Infonetics Research talked about macro-cell and small cell backhaul. “Nearly all new Macro-cell Backhaul Connections are IP/Ethernet,” he said. “IP/Ethernet is 94% of 2012 macrocell MBH equipment spending,” Michael added. Most macro-cells use either microwave or fiber backhaul, and macro-cell sites that aggregate small cell traffic use the same existing macro-cell fiber backhaul.  Most outdoor small cells were being deployed at street level in urban centers, with three to eight of them connecting to a macro-cell site on the top of a building.

“Small cells have been deployed since 2007 nearly all located in-building and 2G/3G”, stated Howard.  “What’s new is the outdoor deployments, where operators this year are trying and trialing many new products, new technology options, and new locations that present a myriad of challenges, such as how to negotiate for lightpost placement, connect and buy power, and meet city regulations for color, size, shape of the small cell and backhaul products,” he added.

Small cell backhaul status is summarized as follows:

  • Operators are evaluating, testing, planning outdoor small cells
  • Virtually all small cell deployments to date are 3G and in-building
  • Most operators will deploy first outdoor in the urban core with ~3 to 8 pico-cells per macrocell
  • Most wireless carriers will aggregate small cell backhaul traffic onto the nearest macro-cell site—typically connected to fiber backhaul network
  • Outdoor small cell backhaul is mostly an Ethernet NLOS–MW–MMW (i.e., microwave and millimeter wave) play
  • Backhaul aggregation is still a fiber play

2. Jag Bolaria of Linley Group made the following points:

  • The high bandwidth available from 4G-LTE networks is enabling a continued huge increase in mobile data traffic.
  • Cloud Computing is changing Data Center architecture, especially in the areas of scalability and virtualization.
  • There are many Ethernet markets, including mobile backhaul, data centers, SMB enterprise, Carrier Ethernet, etc.
  • Data Center topology is moving from hierarchical to flat, due to more East-West (server-to-server) traffic patterns
  • Data Center (Ethernet) switches need a lot more bandwidth for connectivity between them. As more servers have 10GE interfaces, the inter-switch connection is likely to be 40GE.
  • Very large Data Centers will have multiple L2 networks with L3 tunneling to migrate between many different L2 domains.
  • A virtualized L2 network may use Equal Cost MultiPath (ECMP) to identify the equal-cost shortest paths between switches and load balance traffic over them (see the short sketch after this list).  “OpenFlow may help here,” Jag said.
  • 100GE using CFP is still too expensive and consumes too much power to be deployed on a large scale.  Jag predicts that CFP2, CFP4, silicon photonics, or Indium phosphide will be used to shrink 100GE modules.
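The ECMP point above is easy to picture with a small sketch: hash a flow’s 5-tuple and use the result to pick one of several equal-cost next hops, so packets of one flow always take the same path (avoiding reordering) while different flows spread across the links. The sketch below is our own illustration; the next-hop names and addresses are made up.

```python
# Minimal ECMP sketch: pick one of several equal-cost next hops by
# hashing the flow's 5-tuple. A given flow stays on one path while
# different flows are load-balanced across the links. Illustrative only.
import hashlib

NEXT_HOPS = ["leaf-1", "leaf-2", "leaf-3", "leaf-4"]   # assumed equal-cost paths

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, paths=NEXT_HOPS):
    five_tuple = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(five_tuple).digest()
    index = int.from_bytes(digest[:4], "big") % len(paths)
    return paths[index]

# Same flow -> same path (no packet reordering); different flows spread out.
print(ecmp_next_hop("10.0.0.5", "10.0.1.9", "tcp", 49152, 443))
print(ecmp_next_hop("10.0.0.5", "10.0.1.9", "tcp", 49152, 443))
print(ecmp_next_hop("10.0.0.7", "10.0.1.9", "tcp", 49153, 443))
```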

3.  Sergis Mushell of Gartner made several forecasts, including that:

  • There are four distinct models for SDN as it applies to ICs (but they were not identified).
  • 40GE interfaces are coming to blade servers this year.
  • Fiber Channel rates will increase to 16 Gbps and 32 Gbps.
  • Silicon Photonics will be built into Data Center equipment in the near future.

4.  Casey Quillin of Dell’Oro Group talked about SANs and Data Center deployments. He said that:

  • Fiber Channel (FC) revenues are mostly at 8 Gbps, but declining.
  • Revenues are increasing for FC at speeds greater or equal to 16 Gbps.
  • Revenue from FC @ 16 Gbps is almost all from switch-to-switch connections and ASPs are high for 16 Gbps FC switch ports.
  • The total 2012 FC market was up 1% in revenue and that was mostly from FC switches as FC adapter sales fell.
  • The FC attach rate on blade servers has declined sharply and we may see FCoE (Fiber Channel over Ethernet) as a replacement.
  • FCoE switch ports will also have to support one or more DC bridging protocols, e.g. TRILL, IEEE 802.  Yet, FCoE is only for “greenfield deployments,” Casey said.

5.  Vladimir Kozlov of LightCounting (a market research firm founded in 2004) tracks the optical communications supply chain.  He made the following key points:

  • The overwhelming majority (~95%) of 10GE optical transceivers use SFP+ Direct Attach (which uses a passive twin-ax cable assembly and connects directly into an SFP+ housing).
  • 40GE will experience “good growth” in the next 3 to 4 years
  • Data Centers are becoming more efficient in how they use bandwidth and that may result in a decrease in the number of switch/routers sold into that market segment.
  • Microwave back-haul will be 10-12% of total U.S. cellular backhaul market this year.
  • No forecast was made for fiber optic backhaul, which now reaches only 55-60% of cell sites in the U.S.
  • Market research firm iGR forecasts that fiber backhaul will grow at a CAGR of nearly 85 percent between 2011 and 2016

Read more: Study: U.S. mobile back-haul demand to grow nearly 10x by 2016 (FierceWireless): http://www.fiercewireless.com/story/study-us-mobile-backhaul-demand-grow-nearly-10x-2016/2012-03-13

  • A LightCounting report on 40G and 100G Data Center Interconnects analyzes the impact of growing data traffic and changing architecture of data centers on market forecast for Ethernet and Fibre Channel optical transceivers.

Comment on this panel session:

Other than Ethernet frames used for mobile backhaul, there wasn’t any discussion about the Carrier Ethernet market or services.  That topic was the subject of an all-day track of sessions on Wednesday. Carrier Ethernet lets wireline network operators use low-cost Ethernet systems to offer data services to SMBs and larger enterprise customers. Carrier Ethernet includes carrier-grade reliability, Operations, Administration and Maintenance (OAM) features, linear and ring protection switching, as well as QoS/class of service. Carrier Ethernet is sometimes referred to as Business Ethernet and is offered over bonded copper (n x T1 or n x DSL) or fiber for higher speeds (typically 100 Mbps or greater).

Carrier Ethernet services offered to business customers include: Ethernet Private Line, Ethernet Tree (point-to-multipoint) and Ethernet LAN (multipoint-to-multipoint).  In addition, the MEF is positioning Carrier Ethernet 2.0 for use in wireline access to private cloud services.

The problem seemed to be that there weren’t any carriers willing to participate in those sessions, so it was just equipment and silicon vendors talking to one another.

A new report forecasts the global Ethernet Access Device market to grow at a CAGR of 13.62% from 2012 to 2016.

http://www.businesswire.com/news/home/20130411006525/en/Research-Markets-Global-Ethernet-Access-Device-Market


Another highlight of the Ethernet Technology Summit was a Wednesday evening award ceremony honoring the “Unsung Heroes of Ethernet”:

  • Dave Boggs, who worked with Bob Metcalfe on the original 3 Mb/s Ethernet (and whose name appears on the Ethernet patent)
  • Ron Crane, who designed the first working 10 Mb/s coax-based Ethernet (which later became standardized by IEEE 802.3 as 10Base5)
  • Tat Lam, who worked on the original version of Ethernet and early 10 Mb/s transceivers
  • Long-time IEEE ComSoc contributor Geoff Thompson, for his hard work, long-term support and leadership of Ethernet standards work in IEEE 802 (he was chair/vice-chair of the 802.3 WG for many years), TIA and the ISO

The Unsung Heroes’ etched crystal awards were paid for by the IEEE Santa Clara Valley Section (the largest in the world).  They include an image of Bob Metcalfe’s original sketch of the Ethernet system.

Note: this author has been a member of the IEEE SCV Executive Committee for many years.  More info at:

http://www.24-7pressrelease.com/press-release/ieee-santa-clara-valley-section-honoring-ethernets-unsung-heroes-at-ethernet-technology-summits-40th-anniversary-of-ethernet-awards-ceremony-336450.php