Our last article described VMware’s view of network virtualization, defined the term, and listed its advantages, with differing opinions on which advantage matters most. It also promised a follow-on article illustrating various network virtualization configurations. This article attempts to do that by describing figures from VMware, the ETSI NFV ISG, and Intel’s SDN/NFV reference designs.
Network Virtualization Illustrations
1. VMware Network Virtualization

The figure below (from VMware) shows a Leaf/Spine L3 fabric deployment in support of Network Virtualization. The fabric connects compute cabinets with Hypervisors (below), Infrastructure cabinets with controller nodes, and Edge cabinets that interface to the outside world (e.g. the Internet, private lines, IP VPNs, Carrier Ethernet, etc.).
Because the virtual network is decoupled from the Data Center switch fabric, the fabric can be built without the virtual network complicating or restricting its design. VMware believes that the most scalable, robust, and cost-effective architecture (to date) for such a Data Center switch fabric is the Layer 3 (L3, or IP Network layer) Leaf/Spine design shown above.
Such an L3 Leaf/Spine fabric is constructed using standard IP routing protocols (e.g. OSPF, IS-IS, BGP) between the Leaf and Spine switches. It can be built from commonly available IP networking equipment, such as L3 switches that support IP forwarding and 1/10/40G Ethernet MAC framing.
Each Leaf switch is connected to every Spine switch, providing multiple high-bandwidth paths to any other rack. The Leaf switch selects a path for each new flow between any pair of Virtual Machines (VMs); this is done in hardware at line rate (e.g. 1/10/40 Gb/sec). The path selection is referred to as Equal Cost Multi-Path (ECMP) and is supported by virtually any standard, commonly available L3 switch. The selected Spine switch receives the traffic from the Leaf and forwards it to the destination Leaf based on IP routing (looking at the destination IP address in the tunnel headers).
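ECMP’s flow-to-uplink mapping can be sketched in a few lines of Python. This is an illustrative model only, not any vendor’s actual hash: real switches compute the hash in hardware over header fields, but the property that matters is the same — every packet of one flow hashes to the same spine uplink (so packets are never reordered), while different flows spread across the spines.

```python
import zlib

def ecmp_select_spine(src_ip, dst_ip, src_port, dst_port, proto, num_spines):
    """Pick a spine uplink for a flow by hashing its 5-tuple.

    Hypothetical sketch: real L3 switches hash in hardware and the
    exact inputs and hash function vary by vendor.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key) % num_spines

# Every packet of the same flow takes the same spine uplink.
a = ecmp_select_spine("10.0.1.5", "10.0.2.9", 49152, 4789, 17, num_spines=4)
b = ecmp_select_spine("10.0.1.5", "10.0.2.9", 49152, 4789, 17, num_spines=4)
# a == b, and a is always one of the 4 spine uplinks
```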
The Hypervisor nodes, running a programmable vSwitch, attach to the Leaf switch like any standard server, via a Network Interface Card (NIC) that has an IP address. That NIC IP address is used to dynamically build tunnels to other Hypervisors and Gateway nodes. The NSX Controller programs these tunnels dynamically as the environment changes.
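The controller’s bookkeeping as hypervisors come and go can be modeled as maintaining a full mesh of tunnels over the NIC IP addresses. The class below is a toy sketch of that idea; the class and method names are invented for illustration and are not NSX’s actual API.

```python
class TunnelTable:
    """Toy model of a controller keeping a full mesh of tunnels
    among hypervisor/gateway NIC IPs (illustrative only)."""

    def __init__(self):
        self.endpoints = set()   # NIC IPs of hypervisors/gateways
        self.tunnels = set()     # frozenset({ip_a, ip_b}) pairs

    def node_added(self, nic_ip):
        # New hypervisor: build a tunnel to every existing endpoint.
        for peer in self.endpoints:
            self.tunnels.add(frozenset({nic_ip, peer}))
        self.endpoints.add(nic_ip)

    def node_removed(self, nic_ip):
        # Hypervisor gone: tear down all of its tunnels.
        self.endpoints.discard(nic_ip)
        self.tunnels = {t for t in self.tunnels if nic_ip not in t}

table = TunnelTable()
for ip in ("192.0.2.10", "192.0.2.11", "192.0.2.12"):
    table.node_added(ip)
# 3 endpoints -> a full mesh of 3 tunnels
```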
Note that there is no special protocol between the Hypervisor and the Leaf switch, just IP/Ethernet frames. If NIC (Ethernet PHY) bonding were used on the Hypervisors, then the Ethernet Link Aggregation Control Protocol (LACP) would also be required.
2. ETSI Network Function Virtualization (NFV) Industry Specification Group (ISG)
The ETSI NFV ISG’s charter is to issue recommendations that will serve as input to existing Standards Development Organizations (SDOs), such as the ITU-T, and to industry forums such as the ONF.
The figure below illustrates the NFV reference architecture. This diagram serves as a starting point for the NFV Architecture Working Group, but has not yet been finalized. NFV is broken into broad functional domains, including the Applications Domain (where Network Functions reside) and the underlying framework, consisting of the Hypervisor, Compute, Infrastructure Network, and Management and Orchestration domains. The NFV architecture is explicitly defined to be complementary to SDN. However, recognizing the early stage of the SDN life-cycle, it is desirable to realize the benefits of NFV on top of existing network architectures.
3. Intel Reference Designs
Intel recently introduced three reference designs targeted at SDN/NFV implementations for both the control and data planes.
The Intel® Open Network Platform Switch Reference Design
Codenamed “Seacliff Trail,” the Intel® Open Network Platform (ONP) Switch Reference Design is based on scalable Intel processors, the Intel® Ethernet Switch 6700 series, and the Intel® Communications Chipset 89xx series, and is available now. The ONP Switch Reference Design includes Wind River Open Network Software (ONS), an open, fully customizable network switching software stack built on Wind River Linux. Wind River ONS provides key networking capabilities such as advanced tunneling, along with a modular, open control plane and a management interface supporting SDN standards such as OpenFlow and Open vSwitch. Common, open programming interfaces allow for automated network management, and coordination between the server switching elements and network switches enables more cost-effective, secure, efficient, and extensible services.
The Intel® Data Plane Development Kit (Intel® DPDK) Accelerated Open vSwitch
Network architectures have traditionally been optimized for large-packet throughput to meet the needs of enterprise endpoint applications. Intel is executing a project aimed at improving the small-packet throughput and workload performance achievable on Open vSwitch using the Intel DPDK. Specifically, Intel is re-creating the kernel forwarding module (data plane) to take advantage of the Intel® DPDK library. The Intel® DPDK Accelerated Open vSwitch is initially planned for release with the Intel® ONP Server Reference Design in the third quarter of this year.
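The small-packet problem is easy to quantify: at a fixed bit rate, shrinking the frame size multiplies the packet rate the software switch must sustain. A quick back-of-the-envelope calculation, assuming standard Ethernet on-the-wire overhead (7-byte preamble + 1-byte SFD + 12-byte inter-frame gap = 20 bytes per frame):

```python
def line_rate_pps(link_gbps, frame_bytes):
    """Max Ethernet packets/sec at line rate for a given frame size.

    Each frame carries 20 extra bytes on the wire (preamble, SFD,
    and inter-frame gap) on top of the frame itself.
    """
    wire_bits = (frame_bytes + 20) * 8
    return link_gbps * 1e9 / wire_bits

small = line_rate_pps(10, 64)     # 64B frames on 10GbE: ~14.88 Mpps
large = line_rate_pps(10, 1518)   # 1518B frames on 10GbE: ~0.81 Mpps
```

So a vSwitch that comfortably keeps up with large frames faces roughly an 18x higher packet rate at minimum frame size — which is why the per-packet overhead of the kernel path matters and why a DPDK-based data plane is attractive.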
The Intel® Open Network Platform Server Reference Design
This server reference platform, codenamed “Sunrise Trail,” is based on the Intel® Xeon® processor, the Intel 82599 Ethernet Controller, and the Intel Communications Chipset 89xx series. The ONP Server Reference Design enables virtual appliance workloads on standard Intel architecture servers using SDN and NFV open standards for the data center and telecom. Wind River Open Network Software includes an Intel DPDK Accelerated Open vSwitch, fast packet acceleration, and deep packet inspection capabilities, as well as support for open SDN standards such as OpenFlow, Open vSwitch, and OpenStack. The project is in development now; the first alpha series is slated to be available in the second half of 2013.
Intel does NOT distinguish between SDN and NFV in its whitepaper, which describes how to use the three reference designs. Intel’s SDN/NFV architecture consists of four layers: orchestration, network applications, network controller, and node, as shown in the figure to the right.
These layers have been proposed by Intel and have not yet been accepted by (or even submitted as a contribution to) the ETSI NFV ISG or the ONF, which is standardizing SDN/OpenFlow. Further details are in the Intel whitepaper referenced above.
It should be quite obvious that VMware’s network virtualization is an actual implementation, while the ETSI NFV and Intel illustrations of network virtualization are high-level architectural diagrams, i.e. still at the concept stage. That’s what one would expect at this very early point in the standardization of network virtualization, with no solid specifications likely for at least one to two years.
Please read the comments underneath my last article for different opinions, perspectives and links to relevant ETSI NFV specification work.