2014 Hot Interconnects: Hardware for Software Defined Networks & NFV – Part II.

Introduction:

This closing Hot Interconnects session on Hardware for Software Defined Networks (SDN) & Network Function Virtualization (NFV) was quite informative. It revealed intriguing carrier market research and raised quite a few questions/open issues related to the dedicated network hardware to be used in SDN/NFV networks. It was not, however, a “Venture Capital Forum” as the session was labeled in the program agenda [VC Forum-Fireside Dialogue: Mind the Silicon Innovation Gap].

Presentation & Discussion:

Infonetics co-founder and principal analyst Michael Howard led off the session with a presentation that contained a very intriguing time-line for network operators’ experiments with and deployments of SDN/NFV (see chart below).

Operator SDN and NFV timeline, according to Infonetics.
Image courtesy of Infonetics

Author’s note:  We believe the SDN/NFV market won’t accelerate until 2018 or later, due to customer confusion over so many proprietary/vendor-specific approaches. Our assumption is that acceleration requires standards to be in place that facilitate multi-vendor interoperability for SDN/NFV.

………………………………………………….

Here are the key points made by Mr. Howard during his presentation:

  • 2015 will be a year of field trials and a few commercial deployments.  Carriers will gather information and evaluate network and subscriber behavior during those trials.
  • 2016-2020: operators will deploy several SDN & NFV use cases, then more each year. The commercial SDN/NFV deployment market will begin to ramp up (and vendors will start making money on the new technologies).
  • Infonetics includes SDN-optimized network hardware in its view of the 2020 carrier network architecture. Distributed control, real-time analytics, and policy inputs are characteristics of the Centralized Control & Orchestration layer, which will “control the controllers” in an end-to-end SDN based network.
  • NFV is all about moving implementation of carrier services from physical routers to Virtual Network Functions (VNFs), which run as software on commercial servers.*
  • Virtual Enterprise (vE)-CPE is the top NFV use case for carrier revenue generation.

*  Note: The movement of services and associated functionality from hardware routers to VNFs that are implemented in software on commercially available compute servers is a very significant trend that appears to be gaining momentum and support.

Sterling Perrin of Heavy Reading wrote in a blog post: “Virtual routers in the WAN and NFV are tightly coupled trends. Routing functions are being virtualized in NFV; operators are eyeing edge/access functions, at least initially; and key will be ensuring performance in virtualized networks.”

………………………………………………

Infonetics found that some of the top carrier ranked service functions proposed for VNFs are: carrier grade Network Address Translation (NAT),  Content Delivery Network (CDN), IP-MPLS VPN & VPN termination, Intrusion Detection Systems (IDS) & Prevention (IPS), broadband remote access server (BRAS or B-RAS), firewall, load balancing, QoS support, Deep Packet Inspection (DPI), and WAN optimization controller.

Here is a supportive quote from Infonetics’ recent Service Provider Router Market Study:

“In our Routing, IP Edge, and Packet-Optical Strategies: Global Service Provider Survey, July, 2014, we interviewed router purchasing decision makers at 32 operators worldwide — incumbent, competitive, mobile, and cable operators from EMEA, Asia Pacific, North America, and CALA, that together control 41% of worldwide service provider capex.

In this survey, we focused on plans for moving router functions from physical routers to software (known as vRouters, which run on commercial servers); 100GE adoption; plans for P-OTS (packet-optical transport systems); and metro architectural changes typically embedded in NG-COs (next generation central offices—big telco Central Offices (COs) spread around a metro area).

In our first-time measure of the SDN-NFV hardware-focus to software-focus trend that affects the router market directly, 60% to 75% of respondents are either definitely or likely moving eight different functions from physical edge routers to software vRouters running on commercial servers in mini data centers in NG-COs. This will shift some edge router spending to software and NFV infrastructure, but will not replace the need for edge routers to handle traffic.”

MH post conference clarification: to be more exact, many of the network functions now running on physical routers will be moved to virtualized network functions, or VNFs, that run on commercial servers. The vRouter likely won’t include the VNFs.   This is just a terminology definition that is still being formed in the industry.

With 60% to 75% of surveyed operators either definitely or likely moving routing functions to VNFs running on commercial servers, it seems that the majority of SDN/NFV development effort will be on the software side.  How then will hardware be optimized for SDN/NFV?

Some of the silicon hardware functions being proposed for SDN/NFV networks include: encryption, DPI, load balancing and QoS support.  OpenFlow, on the other hand, won’t be implemented in hardware, because a hardware-based state machine wouldn’t be easy to change quickly.

How much hardware optimization is needed if generic processors are used to implement most vRouter functions?  While that’s an open question, it’s believed that hardware optimization is most needed at the network edge (that assumes dumb long haul pipes).

Intel’s DPDK (Data Plane Development Kit) was mentioned as a way to “speed up network intelligence of equipment.” [DPDK is a set of software libraries that can improve packet processing.]

Some open issues for successful network hardware placement and optimization include:

  • What dedicated network hardware functions, if any, will be put in a commercial compute server that’s hosting one or more vRouters?
  • What hardware/silicon functions will go into dedicated edge or core routers/switches?
  • Lack of visibility of QoS across multiple network hops/nodes. How can end-to-end QoS/SLAs be ensured?
  • How should QoS functions be partitioned between dedicated hardware (e.g. packet classification and priority queueing) vs. software implementation? (A minimal software sketch follows this list.)
  • Will dongles be attached to commercial servers to monitor network performance?
  • What types of timing references and clocking are needed in vRouters and dedicated networking hardware?
  • How will NFV infrastructure be orchestrated, coordinated, and managed?
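
For the QoS partitioning question above, the following minimal Python sketch shows what the software half could look like: DSCP-based packet classification feeding strict-priority queues. The class names, DSCP policy and packet representation are illustrative assumptions, not anything presented at the session; a hardware implementation would do the equivalent match/enqueue work in TCAMs and on-chip queue managers.

```python
from collections import deque

# Illustrative DSCP-to-class policy (46 = EF, 34 = AF41); the mapping itself is an assumption.
DSCP_TO_CLASS = {46: "voice", 34: "video", 0: "best_effort"}

class StrictPriorityScheduler:
    """Classify packets by DSCP and dequeue them in strict priority order."""
    PRIORITY = ["voice", "video", "best_effort"]   # highest priority first

    def __init__(self):
        self.queues = {cls: deque() for cls in self.PRIORITY}

    def enqueue(self, packet):
        cls = DSCP_TO_CLASS.get(packet.get("dscp", 0), "best_effort")
        self.queues[cls].append(packet)

    def dequeue(self):
        for cls in self.PRIORITY:      # always serve the highest non-empty class
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None

if __name__ == "__main__":
    sched = StrictPriorityScheduler()
    for pkt in ({"id": 1, "dscp": 0}, {"id": 2, "dscp": 46}, {"id": 3, "dscp": 34}):
        sched.enqueue(pkt)
    print([sched.dequeue()["id"] for _ in range(3)])   # -> [2, 3, 1]
```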

Summary & Conclusions:

  • Top drivers for operator investments in NFV (from Infonetics survey) are:
  1. Service agility to increase revenue
  2. Capex reduction (use commercial servers, not purpose-built network gear)
  3. More efficient and automated operations
  • Revenue from new or enhanced services is the top driver for SDN as well
  • PE (Provider Edge) router is under attack by virtual PE vRouters
  • Services move from physical networking equipment to VNFs on servers
  • vE-CPE is the top NFV use case for revenue generation

In closing, Mr. Howard stated:

“The move to SDN and NFV will change the way operators make equipment purchasing decisions, placing a greater focus on software. Though hardware will always be required, its functions will be refined, and the agility of services and operations will be driven by software.”

2014 Hot Interconnects Semiconductor Session Highlights & Takeaways- Part I.

Introduction:

With the Software Defined movements (Networking/SDN, Storage, and Data Center) firmly entrenched, one might believe there’s not much opportunity for innovation in dedicated hardware implemented in silicon.  Several sessions at the 2014 Hot Interconnects conference, especially one from ARM Ltd., indicated that was not the case at all.

With the strong push for open networks, chips have to be much more flexible and agile, as well as more powerful, fast and functionally dense. Of course, there are well known players for specific types of silicon. For example: Broadcom for switch/router silicon; ARM for CPU cores (also Intel and MIPS/Imagination Technologies); many vendors for Systems on a Chip (SoCs), which include one or more CPU cores, mostly ARM-based (Qualcomm, Nvidia, Freescale, etc.); network processors (Cavium, LSI-Avago/Intel, PMC-Sierra, EZchip, Netronome, Marvell, etc.); and bus interconnect fabrics (Arteris, Mellanox, PLX/Avago, etc.).

What’s not known is how these types of components, especially SoCs, will evolve to support open networking and software defined networking in telecom equipment (i.e. SDN/NFV). Some suggestions were made during presentations and a panel session at this year’s excellent Hot Interconnects conference.

We summarize three invited Hot Interconnects presentations related to network silicon in this article. Our follow on Part II article will cover network hardware for SDN/NFV based on an Infonetics presentation and service provider market survey.

  1. Data & Control Plane Interconnect Solutions for SDN & NFV Networks, by Raghu Kondapalli, Director of Strategic Planning at LSI/Avago (Invited Talk)

Open networking, such as SDN (Software Defined Networking) and NFV (Network Function Virtualization), provides software control of many network functions.  NFV enables virtualization of entire classes of network element functions such that they become modular building blocks that may be connected, or chained, together to create a variety of communication services.
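
As a rough illustration of the “chained together” idea (a conceptual sketch only, not anything LSI/Avago presented), the Python below composes a packet pipeline from independent, swappable functions; the function names, addresses and rules are hypothetical.

```python
def firewall(packet):
    """Drop packets destined for a blocked port (illustrative rule)."""
    return None if packet.get("dst_port") == 23 else packet

def nat(packet):
    """Rewrite a private source address to a public one (illustrative mapping)."""
    if packet.get("src_ip", "").startswith("10."):
        packet["src_ip"] = "203.0.113.1"
    return packet

def load_balancer(packet):
    """Pick a backend server by hashing the flow's source address."""
    backends = ["vm-a", "vm-b", "vm-c"]
    packet["backend"] = backends[hash(packet["src_ip"]) % len(backends)]
    return packet

def run_chain(packet, chain):
    """Pass a packet through each VNF in order; any VNF may drop it."""
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet

if __name__ == "__main__":
    service_chain = [firewall, nat, load_balancer]   # modular, re-orderable building blocks
    print(run_chain({"src_ip": "10.0.0.7", "dst_port": 443}, service_chain))
```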

Software defined and functionally disaggregated network elements rely heavily on deterministic and secure data and control plane communication within and across the network elements. In these environments, the scalability, reliability and performance of the whole network rely heavily on the deterministic behavior of this interconnect.  Increasing network agility and lower equipment prices are causing severe disruption in the networking industry.

A key SDN/NFV implementation issue is how to disaggregate network functions in a given network element (equipment type).  With such functions modularized, they could be implemented in different types of equipment along with dedicated functions (e.g. PHYs to connect to wire-line or wireless networks).  The equipment designer needs to: disaggregate, virtualize, interconnect, orchestrate and manage such network functions.

“Functional coordination and control plane acceleration are the keys to successful SDN deployments,” he said.  Not coincidentally, the LSI/Avago Axxia multicore communication processor family (using an ARM CPU core) is being positioned for SDN and NFV acceleration, according to the company’s website. Other important points made by Raghu:

  • Scale poses many challenges for state management and traffic engineering
  • Traffic Management and Load Balancing are important functions
  • SDN/NFV backbone network components are needed
  • Disaggregated architectures will prevail.
  • Circuit board interconnect (backplane) designs should weigh the traditional passive backplane vs. an active switch fabric.

The Axxia 5516 16-core communications processor was suggested as the SoC to use for a SDN/NFV backbone network interface.  Functions identified included: Ethernet switching, protocol pre-processing, packet classification (QoS), traffic rate shaping, encryption, security, Precision Time Protocol (IEEE 1588) to synchronize distributed clocks, etc.
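
Since IEEE 1588 (PTP) appears in that list, here is the textbook offset/delay calculation from a PTP two-way timestamp exchange, assuming a symmetric path; the timestamps below are made-up numbers, purely for illustration.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 exchange: t1 = master sends Sync, t2 = slave receives it,
       t3 = slave sends Delay_Req, t4 = master receives it.
       Assumes the path delay is symmetric in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # one-way path delay
    return offset, delay

if __name__ == "__main__":
    # Hypothetical timestamps in microseconds
    print(ptp_offset_and_delay(t1=1000.0, t2=1015.0, t3=1100.0, t4=1105.0))
    # -> (5.0, 10.0): slave is 5 us ahead of master; one-way delay is 10 us
```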

Axxia’s multi-core SoCs were said to contain various programmable function accelerators to offer a scalable data and control plane solution.

Note:  Avago recently acquired semiconductor companies LSI Corp. and PLX Technology, but has now sold its Axxia Networking Business (originally from LSI which had acquired Agere in 2007 for $4 billion) to Intel for only $650 million in cash.  Agere Systems (which was formerly AT&T Micro-electronics- at one time the largest captive semiconductor maker in the U.S.) had a market capitalization of about $48 billion when it was spun off from Lucent Technologies in Dec 2000.

  2. Applicability of Open Flow based connectivity in NFV Enabled Networks, by Srinivasa Addepalli, Fellow and Chief Software Architect, Freescale (Invited Talk)

Mr. Addepalli’s presentation addressed the performance challenges in VMMs (Virtual Machine Monitors) and the opportunities to offload VMM packet processing using SoCs like those from Freescale (another ARM core based SoC).   The VMM layer enables virtualization of networking hardware and exposes each virtual hardware element to VMs.

“Virtualization of network elements reduces operation and capital expenses and provides the ability for operators to offer new network services faster and to scale those services based on demand. Throughput, connection rate, low latency and low jitter are a few important challenges in the virtualization world. If not designed well, processing power requirements go up, thereby reducing the cost benefits,” according to Addepalli.

He positioned Open Flow as a communication protocol between control/offload layers, rather than the ONF’s API/protocol between the control and data planes (residing in the same or different equipment, respectively).  A new role for Open Flow in VMM and vNF (Virtual Network Function) offloads was described and illustrated.
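
To make the match-action idea concrete, here is a toy flow table of the sort an OpenFlow-style controller populates and a data plane consults per packet. This is a conceptual sketch in Python, not the OpenFlow wire protocol or Freescale’s offload design; the field names and actions are assumptions.

```python
class FlowTable:
    """Toy OpenFlow-style table: a controller installs match/action rules,
       the data plane looks them up per packet."""
    def __init__(self):
        self.rules = []   # list of (match_dict, action, priority)

    def install(self, match, action, priority=0):
        self.rules.append((match, action, priority))
        self.rules.sort(key=lambda rule: -rule[2])   # highest priority first

    def lookup(self, packet):
        for match, action, _ in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"                  # table miss

if __name__ == "__main__":
    table = FlowTable()
    table.install({"dst_ip": "10.1.0.5"}, "output:port2", priority=10)
    table.install({"eth_type": 0x0806}, "flood", priority=5)          # ARP
    print(table.lookup({"dst_ip": "10.1.0.5", "eth_type": 0x0800}))   # -> output:port2
    print(table.lookup({"dst_ip": "192.0.2.9"}))                      # -> send_to_controller
```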

The applicability of OpenFlow to NFV (see Note 1 below) faces two challenges, according to Mr. Addepalli:

  1. VMM networking
  2. Virtual network data path to VMs

Note 1.  The ETSI NFV Industry Specification Group (ISG) is not considering the use of ONF’s Open Flow, or any other protocol, for NFV at this time.  Its work scope includes reference architectures and functional requirements, but not protocol/interface specifications.  The ETSI NFV ISG will reach the end of Phase 1 by December 2014, with the publication of the remaining sixteen deliverables.

“To be successful, NFV must address performance challenges, which can best be achieved with silicon solutions,” Srinivasa concluded.   [The problem with that statement is that the protocols/interfaces to be used for fully standardized NFV have not been specified by ETSI or any standards body.  Hence, no one knows the exact combination of NFV functions that have to perform well.]

  3. The Impact of ARM in Cloud and Networking Infrastructure, by Bob Monkman, Networking Segment Marketing Manager at ARM Ltd.

Bob revealed that ARM is innovating way beyond the CPU core it’s been licensing for years.  There are hardware accelerators, a cache coherent network and various types of network interconnects that have been combined into a single silicon block, shown in the figure below:

Image courtesy of ARM - innovating beyond the core.
Image courtesy of ARM

Bob said something I thought was quite profound and dispels the notion that ARM is just a low power, core CPU cell producer: “It’s not just about a low power processor – it’s what you put around it.”  As a result, ARM cores are being included in SoC vendor silicon for both  networking and storage components. Those SoC companies, including LSI/Avago Axxia  and Freescale (see above), can leverage their existing IP by adding their own cell designs for specialized networking hardware functions (identified at the end of this article in the Addendum).

Bob noted that the ARM ecosystem is conducive to the disruption now being experienced in the IT industry, with software control of so many types of equipment.  The evolving network infrastructure (SDN, NFV and other Open Networking) is all about reducing total cost of ownership and enabling new services with smart and adaptable building blocks.  That’s depicted in the following illustration:

Evolving infrastructure is reducing costs and enabling new services.
Image courtesy of ARM.

Bob stated that one SoC size does not fit all.  For example, one type of SoC can contain: a high performance CPU, power management, premises networking, and storage & I/O building blocks.  One for SDN/NFV might instead include: a high performance CPU, power management, I/O including wide area networking interfaces, and specialized hardware networking functions.

Monkman articulated very well what most already know: networking and server equipment are often being combined in a single box (they’re “colliding,” he said).  [In many cases, compute servers are running network virtualization (i.e. VMware), acceleration, packet pre-processing, and/or control plane software (SDN model).]  Flexible intelligence is required on an end-to-end basis for this to work out well.  The ARM business model was said to enable innovation and differentiation, especially since the ARM CPU core has reached the 64 bit “inflection point.”

ARM is working closely with the Linaro Networking and Enterprise Groups. Linaro is a non-profit industry group creating open source software that runs on ARM CPU cores.  Member companies fund Linaro and provide half of its engineering resources as assignees who work full time on Linaro projects. These assignees combined with over 100 of Linaro’s own engineers create a team of over 200 software developers.

Bob said that Linaro is creating an optimized, open-source platform software for scalable infrastructure (server, network & storage).  It coordinates and multiplies members’ efforts, while accelerating product time to market (TTM).  Linaro open source software enables ARM partners (licensees of ARM cores) to focus on innovation and differentiated value-add functionality in their SoC offerings.

Author’s Note:  The Linaro Networking Group (LNG) is an autonomous segment focused group that is responsible for engineering development in the networking space. The current mix of LNG engineering activities includes:

  • Virtualization support with considerations for real-time performance, I/O optimization, robustness and heterogeneous operating environments on multi-core SoCs.
  • Real-time operations and the Linux kernel optimizations for the control and data plane
  • Packet processing optimizations that maximize performance and minimize latency in data flows through the network.
  • Dealing with legacy software and mixed-endian issues prevalent in the networking space
  • Power Management
  • Data Plane Programming API:

For more information: https://wiki.linaro.org/LNG


OpenDataPlane (ODP) http://www.opendataplane.org/ was described by Bob as a “truly cross-platform, truly open-source and open contribution interface.” From the ODP website:

ODP embraces and extends existing proprietary, optimized vendor-specific hardware blocks and software libraries to provide inter-operability with minimal overhead. Initially defined by members of the Linaro Networking Group (LNG), this project is open to contributions from all individuals and companies who share an interest in promoting a standard set of APIs to be used across the full range of network processor architectures available.

Author’s Note:   There’s a similar project from Intel called DPDK, or Data Plane Development Kit, that an audience member referenced during Q&A. We wonder if those APIs are viable alternatives to, or can be used in conjunction with, the ONF’s OpenFlow API.


Next Generation Virtual Network Software Platforms, along with network operator benefits, are illustrated in the following graphic:

An image depicting the Next-Gen virtualized network software platforms.
Image courtesy of ARM.

Bob Monkman’s Summary:

  • Driven by total cost of ownership, the data center workload shift is leading to  more optimized and diverse silicon solutions
  • Network infrastructure is also well suited for the same highly integrated, optimized and scalable solutions ARM’s SoC partners understand and are positioned to deliver
  • Collaborative business model supports “one size does not fit all approach,” rapid pace of innovation, choice and diversity
  • Software ecosystem (e.g. Linaro open source) is developing quickly to support target markets
  • ARM ecosystem is leveraging standards and open source software to accelerate deployment readiness

Addendum:

In a post conference email exchange, I suggested several specific networking hardware functions that might be implemented in a SoC (with one or more ARM CPU cores).  Those include: encryption, packet classification, Deep Packet Inspection, security functions, an intra-chip or inter-card interface/fabric, fault & performance monitoring, and error counters.

Bob replied: “Yes, security acceleration such as SSL operations; counters of various sorts -yes; less common on the fault notification and performance monitoring. A recent example is found in the Mingoa acquisition, see: http://www.microsemi.com/company/acquisitions ”

…………………………………………………………………….

End NOTE:  Stay tuned for Part II which will cover Infonetics’ Michael Howard’s presentation on Hardware and market trends for SDN/NFV.

2014 Hot Interconnects Highlight: Achieving Scale & Programmability in Google's Software Defined Data Center WAN

Introduction:

Amin Vahdat, PhD & Distinguished Engineer and Lead Network Architect at Google, delivered the opening keynote at 2014 Hot Interconnects, held August 26-27 in Mountain View, CA. His talk presented an overview of the design and architectural requirements to bring Google’s shared infrastructure services to external customers with the Google Cloud Platform.

The wide area network underpins storage, distributed computing, and security in the Cloud, which is appealing for a variety of reasons:

  • On demand access to compute servers and storage
  • Easier operational model than premises based networks
  • Much greater up-time, i.e. five 9’s reliability; fast failure recovery without human intervention, etc
  • State of the art infrastructure services, e.g. DDoS prevention, load balancing, storage, complex event & stream processing, specialised data aggregation, etc
  • Different programming models unavailable elsewhere, e.g. low latency, massive IOPS, etc
  • New capabilities; not just delivering old/legacy applications cheaper

Andromeda- more than a galaxy in space:

Andromeda – Google’s code name for their managed virtual network infrastructure- is the enabler of Google’s cloud platform which provides many services to simultaneous end users. Andromeda provides Google’s customers/end users with robust performance, low latency and security services that are as good or better than private, premises based networks. Google has long focused on shared infrastructure among multiple internal customers and services, and in delivering scalable, highly efficient services to a global population.

An image of Google's Andromeda Controller diagram.
Image courtesy of Google

“Google’s (network) infra-structure services run on a shared network,” Vahdat said. “They provide the illusion of individual customers/end users running their own network, with high-speed interconnections, their own IP address space and Virtual Machines (VMs),” he added.  [Google has been running shared infrastructure since at least 2002 and it has been the basis for many commonly used scalable open-source technologies.]

From Google’s blog:

“Andromeda’s goal is to expose the raw performance of the underlying network while simultaneously exposing network function virtualization (NFV). We expose the same in-network processing that enables our internal services to scale while remaining extensible and isolated to end users. This functionality includes distributed denial of service (DDoS) protection, transparent service load balancing, access control lists, and firewalls. We do this all while improving performance, with more enhancements coming. Hence, Andromeda itself is not a Cloud Platform networking product; rather, it is the basis for delivering Cloud Platform networking services with high performance, availability, isolation, and security.”

Google uses its own versions of SDN and NFV to orchestrate provisioning, high availability, and to meet or exceed application performance requirements for Andromeda. The technology must be distributed throughout the network, which is only as strong as its weakest link, according to Amin.  “SDN” (Software Defined Networking) is the underlying mechanism for Andromeda. “It controls the entire hardware/software stack, QoS, latency, fault tolerance, etc.”

“SDN’s” fundamental premise is the separation of the control plane from the data plane; Google and everyone else agree on that, but not much else!  Amin said the role of “SDN” is overall co-ordination and orchestration of network functions. It permits independent evolution of the control and data planes. Functions identified under SDN supervision were the following:

  • High performance IT and network elements: NICs, packet processors, fabric switches, top of rack switches, software, storage, etc.
  • Audit correctness (of all network and compute functions performed)
  • Provisioning with end to end QoS and SLA’s
  • Ensuring high availability (and reliability)

“SDN” in Andromeda–Observations and Explanations:

“A logically centralized hierarchical control plane beats peer-to-peer (control plane) every time,” Amin said. Packet/frame forwarding in the data plane can run at network link speed, while the control plane can be implemented in commodity hardware (servers or bare metal switches), with scaling as needed. The control plane requires 1% of the overhead of the entire network, he added.

As expected, Vahdat did not reveal any of the APIs/protocols/interface specs that Google uses for its version of “SDN”; in particular, he did not reveal the API between the control and data plane (Google has never endorsed the ONF-specified OpenFlow v1.3). He also didn’t detail how the logically centralized, but likely geographically distributed, control plane works.

Amin said that Google was making “extensive use of NFV (Network Function Virtualization) to virtualize SDN.” Andromeda NFV functions, illustrated in the above block diagram, include: Load balancing, DoS, ACLs, and VPN. New challenges for NFV include: fault isolation, security, DoS, virtual IP networks, mapping external services into name spaces and balanced virtual systems.

Managing the Andromeda infrastructure requires new tools and skills, Vahdat noted. “It turns out that running a hundred or a thousand servers is a very difficult operation. You can’t hire people out of college who know how to operate a hundred or a thousand servers,” Amin said. Tools are often designed for homogeneous environments and individual systems. Human reaction time is too slow to deliver “five nines” of uptime, maintenance outages are unacceptable, and the network becomes a bottleneck and source of outages.

Power and cooling are the major costs of a global data center and networking infrastructure like Google’s. “That’s true of even your laptop at home if you’re running it 24/7. At Google’s mammoth scale, that’s very apparent,” Vahdat said.

Applications require real-time high performance and low-latency communications to virtual machines. Google delivers those capabilities via its own Content Delivery Network (CDN).  Google uses the term “cluster networking” to describe huge switch/routers which are purpose-built out of cost efficient building blocks.

In addition to high performance and low latency, users may also require service chaining and load-balancing, along with extensibility (the capability to increase or reduce the number of servers available to applications as demand requires). Security is also a huge requirement. “Large companies are constantly under attack. It’s not a question of whether you’re under attack but how big is the attack,” Vahdat said.

[“Security will never be the same again. It’s a losing battle,” said Martin Casado, PhD during his Cloud Innovation Summit keynote on March 27, 2014]

Google has a global infrastructure, with data centers and points of presence worldwide to provide low-latency access to services locally, rather than requiring customers to access a single point of presence. Google’s software defined WAN (backbone private network) was one of the first networks to use “SDN.” In operation for almost three years, it is larger and growing faster than Google’s customer-facing Internet connectivity; traffic between Google’s cloud resident data centers is comparable to the data traffic within a premises based data center, according to Vahdat.

Note 1.   Please refer to this article: Google’s largest internal network interconnects its Data Centers using Software Defined Network (SDN) in the WAN

“SDN” opportunities and challenges include:

  • Logically centralized network management- a shift from fully decentralized, box to box communications
  • High performance and reliable distributed control
  • Eliminate one-off protocols (not explained)
  • Definition of an API that will deliver NFV as a service

Cloud Caveats:

While Vahdat believes in the potential and power of cloud computing, he says that moving to the cloud (from premises based data centers) still poses all the challenges of running an IT infrastructure. “Most cloud customers, if you poll them, say the operational overhead of running on the cloud is as hard or harder today than running on your own infrastructure,” Vahdat said.

“In the future, cloud computing will require high bandwidth, low latency pipes.” Amin cited a “law” this author had never heard of: “1 Mbit/sec of I/O is required for every 1 MHz of CPU processing (computations).” In addition, the cloud must provide rapid network provisioning and very high availability, he added.
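
Taking that rule of thumb at face value, the arithmetic scales up quickly; the server configuration below is hypothetical, just to show what the “law” implies.

```python
def required_io_gbps(cores, clock_ghz, mbps_per_mhz=1.0):
    """Balanced-system rule of thumb: ~1 Mbit/s of I/O per 1 MHz of CPU."""
    total_mhz = cores * clock_ghz * 1000.0      # aggregate clock rate in MHz
    return total_mhz * mbps_per_mhz / 1000.0    # convert Mbit/s to Gbit/s

if __name__ == "__main__":
    # Hypothetical 32-core, 2.5 GHz server
    print(required_io_gbps(cores=32, clock_ghz=2.5))   # -> 80.0 Gbit/s of I/O to stay balanced
```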

Network switch silicon and NPUs should focus on:

  • Hardware/software support for more efficient read/write of switch state
  • Increasing port density
  • Higher bandwidth per chip
  • NPUs must provide much greater than twice the performance for the same functionality as general purpose microprocessors and switch silicon.

Note: Two case studies were presented which are beyond the scope of this article to review.  Please refer to a related article on 2014 Hot Interconnects Death of the God Box

Vahdat’s Summary:

Google is leveraging its decade plus experience in delivering high performance shared IT infrastructure in its Andromeda network.  Logically centralized “SDN” is used to control and orchestrate all network and computing elements, including: VMs, virtual (soft) switches, NICs, switch fabrics, packet processors, cluster routers, etc.  Elements of NFV are also being used with more expected in the future.

References:

http://googlecloudplatform.blogspot.com/2014/04/enter-andromeda-zone-google-cloud-platforms-latest-networking-stack.html

https://www.youtube.com/watch?v=wpin6GKpDm8

http://gigaom.com/2014/04/02/google-launches-andromeda-a-software-defined-network-underlying-its-cloud/

http://virtualizationreview.com/articles/2014/04/03/google-andromeda.aspx

http://community.comsoc.org/blogs/alanweissberger/martin-casado-how-hypervisor-can-become-horizontal-security-layer-data-center

http://www.convergedigest.com/2014/03/ons-2014-google-keynote-software.html

https://www.youtube.com/watch?v=n4gOZrUwWmc

http://cseweb.ucsd.edu/~vahdat/

Addendum:  Amdahl’s Law

In a post conference email to this author, Amin wrote:

Here are a couple of references for Amdahl’s “law” on balanced system design:

Both essentially argue that for modern parallel computation, we need a fair amount of network I/O to keep the CPU busy (rather than stalled waiting for I/O to complete).
Most distributed computations today substantially under provision IO, largely because of significant inefficiency in the network software stack (RPC, TCP, IP, etc.) as well as the expense/complexity of building high performance network interconnects.  Cloud infrastructure has the potential to deliver balanced system infrastructure even for large-scale distributed computation.

Thanks, Amin

Network Neutrality is Dead: Netflix deal with AT&T; VZ Throttling- FCC?

Netflix announced Tuesday that it had agreed to pay AT&T for a direct “peering” connection to AT&T’s network. The two companies arranged the deal this past May and have been working since then to connect their respective networks.

AT&T had been pressing Netflix to pay for an upgraded connection between their networks since at least March when Netflix asked for a free peering arrangement.

“We reached an interconnect agreement with Netflix in May and since then have been working together to provision additional interconnect capacity to improve the viewing experience for our mutual subscribers,” an AT&T spokeswoman said in a statement.

“We’re now beginning to turn up the connections, a process that should be complete in the coming days,” Netflix spokeswoman Anne Marie Squeo said.

AT&T and other broadband ISPs believe that Netflix should bear the cost for the recent surge in video traffic. Netflix replied that broadband providers should be responsible for making sure that their subscribers get the content they are viewing online in a reliable and consistent manner.

Netflix, as well as Google’s YouTube, provides a regularly updated video quality report that ranks the speed of ISPs.  AT&T’s services have typically ranked low on Netflix’s list, although Verizon’s services haven’t fared much better since it signed a peering agreement with Netflix.

[Internet traffic flows between different networks generally in one of two ways, through transit, in which a smaller network passes its traffic through a larger one to connect to the broader Internet; and peering, in which large networks connect with each other. Traditionally, smaller networks paid larger ones for transit services, but peering didn’t require any kind of payment from one company to another. Instead, both networks are responsible for their own costs of interconnecting.]

When there was still some notion of network neutrality, interconnection deals between content providers and network providers/ISPs were settlement free.  But now, broadband providers require fees to be paid to them. Some fear that moves like this could pose a serious problem for startup content providers (which may not be able to pay those fees), especially if the content they provide is consumed by many users over the Internet.  That negates the concept of network neutrality.

Netflix has protested the move to these so-called “paid peering” arrangements.  The company has noted that it is sending to ISPs’ customers only the data they are demanding in the form of streamed movies.  Furthermore, it has argued that the only reason ISPs can demand to be paid for peering is because of the limited competition for broadband access.

Despite its protests, Netflix has now announced three paid peering relationships. Earlier this year, it inked deals with both Verizon and Comcast. If AT&T were to block or throttle Netflix, many of its customers would have no other place to turn for Internet access. The company has called on the FCC to ban paid peering arrangements as part of a broader move to curb ISP practices as part of a new, stronger net neutrality policy.

Net neutrality advocacy groups are pushing  the FCC to intervene in these situations and stop broadband providers from asking for interconnection fees.  Yet the FCC has done nothing and hasn’t even finalized its new rules for net neutrality.

John Naughton wrote in The Observer:

“The principle that all bits traversing the network should be treated equally was a key feature of the internet’s original design. It was also one of the reasons why the internet became such an enabler of disruptive innovation. Net neutrality meant that the bits generated by a smart but unknown programmer’s application, for instance the web, file-sharing, Skype and Facebook, would be treated the same as bit streams emanating from a giant corporation.  Neutrality kept the barrier to entry low.”


Slowing down or “throttling” Internet traffic has been another contentious issue for the FCC.  In a July 25th letter to Verizon CEO Dan Mead, FCC Chairman Tom Wheeler voiced his objections to plans Verizon announced last week to begin throttling LTE customers on unlimited plans that use an exorbitant amount of data. Verizon said the change will only affect about 5 percent of its users and it is being done in the name of network management.

Chairman Wheeler took issue with how Verizon described the issue, stating that he believes Verizon may be misinterpreting the FCC’s rules on network management.

“Reasonable network management concerns the technical management of your network; it is not a loophole designed to enhance your revenue streams,” Wheeler wrote, saying that it is “disturbing to me that Verizon Wireless would base its network management on distinctions among its customers’ data plans, rather than on network architecture or technology.”

Wheeler noted that legitimate network management purposes could include:

“Ensuring network security and integrity, including by addressing traffic that is harmful to the network; addressing traffic that is unwanted by end users (including by premise operators), such as by providing services or capabilities consistent with an end user’s choices regarding parental controls or security capabilities; and reducing or mitigating the effects of congestion on the network.”

Opinion: 

We wonder what enforcement power the FCC has to stop wireless (or wire-line) carriers from throttling users’ data throughput.   If a subscriber has an unlimited data plan, why aren’t they notified about throttling before they sign up, and again when they are coming close to the usage threshold that triggers throttling?

It seems the Wheeler-led FCC is prepared to let the industry settle these contentious issues and play a very passive role.  That’s not what a communications regulator should do, IMHO!

FCC Acts to Improve Rural Broadband Service with $100M Fund- Census Blocks Released

Roughly 10% of the U.S., mostly in remote rural areas, is eligible to take advantage of $100 million the Federal Communications Commission is allocating for improvements in rural broadband service. The project gets its money from the FCC’s $4.5 billion Connect America Fund. The commission is providing bidding incentives for proposals that exclusively serve tribal lands.

The FCC this week released a list of the U.S. Census blocks—roughly the size of a city block [half are smaller than a tenth of a square mile, while the largest is greater than 8,500 square miles]—that would qualify for a piece of the $100 million fund the agency created earlier this month. The objective is to get broadband service to unserved and under-served populations, which are primarily rural. The commission’s map of the eligible areas has been updated with more detailed views.

$75 million is dedicated to testing the construction of networks that provide 25 Mbps down and 5 Mbps up, and another $15 million will go “to test interest in delivering service at 10:1 speeds in high-cost areas,” defined as those where the monthly cost per location of providing service is between $52.50 and $207.81. The third set of funds comprises $10 million for 10/1 service “in areas that are extremely costly to serve,” or those where service of at least 3 Mbps down and 768 kbps up is unavailable and the monthly cost per location would exceed $207.81.

More than 1,000 entities have expressed interest in the projects, including utilities, wireless operators, and CLEC affiliates of local telcos, according to the FCC.

The Utilities Telecom Council Inc. applauded the FCC for including utilities.

“By encouraging utilities and others to provide broadband that is robust, affordable and reliable, the FCC is creating new opportunities to promote economic growth and expanded access to health and safety, education, and essential services in our rural communities,” said Connie Durcsak, President and CEO of UTC, in a statement.

Read more at:

http://www.tvtechnology.com/news/0086/rural-broadband-areas-defined-for-fcc–million-experiment-fund/271527

http://www.lightreading.com/regulation/fcc-commits-$100m-to-rural-experiments/d/d-id/709930

Abolishing the Fear of Failure- Do What You're Afraid to Do!

Introduction:

On June 26th, Stanford Education & Psychology Professor John D. Krumboltz told a sold out Commonwealth Club-Silicon Valley audience to “stop being afraid of failure. Learn to accept and even enjoy it,” he said in his opening remarks. Prof Krumboltz is the co-author of the book, Fail Fast, Fail Often- How Losing Can Help You Win.  It was Oprah Winfrey’s favorite book for 2014.

The highlights of his short talk, along with answers to a few questions from the moderator (Alison van Diggelen) are summarized in this article. The points noted are relevant to start-up companies, entrepreneurs, job seekers, and people of all ages in managing their personal lives.

Key Messages from Prof. Krumboltz:

  • Don’t be afraid to fail. It’s how you feel about failure that matters.
  • Do what you are afraid of. Don’t let someone talk you out of doing that which you are afraid of.
  • Make failure a low cost proposition.
  • Learn from failures as well as successes.
  • If you learn something from trying, you did not fail.
  • What if you are rejected or criticized for failing? “People that reject or criticize you are NOT those you’d want to be with anyway.”
  • Common goals we should strive for:
    • Help others so that they don’t fail. [A film clip of Portland Trailblazers coach Mo Cheeks helping a 12 year old girl who had temporarily forgotten the words to the National Anthem was cited as an example of this.]
    • Learn to create more meaningful, satisfying lives. [Happy and successful people spend less time planning and more time acting. They get out into the world, try new things, and make mistakes, and in doing so, they benefit from unexpected experiences and opportunities.]
    • Listen to others’ concerns, then help them take the appropriate action(s). Recognize that people often need your help and, with it, they can be successful.
    • Get people to take actions likely to generate unplanned opportunities.
    • Do what you fear and you will overcome that fear. Don’t just talk or read about it. Do it! “If it’s to be, it’s up to me,” he said.
    • Overcome unrealistic fears. [Prof. didn’t say how to do this]

Fears of Taking Risks when looking for a job:

  • Applicant is perceived to be over qualified.
  • If I fail (to get the job), others might refuse to talk to me.
  • Even if fear comes true (and you don’t get the job), you are no worse off.

Conclusions:

Try to get over procrastination or avoidance of doing what you fear. Instead, go out and try it. Don’t be afraid of making mistakes. Focus on opportunities, not problems.

Selected Q & A:

1.  When is it OK to “call it quits” after numerous failures?

Answer: It’s OK to quit, but then try something else. Learn from your failure(s).

2.  When is it too soon to try again after a huge failure? [The question actually asked: “When is it too soon to get back on the horse after you’ve fallen off it several times?”].

The Professor didn’t answer this question. Instead, he recounted a horseback riding incident from his youth. He also reminisced about asking a 15 year old girl for a kiss when he was 14. That was a big risk for him at the time. She said that if they were to kiss, she was afraid they’d never stop. Even though they didn’t kiss, the young John felt joyful and cherished that moment for the rest of his life.

Excerpt from the book Fail Fast- Fail Often:

Failing quickly in order to learn fast—or what Silicon Valley entrepreneurs commonly call failing forward—is at the heart of many innovative businesses. The idea is to push ahead with a product as soon as possible to gather feedback and learn about opportunities and constraints so that you can take the next step.

This mind-set is at the heart of the brilliant work of Pixar Animation Studios. When Ed Catmull, the cofounder and president of Pixar, describes Pixar’s creative work, he says it involves a process of going from “suck” to “non-suck.” The moviemaking process begins with rough story boards where a few good ideas are buried amid tons of half-baked concepts and outright stinkers. The animation team then works its way through thousands of corrections and revisions before they arrive at a final cut. By giving themselves permission to fail again and again, animators weed out the bad ideas as quickly as possible and get to the place where real work can occur.

As Andrew Stanton, the director of Finding Nemo and WALL-E , describes, “My strategy has always been: Be wrong as fast as we can. Which basically means, we’re gonna screw up, let’s just admit that. Let’s not be afraid of that. But let’s do it as fast as we can so we can get to the answer. You can’t get to adulthood before you go through puberty. I won’t get it right the first time, but I will get it wrong really soon, really quickly.”

Giving yourself permission to make a mess of things is particularly important if you do any sort of creative work. (We should note that all people are creative—which is to say that they live in the real world, form ideas, come up with solutions to problems, have dreams, and forge their own path; your own life is your ultimate creation.)

Broadband TV Conference- Part 3: The Problem and Solution for WiFi Delivered Video Content -Why Can't We Watch Any Content, on Any Device in Any Room in the Home?

Introduction and Backgrounder:

The number of mobile devices in the home is exploding. Most “Pay TV” operators (like Comcast Xfinity, Verizon FioS, and AT&T U-Verse) are supporting multiple screen viewing as part of their “TV Everywhere” services. The content is mostly OTT VoD, video clips, or real time sporting events available by subscription (e.g. MLB.TV, NHL.com or ESPN3) that’s played on mobile devices, gaming consoles and even connected TVs.

In almost all cases, the in-home WiFi network delivers the streaming video content to the “second screen.”   Mobile devices will not likely use 3G/4G wireless access to watch videos, because that would consume a good chunk of the wireless subscriber’s monthly data plan.  Some second screens, like the Kindle Fire and iPod Touch, only use WiFi for wireless communications.  Furthermore, there is no charge for WiFi home video distribution (other than the OTT subscriptions the user has with the video streaming provider, e.g. MLB.TV, Netflix, Amazon Prime, Hulu+, Apple TV, etc).

Note 1:  The U-verse Wireless Receiver is a wireless STB which is connected to the TV using an HDMI, component, composite or coaxial cable. It uses the WiFi home network to connect to a WiFi Access Point (AP) that plugs into the U-verse Residential Gateway via an Ethernet cable. The WiFi AP is also a “video bridge,” in that it extracts the TV content (SD/HD/apps) from the Residential Gateway, decodes it into the correct format, and delivers that content wirelessly over the in-home Wi-Fi network to the U-verse Wireless Receiver which plugs into the TV. The quality of SD/HDTV videos is expected to be a lot better than OTT video streaming, so it would be adversely affected by any WiFi home network performance degradation.

Most Wi-Fi home network implementations are optimized for best effort, peak data rate streaming. However, video is very sensitive to packet loss, latency and jitter, which results in artifacts on the consumers’ second screens (How many times have you noticed the OTT video picture freezing or sharply degrading in quality? Or loss of lip sync?). In addition, whole-home WiFi coverage and a consistent signal become mandatory for a good “user quality of experience.” Consumers will generally have their mobile devices, notebook PCs, STBs and TVs located in various nooks and corners of the home. They expect consistent video and audio quality whenever they’re watching videos on any screen in the home (or even in the back yard).

In addition to OTT streaming via WiFi in the home to notebook PCs and mobile devices, WiFi is sometimes used for delivering broadcast and on demand pay TV content. For example AT&T offers a “Wireless U-Verse receiver” for watching SD and HD TV plus apps that are included in the residential subscriber’s U-Verse TV package or bundle.1


Fundamental Problem with WiFi Delivery of Video Content in the Home:

Consumers have been led to believe they can watch any video content on any TV/device, in any room of the home. AT&T has been advertising this claim repeatedly in its TV commercials for U-Verse TV (Have you seen the one where the Dallas Mavericks’ Mark Cuban invites players into his house to watch live basketball games on his tablet?). Google reports that 77% of consumers use mobile devices while watching TV each day. Touch screen mobile devices were said to have superior User Interfaces (UIs) for search and socializing. Therefore, many people use them for watching and sharing videos while at home.

Ideally, video reception quality should not vary much depending on location in the home, but it does. AirTies claims the user experience is not nearly up to expectations when watching WiFi delivered video content within the home.  They say the primary bottleneck is poor WiFi performance – even with the latest IEEE 802.11ac silicon in the sending/ receiving WiFi enabled equipment/devices.

AirTies Presentation Overview:

Ozgur Yildirim, Vice President and General Manager of AirTies’ North America Business Unit, discussed this topic during his Broadband TV session on June 4, 2014.

Ozgur’s excellent presentation included actual measurements in a typical home. He also discussed network-level limitations of WiFi, including: range performance, the capacity impact of mobile devices, interference from neighbors, and streaming from a DVR to a second TV. Finally, Ozgur presented a WiFi mesh home network solution to these problems, inclusive of range extenders/boosters and other WiFi network enhancements. AirTies currently sells such a home network to Service Provider customers in Europe (see the Comment and Analysis section below for further details on AirTies).

The primary problem with WiFi distribution of video and audio content is that it’s difficult for the WiFi signal to penetrate walls or reach corners within a typical home. That was supposed to be fixed with IEEE 802.11n and now 802.11ac, but not according to Mr. Yildirim.  Here’s why:

  1. In conventional WiFi, all wireless traffic to/from the Internet or between clients goes over a single WiFi Access Point (AP) which is embedded in a WiFi router, Video Bridge, or Residential Gateway. For “n” devices in the home, there are “n” point-to-point wireless links to the WiFi AP, which creates a star topology.
  2. WiFi capacity degrades logarithmically over distance and through walls (RF signals at 5 GHz, used by 802.11ac, are prone to absorption by walls, which effectively reduces signal levels, i.e. results in a lower S/N ratio at the receiver).
  3. The slowest WiFi link pulls down the entire WiFi network capacity, which is shared amongst all the devices accessing that wireless network. Therefore, there is less effective bandwidth to distribute to mobile devices and personal digital recorders within the home as you add/use slower devices.
  4. Your neighbor’s WiFi signal was said to “consume air time,” which is something we hadn’t heard before! Ozgur provided this explanation via email after the conference:

“WiFi uses “Carrier Sense Multiple Access” (CSMA) – only one user can transmit at any one time, while others must wait. Since they all ‘share’ time and bandwidth this way, one ‘bad apple’ device taking too long will hurt all others. ‘Airtime’ is also shared with neighbors on the same channel. There are only three channels in 2.4 GHz – if you have more than two neighbors with WiFi home networks you share channels with them.”
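
A toy airtime model reproduces the “bad apple” effect Ozgur describes: when every client must move the same amount of data and only one can transmit at a time, aggregate throughput collapses toward the harmonic mean of the per-client link rates (the classic WiFi “performance anomaly”). The link rates below are illustrative assumptions, not AirTies’ measurements.

```python
def aggregate_throughput(phy_rates_mbps):
    """If every client moves the same amount of data and only one transmits
       at a time (CSMA), aggregate throughput equals the harmonic mean of
       the per-client link rates."""
    n = len(phy_rates_mbps)
    return n / sum(1.0 / rate for rate in phy_rates_mbps)

if __name__ == "__main__":
    print(aggregate_throughput([800]))            # lone fast 11ac client: 800 Mb/s
    print(aggregate_throughput([800, 800, 150]))  # add one slow legacy client: ~327 Mb/s
```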


Actual Tests of WiFi Home Network Performance under Various Conditions:

In an actual wireless home networking test in Istanbul, Turkey (headquarters of AirTies), sharing the WiFi aggregate bandwidth between three devices was said to reduce aggregate bandwidth/total capacity by 65%. With a single device in the room, the WiFi capacity was measured to be 800 Mb/sec. When an iPad 4 (2X2 MIMO IEEE 802.11n), a MacBook (3X3 MIMO IEEE 802.11ac) and a bridge (3X3 MIMO IEEE 802.11ac) shared the network, the aggregate capacity dropped to 292 Mb/sec in the same room.

Ozgur said that “much worse results would be obtained if the iPad was removed from the room.”  Ozgur provided this explanation via email to clarify that last statement:

“The iPad represents the legacy ‘slow’ 802.11n client in the configuration described. It pulls down the entire network capacity, even within the same room. Recall that the single 802.11ac client got 800 Mb/sec of WiFi capacity. If we were to put two 802.11ac clients in the same room, each client would get 400 Mb/sec. But when the iPad is introduced as a legacy (802.11n) client that does not support 802.11ac, the total WiFi capacity went down to 290 Mb/sec.”

“Moving the iPad to a far location (with respect to the AP) in the home means that (relatively slow) legacy client gets significantly slower due to poor WiFi reception. This results in the iPad taking a much longer time to send packets, which means much less time is left over for faster 802.11ac clients to access the home WiFi network.”


Worse, when moving one device upstairs, the total capacity was reduced by 92%, to an effective bit rate of only 68 Mb/sec. The Wi-Fi link speed at the edge was said to be critical for performance in this case.

Almost as bad is “device-to-device” streaming performance, say from a Personal Digital Recorder/Network Attached Storage (PDR/NAS) to an iPad or other second screen.  That reduces total WiFi capacity by 40% to only 320 Mb/sec. With three devices in the same room the capacity drops to 175 Mb/sec. If the PVR (using 3X3 MIMO and 802.11ac) is moved upstairs, it drops to 38 Mb/sec.  [Remember that total WiFi capacity is shared by all devices using that wireless network.]

A Solution for Multi-Screen Video Streaming over WiFi Home Networks:

AirTies’ solution is a WiFi mesh home network, which enables streaming video to multiple screens with much better video quality. It was said to outperform conventional WiFi (with the star topology described above) by up to 10X. That WiFi mesh configuration, along with conventional WiFi, is illustrated in the figure below:

An image of AirTies Mesh network configuration.
Image courtesy of AirTies

The mesh connects each WiFi device/node to a nearby AP and routes IP packets over the best path available at the time. Mobile WiFi devices connect to the closest AP at the maximum achievable speed.
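
One simple way to pick “the best path available at the time” is a widest-path search that maximizes the bottleneck link rate between gateway and client. The sketch below, with made-up nodes and link rates, illustrates the idea only; it is not AirTies’ actual routing algorithm.

```python
import heapq

def widest_path(graph, src, dst):
    """Return (bottleneck_rate, path) maximizing the slowest link on the path.
       graph: {node: {neighbor: link_rate_mbps}}"""
    best = {src: float("inf")}
    heap = [(-float("inf"), src, [src])]
    while heap:
        neg_width, node, path = heapq.heappop(heap)
        if node == dst:
            return -neg_width, path
        for nbr, rate in graph[node].items():
            width = min(-neg_width, rate)
            if width > best.get(nbr, 0):
                best[nbr] = width
                heapq.heappush(heap, (-width, nbr, path + [nbr]))
    return 0, []

if __name__ == "__main__":
    # Hypothetical mesh: gateway, two extender APs, and a tablet
    mesh = {
        "gateway": {"ap_upstairs": 300, "ap_livingroom": 600},
        "ap_upstairs": {"tablet": 400},
        "ap_livingroom": {"tablet": 450},
        "tablet": {},
    }
    print(widest_path(mesh, "gateway", "tablet"))
    # -> (450, ['gateway', 'ap_livingroom', 'tablet'])
```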

In conclusion, Ozgur said that such a “Wireless mesh network enables an ideal user experience. You can watch any content on any device, in any room, with premium (perceived) video quality.”

Comment and Analysis:

AirTies sells their technology to OEM partners, including several European telco TV providers. One of their products is called the Air 4641- a dual pack Wireless Digital Bridge “to optimize wireless video delivery throughout the home.” They also sell other products and solutions, such as a “wireless extender” which extends a WiFi home network’s coverage range and cleans up wireless signals (i.e. increases the signal to noise ratio).

This past March at TV Connect 2014, the company demonstrated HEVC adaptive bit rate video streaming, delivered over the public Internet to STBs, with Envivio (a provider of software-based video processing and delivery solutions) and Octoshape (a leader in cloud based OTT video streaming technology).


In contrast to the WiFi mesh network solution proposed by AirTies, a WiFi semiconductor company named Quantenna Communications Inc. published a white paper in March 2013, titled “Right Wi-Fi® Technology for Multi-Media Distribution.” It details and recommends how to get the best performance from 5 GHz IEEE 802.11ac for multi-media/video distribution within the home. There’s no mention of a mesh network topology.

We thought this excerpt was especially noteworthy:

“For mobile devices, power is the most important, next is cost and lastly performance. In contrast, for whole home video distribution and general access points, higher performance connectivity with continuous error free distribution is a must. Error free video in the presence of interference cannot be compromised.”

References:

The Evolution of Wireless Home Networks, by Ece Gelal, Eren Soyak, Ozgur Yildirim of Airties

Interview with Burak Onat, AirTies Product Manager (multicast live video streaming demo with Octoshape)

End Note:  Please contact the author if you wish to pursue a consulting arrangement related to any of the topics summarized in the three Viodi View articles, or discussed at the BroadbandTV Conference last week in Santa Clara, CA.  Thanks.  alan@viodi.com

 

Broadband TV Conference Part 2: How to Measure Streaming Video Quality

Introduction:

This second article on the 2014 Broadband TV Conference summarizes a presentation by OPTICOM’s CEO on streaming video quality measurements. We think that topic will be very important for many players in the OTT streaming video and connected TV markets. In particular, we believe it’ll be quite valuable for adaptive bit rate OTT and mobile video streaming providers, in order to measure and then attempt to improve the Quality of Experience (QoE) of their customers.

Perceptual Quality Measurement of OTT Streaming Video TV Services, Michael Keyhl, CEO of OPTICOM

How do you measure streaming video quality? Very few seem to have good metrics on video Quality of Experience (QoE) for viewers, even though it impacts many participants in the OTT, SD/HD video content delivery business. The stakeholders involved in QoE for video subscribers/consumers include: content providers, OTT providers, pay TV providers (cable, satellite, telco), network operators (especially for mobile video consumption), device makers, video codec providers, and mobile app companies that include Internet video in their apps.

Michael Keyhl, CEO of OPTICOM, addressed this important topic in a very enlightening Broadband TV Conference session. Germany-based OPTICOM develops algorithms for measuring video quality and licenses that technology to test equipment, video analytics, and other OEM partner companies.

Mr. Keyhl said that existing standardized video quality measurements barely suffice when considering OTT streaming video. Fundamentally, all traditional objective testing standards are based on analyzing short video sequences of only a few seconds in length. The Mean Opinion Scores for OTT video quality measured that way are quite low (below 5). Michael said that “snapshots of 10 second videos are inadequate to assess re-buffering and long term streaming behavior.” Hence, there is a need for new types of subjective testing methods and procedures.

In an attempt to greatly improve perceptual video testing standards for streaming video services (including ABR), OPTICOM created Perceptual Evaluation of Video Quality – Streaming (PEVQ-S). It was described as an “advanced framework algorithm for full-reference picture quality analysis in video streaming environments.” The rules (but not the implementation) have been standardized by ITU-T as J.247: Objective perceptual multimedia video quality measurement in the presence of a full reference. Related follow-on work on video quality measurements is taking place in the Video Quality Experts Group (VQEG), which produces inputs to various ITU Study Groups for the recommendations they are developing.

As opposed to the lightweight “No Reference” video quality testing type, Full Reference testing is more processing intensive, but it offers the highest accuracy and is standardized by the ITU. It is based on differential analysis – comparing the degraded video signal against the reference/studio source video, which must therefore be available to the measurement system. OPTICOM’s PEVQ/ITU-T J.247 is the standard for Full Reference video quality measurement, as noted above.
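PEVQ’s internals are proprietary, but the principle of full-reference differential analysis can be illustrated with a plain per-frame PSNR comparison, which is only a crude stand-in for a perceptual model. In the sketch below, the frames are assumed to be already decoded and time-aligned, and the synthetic test data is purely illustrative.

```python
import numpy as np

def psnr(reference, degraded, max_value=255.0):
    """Peak signal-to-noise ratio between two aligned frames (higher is better)."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

def full_reference_score(reference_frames, degraded_frames):
    """Average per-frame PSNR over a clip: a crude stand-in for a perceptual
    full-reference metric such as PEVQ, which also models temporal effects."""
    return float(np.mean([psnr(r, d) for r, d in zip(reference_frames, degraded_frames)]))

# Illustrative use with synthetic frames; real input would be decoded, aligned video.
rng = np.random.default_rng(0)
ref = [rng.integers(0, 256, (720, 1280), dtype=np.uint8) for _ in range(3)]
deg = [np.clip(f.astype(np.int16) + rng.integers(-5, 6, f.shape), 0, 255).astype(np.uint8)
       for f in ref]
print(round(full_reference_score(ref, deg), 1))
```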

The different types of Adaptive Bit Rate (ABR) video streaming methods are illustrated in the chart below. As you can see, there are many combinations and permutations for video quality measurements.

A diagram showing different streaming methods.
Image courtesy of OPTICOM.

Note: In Adaptive Bit Rate (ABR) video streaming, the transmitted bit rate, resolution, and other aspects of each “media segment” vary according to the bandwidth and resources available at the client (receiving device). Video quality significantly depends on client behavior, such as negotiating the bit rate with the video server based on dynamically allocated bandwidth, the streaming protocol, and re-buffering. The Media Presentation Description (MPD) conveys the available bit rates and segment information from the server to the client.
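The client-side behavior described above can be sketched as a simple rate-selection rule: pick the highest advertised bit rate that fits the measured throughput, and back off when the playback buffer runs low. The bit-rate ladder, safety factor, and buffer threshold below are assumptions chosen for illustration; production players use more elaborate heuristics.

```python
def select_bitrate(available_kbps, measured_kbps, buffer_seconds,
                   safety=0.8, min_buffer=6.0):
    """Pick the next segment's bit rate.

    available_kbps : sorted list of bit rates advertised in the MPD
    measured_kbps  : recent download throughput estimate
    buffer_seconds : seconds of video currently buffered
    """
    # When the buffer is thin, be conservative to avoid a re-buffering stall.
    budget = measured_kbps * (safety if buffer_seconds >= min_buffer else 0.5)
    candidates = [r for r in available_kbps if r <= budget]
    return candidates[-1] if candidates else available_kbps[0]

ladder = [400, 800, 1600, 3000, 6000]          # kb/s renditions (illustrative)
print(select_bitrate(ladder, measured_kbps=2500, buffer_seconds=12))  # -> 1600
print(select_bitrate(ladder, measured_kbps=2500, buffer_seconds=3))   # -> 800
```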


OPTICOM’s PEVQ was said to be validated for many different types of video- not just OTT ABR- using subjective testing. The validated video formats were based on ITU-R recommendation BT.500 – originally named “CRT TV Quality Testing (SD)” and ITU-T Recommendation P.910 – “Multimedia (QCIF, CIF, VGA) and IPTV (HD 720/1080) Testing.”

Based on a fundamental requirement analysis to understand adaptive streaming artifacts, the design of a novel test method was described. A four layer OTT quality model was presented with these four layers (top to bottom): Presentation, Transmission, Media Stream, Content.

Michael said an OTT Video Quality Measurement technique needs to have the following characteristics/attributes:

  • be related to content quality as a reference;
  • accurately score encoding and transcoding artifacts = Media Stream Quality;
  • measure and compare picture quality across different frame sizes and frame rates = Media Stream/Transmission Quality;
  • continuously track the different bit rates and evaluate how smoothly the video player interacts with the video server in a congested network = Transmission Quality;
  • take into account the player and endpoint device characteristics as well as the viewing environment = Presentation Quality.
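One way to picture how these four layers might roll up into a single session score is sketched below; the data structure and weights are purely illustrative assumptions on our part, not OPTICOM’s model.

```python
from dataclasses import dataclass

@dataclass
class OttQualityScores:
    """Per-layer scores on a 1-5 MOS-like scale (illustrative structure only)."""
    content: float        # quality of the reference/source material
    media_stream: float   # encoding/transcoding artifacts
    transmission: float   # bit-rate switching, stalls, re-buffering
    presentation: float   # player, device and viewing-environment effects

    def overall(self, weights=(0.15, 0.35, 0.35, 0.15)):
        # Weighted average of the four layer scores; weights are assumptions.
        layers = (self.content, self.media_stream, self.transmission, self.presentation)
        return sum(w * s for w, s in zip(weights, layers))

session = OttQualityScores(content=4.8, media_stream=4.1, transmission=3.2, presentation=4.5)
print(round(session.overall(), 2))
```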

The architecture of OPTICOM’s novel approach to measuring streaming video quality was said to be able to “overcome the limitations of standardized perceptual video metrics with regard to adaptive streaming of longer video sequences, while maintaining maximum backward compatibility (and thus accuracy) with ITU-T J.341/J.247 for short term analysis.”

An end-to-end functional block diagram of streaming video source/destination measurement using PEVQ-S is shown in the illustration below. The four OTT quality layers are shown at the bottom of the figure.

Block diagram showing possible quality of service impairments from source to sink.
Image Courtesy of OPTICOM

Conclusions: 

  1. There’s a clear need for a streaming video quality measurement (VQM) technique which accurately evaluates video subscriber/consumer QoE.
  2. QoE concepts must be completely reformulated, and video quality must be reinterpreted in the context of multi-screen use scenarios.
  3. Currently, there is no standard for subjective and/or objective VQM of ABR video streaming.
  4. PEVQ-S is proposed to resolve that problem, based on advancing existing standards, while maintaining maximum backward compatibility and validated accuracy.
  5. PEVQ-S is well suited to evaluate all 4 OTT Quality Layers (from bottom to top): Content, Media Stream, Transmission, and Presentation.
  6. PEVQ-S allows for analysis of common ABR protocols and formats and various video codecs at various bit rates. It can analyze video at different frame sizes and frame rates.
  7. PEVQ-S is licensed by OPTICOM to leading OTT, middleware, and test & measurement vendors, and will soon be built into many such products. OPTICOM says they have over 100 licensed OEM customers.

OPTICOM’s Demo:

OPTICOM had a demo at the conference where they measured ABR video quality under various simulated reception conditions. We could certainly detect a difference in quality during different time periods of the stream. The quality of each video segment was measured and recorded.

We think that such measurements would be especially useful for mobile OTT video streaming, where RF reception varies depending on the wireless subscriber’s location and physical environment.

End Note:

Time and space constraints do not permit me to highlight all the excellent sessions from this two day conference. Such a complete report is possible under a consulting arrangement. Please contact the author using the form below, if interested:

Broadband TV Conference Overview & Summary of MPEG-DASH Video Streaming Standard

Introduction:

The fifth annual Broadband TV Conference, held June 3-4, 2014, in Santa Clara, CA, dealt with many key issues across a variety of subjects in commercial-free panel sessions and individual presentations. The multi-track conference covered topics such as:

  • Is Television As We Know it Sustainable?
  • The Future of Second Screen, Augmented TV and TV Apps
  • OTT Devices – Is the Dominance of the TV Fading?
  • Where is TV Everywhere? Analyzing the Business, The Rollouts, The Hype…and the Reality
  • Which Technologies Will Change Television and Connected Viewing?
  • The State of Over-the-Top Deployments – What Can We Learn From “WrestleMania”?
  • A new Video Streaming Standard and new methods to measure video quality
  • Why point-to-point/star topology WiFi (even with IEEE 802.11ac chips) is not suitable for multi-screen viewing in the home/premises

Broadband TV and multi-platform services are now rapidly redefining the television landscape, and the industry finds itself on the precipice of a massive shift in value. In particular, over the top (OTT) Internet video on demand (VoD) is being complemented by linear/real-time OTT video as well as downloaded/stored videos for later playback.

Some of the mega-trends that are driving the shift are the following:

  • Content owners have more choice in distribution (satellite, cable, telco TV, and broadband Internet via subscription or ad-supported models).
  • Advertisers are targeting consumers in ways never before possible (especially on mobile devices).
  • On-demand and binge viewing is rapidly growing in popularity (particularly on smart phones and tablets).
  • Original digital content is enabling broadband TV service providers to grow their user base and create ‘stickier’ services.
  • The broad reach of social media technologies is giving content owners new ways to interact with audiences, and consumers in turn are now able to directly influence the success or failure of programming.
  • Streaming video is not only for OTT content on second screens, but also for connected TVs and 4K TVs (which will likely first be used ONLY to view OTT content on demand).
  • OTT video streaming quality has markedly improved due to a combination of factors, which include: better video compression (HEVC and the older H.264 MPEG4 AVC), adaptive bit rate streaming (based on HTTP), CDNs (like Akamai’s) and local caching of video content, higher broadband access speeds (both wireless & wire-line).

The highlights of selected sessions are summarized in this multi-part article. Each article will deal with one session. We emphasize technology topics rather than marketing and content distribution issues.


DASH- A New Standard for OTT Video Streaming Delivery, by Will Law of Akamai

The vital importance of this new video streaming standard was emphasized by Will Law of Akamai Technologies during his opening remarks: “DASH intends to be to the Internet world … what MPEG2-TS and NTSC have been to the broadcast world.”

[Note: DASH stands for Dynamic Adaptive Streaming over HTTP]

Video/multi-media streaming over the Internet (from web-based video server to streaming client receiving device) was said to be a “feudal landscape.” There is a proliferation of standards and specs, like Adobe Flash (with or without HDS), Apple HLS, HTML5 live streaming, Microsoft’s Smooth Streaming/Silverlight, MLB.TV’s proprietary streaming methods, etc.

That may now change with DASH, according to Will.  It has the potential to harmonize the industry if the major video streaming players converge and adopt it. DASH can support a wide range of end points that receive streaming video in different formats- from 4K TVs to game players, tablets, smart phones, and other mobile devices.

MPEG-DASH is an international standard (ISO/IEC 23009) for the adaptive delivery of segmented content: “Dynamic Adaptive Streaming over HTTP.” Apple was one of many collaborators who worked together under the Moving Picture Experts Group (MPEG) to generate the DASH standard.

There are four parts to the DASH standard, ISO/IEC 23009:

  • Part 1: Media Presentation Description (MPD) and Segment Formats – Corrigendum completed; 1st Amendment in progress. The MPD is expressed as an XML file.
  • Part 2: Conformance and Reference Software (Finished 2nd study of DIS)
  • Part 3: Implementation Guidelines (Finished study of PDTR)
  • Part 4: Format Independent Segment Encryption and Authentication (FDIS)

The objectives of ISO/IEC 23009 DASH were the following:

  • Do only the necessary, avoid the unnecessary
  • Re-use what exists in terms of codecs, formats, content protection, protocols and signaling
  • Be backward-compatible (as much as possible) to enable deployments aligned with existing proprietary technologies
  • Be forward-looking to provide ability to include new codecs, media types, content protection, deployment models (ad insertion, trick modes, etc.) and other relevant (or essential) metadata
  • Enable efficient deployments for different use cases (live, VoD, time-shifted, etc.)
  • Focus on formats describing functional properties for adaptive streaming, not on protocols or end-to-end systems or implementations
  • Enable application standards and proprietary systems to create end-to-end systems based on DASH formats
  • Support deployments by conformance and reference software, implementation guidelines, etc.

The scope of the MPEG DASH specification is shown in the illustration below:

An image showing where DASH fits in the streaming ecosystem.
Image courtesy of Akamai Technologies

There are six profiles defined in ISO/IEC 23009. A profile serves as a set of restrictions on the Media Presentation Description (MPD) and segment formats, which provide the information a client needs to adaptively stream content by downloading media segments from an HTTP server. The addressing schemes supported include: segment timeline, segment template, and segment base. For more information, see Media presentation description and segment formats for DASH.
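To make the MPD and segment-template ideas concrete, here is a minimal sketch that parses a hypothetical, heavily stripped-down MPD with Python’s standard library and expands a SegmentTemplate into segment URLs. Real MPDs carry many more elements and attributes (segment durations, timescales, codecs, DRM signaling, and so on).

```python
import xml.etree.ElementTree as ET

# A hypothetical, heavily stripped-down MPD using the SegmentTemplate addressing scheme.
MPD_XML = """
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011">
  <Period>
    <AdaptationSet mimeType="video/mp4">
      <SegmentTemplate media="video_$RepresentationID$_$Number$.m4s" startNumber="1"/>
      <Representation id="720p" bandwidth="3000000"/>
      <Representation id="1080p" bandwidth="6000000"/>
    </AdaptationSet>
  </Period>
</MPD>
"""

NS = {"dash": "urn:mpeg:dash:schema:mpd:2011"}
root = ET.fromstring(MPD_XML)

for aset in root.iterfind(".//dash:AdaptationSet", NS):
    template = aset.find("dash:SegmentTemplate", NS).get("media")
    for rep in aset.iterfind("dash:Representation", NS):
        rep_id = rep.get("id")
        # Expand the template for the first three media segments of this representation.
        urls = [template.replace("$RepresentationID$", rep_id).replace("$Number$", str(n))
                for n in range(1, 4)]
        print(rep_id, rep.get("bandwidth"), urls)
```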

The important market benefits of MPEG DASH were said to be:

  • Independent ISO standard – not owned by any one company
  • Multi-language/multi-format late-binding audio
  • Common encryption
  • Templated manifests
  • Efficient delivery from non-segmented origin files
  • Efficient ad insertion (critical for ad-supported video)
  • Industry convergence for streaming delivery
  • Vibrant ecosystem of encoders and video/audio player builders

The DASH Industry Forum:

The ISO/IEC MPEG-DASH standard was approved by ISO/IEC in April 2012 – only two years from when work started.  After that, leading video/multi-media streaming companies got together to create this industry forum to promote and catalyze the adoption of MPEG-DASH and help transition it from a specification into a real business. The DASH Industry Forum (DASH-IF) grew out of a grassroots DASH Promoters Group and was formally incorporated in September 2012. Today it has 67 members spread throughout the world. Objectives of this forum include:

  • Publish interoperability and deployment guidelines
  • Promote and catalyze market adoption of MPEG-DASH
  • Facilitate interoperability tests
  • Collaborate with standard bodies and industry consortia in aligning ongoing DASH standards development and the use of common profiles across industry organizations

A harmonized version of DASH, with pre-selected options, is DASH-AVC/264. Will said it was a common version of DASH that everyone could use. Ongoing work for DASH-AVC/264 includes: multichannel audio, HEVC video, 4K/UHD video, live (linear) streaming, support of various video players, backend interfaces, DRM, and Ad Insertion. There are many MPEG-DASH products today as per the following chart:

A sampling of some of the DASH products available today.
Image courtesy of Akamai Technologies

A DASH MSE reference client, delivered as an open source player, is available from GitHub. Released under the BSD-3 license, it leverages the W3C Media Source Extensions and Encrypted Media Extensions, and runs in browsers that support them (e.g., Chrome v23+ and IE11+). It is free for app developers to use and extend.

In summary, Will stated why Akamai likes MPEG-DASH. The key benefits are:

  • industry convergence for streaming delivery
  • multi-language/multi-format late-binding audio
  • common encryption
  • templated manifests
  • efficient delivery from non-segmented origin files
  • adopted by both Microsoft and Adobe as their forward streaming technology
  • efficient ad insertion
  • vibrant ecosystem of encoders and player builders

Comment and Analysis:

While Akamai is best known for its Content Delivery Network (CDN) that speeds up the flow of Internet packets (especially video) using its distributed network technologies, the Cambridge, MA-based company has recently been focusing on the booming OTT video industry.

Launched last year, Akamai’s cloud based VoD video transcoding service turns single video files into versions that are suitable for playback on a specific screen/end point client device. Akamai also offers its own cloud based video streaming service for both live and on-demand videos. One would suspect they’ll use MPEG-DASH video streaming (as well as older methods) and encourage other Internet video streaming sources and sinks to do likewise.
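As a rough illustration of what per-device transcoding involves (and not a description of Akamai’s actual service), the sketch below shells out to ffmpeg to produce a small ABR rendition ladder from one source file. The rendition list and encoder settings are assumptions chosen for brevity, and ffmpeg must be installed for this to run.

```python
import subprocess

# Illustrative rendition ladder: (name, resolution, video bit rate). Values are assumptions.
LADDER = [
    ("1080p", "1920x1080", "6000k"),
    ("720p",  "1280x720",  "3000k"),
    ("480p",  "854x480",   "1200k"),
]

def transcode(source="input.mp4"):
    """Produce one H.264/AAC output per rendition using ffmpeg."""
    for name, size, bitrate in LADDER:
        subprocess.run([
            "ffmpeg", "-y", "-i", source,
            "-c:v", "libx264", "-b:v", bitrate, "-s", size,
            "-c:a", "aac", "-b:a", "128k",
            f"{name}.mp4",
        ], check=True)

if __name__ == "__main__":
    transcode()
```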

“In the old world of streaming, you had one device that content providers were targeting – it was either a PC or a Mac,” said Akamai’s EMEA product manager Stuart Cleary. “Now it’s a much more complex environment for a content provider to get their video out.”

Using a single standard for video streaming, such as MPEG-DASH, would simplify that environment, although developers would have to choose the correct options for the targeted client/end point TV screen or device. Evidently, Akamai aims to be a major player in the cloud based OTT video delivery marketplace.

Reference:

Technologies that will offer higher quality viewing experience & enable new OTT services  (includes summary of Will Law’s presentation at 2013 OTTCon – the previous name for BroadbandTV conference)

End Note:

Time and space constraints do not permit me to highlight all the excellent sessions from this two day conference. Such a complete report is possible under a consulting arrangement. Please contact the author using the form below, if interested:

Meeker: Mobile is King of Internet Access and Content

Mary Meeker of KPCB puts out an Internet Trends report every year that is chock-full of interesting data on Internet, social, mobile, and e-commerce trends. In this year’s report, presented at the Code Conference in Southern California last week, Ms. Meeker said that while growth in overall Internet usage is slowing (especially in developed countries), mobile usage has increased rapidly.
Meeker said that:
  • Mobile data consumption is up 81 percent year over year, largely driven by mobile video as many more people use tablets and smartphones to watch video. See the graph below.
  • Mobile access now accounts for 25 percent of global web usage, up from 14 percent between May 2013 and May 2014, and mobile Internet traffic is growing at roughly 1.5 times the rate of conventional broadband.
  • In North America, mobile’s share of web usage jumped from 11 percent to 19 percent over the same period, and in Europe it increased from 8 percent to 16 percent.
A graph showing the growth of mobile data consumption.
Image courtesy of KPCB

Comment: This author finds it remarkable that “We now spend more time on mobile than on print and radio combined.”

In 2013, people spent 20 percent of their media time on mobile devices, yet only 5 percent of ad spending was allocated to mobile. One would expect the latter to increase substantially in the years ahead. Meeker estimates there is $30 billion per year to be made in mobile ads, so advertisers, marketers, and media companies will try to get a good chunk of that revenue.

Meeker lists community, content, and commerce as the “Internet Trifecta.” With the ever-expanding number of consumers online, there is a natural desire to connect with others through content. Marketers who provide context to the content they create and share are able to increase connectivity within their communities of interest, which in turn builds brand loyalty.

Meeker said that there’s now clear evidence that people want to share information more privately. Mobile messaging services like WhatsApp (bought by Facebook for $19B), Tencent (QQ Instant Messenger in China) and Line (a South Korean-Japanese proprietary application for instant messaging) are growing at exponential rates — a trend that companies like Facebook and other social networking companies have noticed.

People were said to be “media junkies,” sharing articles via social media and tapping into streaming services. Apps are replacing linear TV channels as the way to consume video, with Americans aged 16 to 34 watching just 41 percent of their TV live, she said.

Google’s YouTube is also booming with consumers. “They are increasingly loving short-form video,” she said. “Consumers even love ads.” Indeed, 22 percent of video watching globally is done on mobile devices. On-demand mobile video apps, such as WatchESPN, BBC iPlayer, and HBO Go are all gaining popularity with mobile users. She says that 40 percent of Internet TV watchers are already using mobile devices (This author finds that to be incredible as most people we know do not watch Internet TV on their mobile devices except for video clips).

Meeker observed that 84 percent of mobile owners use their devices while watching TV. They use them, in order of popularity, for Web surfing, shopping, checking sports scores, looking up information about what they’re watching, and talking to friends/family or tweeting about the program. (That is something I certainly relate to, as I do it all the time.)

The country to watch is China, according to Meeker.  China has more Internet users than any other country by far – about 618 million Internet users last year. Approximately 80 percent of those only access the Internet via mobile devices. Four of the world’s 10 largest Internet companies are Chinese, up from one a year ago. [This author thinks they are Tencent, Baidu, Rakuten, and Alibaba].

In conclusion, the mobile Internet will continue to experience solid growth.  Therefore, it is imperative for Internet and e-commerce companies to develop content that resonates well with mobile audiences.


References:

1] Meeker’s Slide deck:

http://www.slideshare.net/kleinerperkins/internet-trends-2014-05-28-14-pdf

2] On-line Articles:

http://bits.blogs.nytimes.com/2014/05/28/state-of-the-internet-still-growing-but-more-mobile-than-ever/?_php=true&_type=blogs&_r=0

http://blog.hubspot.com/marketing/internet-trends-report-2014-mary-meeker