Do You Hear What I Hear?

Some 80% of those with hearing loss do not use a hearing aid, according to Philip Hilmes of Lab126/Amazon at SMPTE’s Entertainment Technology in the Internet Age Conference. Given yesterday’s announcement of the Amazon Fire phone, Hilmes brought interesting insight into the topic of immersive and personal audio and how Amazon is using feedback from multiple sensors to create a better sound experience.

The idea is to adjust audio objects (I like to think of audio objects as different audio layers – ambient noise, dialog, music, etc. – that can be adjusted both in volume and speaker location) to give each listener the best experience possible, given their circumstances. As Amazon says on the web site for its new phone:

“Fire phone uses the power of Dolby Digital Plus to create an immersive audio experience. Dolby Digital Plus auto-adjusts volume, creates virtual surround sound, and delivers easier-to-understand dialogue in movies and TV shows. Fire phone is designed to automatically optimize the audio profile based on what you’re doing, such as watching a movie or listening to music.”

The sensors on the new phone allow Amazon to understand the environment the user is experiencing (e.g. whether it is noisy, quiet, light or dark). The multiple cameras allow for the creation of a 3D profile of a person’s head and can detect where their ears are, relative to the speakers. Amazon also optimizes based on the type and brand of speakers (are they headphones, speakers from a TV or other external device, the phone’s speakers, etc.).

A practical reason for improving relative audio quality is to reduce the trouble calls that Amazon receives from customers regarding its streaming content. By dynamically adjusting dialog to a higher level relative to background effects and music, they are aiming to solve problems before they occur.
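Hilmes did not show code, but the object-gain idea is easy to sketch. The following is a minimal Python sketch, not Amazon's or Dolby's actual algorithm; the stem names, gain curve and noise thresholds are illustrative assumptions:

```python
# Illustrative sketch only -- not Amazon's or Dolby's actual algorithm.
# It assumes the content arrives as separate audio objects ("stems") and shows
# how a player might raise the dialog level relative to music and effects when
# the ambient noise estimated from the device microphone is high.
import numpy as np

def mix_objects(stems: dict[str, np.ndarray], ambient_noise_db: float) -> np.ndarray:
    """Sum audio objects with per-object gains chosen from a noise estimate."""
    # Hypothetical policy: boost dialog by up to +6 dB and duck music/effects by
    # up to 4 dB as ambient noise rises from 30 dB to 70 dB SPL.
    factor = min(max((ambient_noise_db - 30.0) / 40.0, 0.0), 1.0)
    gains_db = {"dialog": 6.0 * factor, "music": -4.0 * factor, "effects": -4.0 * factor}
    out = np.zeros_like(next(iter(stems.values())))
    for name, samples in stems.items():
        out += samples * 10 ** (gains_db.get(name, 0.0) / 20.0)
    return out

# Example: three one-second 48 kHz stems, mixed for a noisy environment.
sr = 48_000
stems = {name: np.random.randn(sr) * 0.05 for name in ("dialog", "music", "effects")}
mixed = mix_objects(stems, ambient_noise_db=65.0)
```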

Hilmes described a system that adjusts to the user’s ability as well; essentially a sophisticated equalizer that adjusts for hearing loss. He explained how this audio signature could live in the cloud, such that a person’s hearing profile could follow him from device to device. Although he didn’t mention any commercial designs for the metadata that is inherent in such a signature, it is not hard to imagine the value of hearing loss metadata to Amazon. This approach could also be valuable for preventing hearing loss (e.g. a profile that parents could set to limit the music volume for their kids).
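To make the cloud-stored profile idea concrete, here is a minimal sketch assuming the profile is a small JSON document of per-band gains plus a volume cap; the field names, band edges and gain values are hypothetical, not anything Hilmes described:

```python
# Hypothetical hearing-profile format and a playback-side equalizer sketch.
import json
import numpy as np
from scipy.signal import butter, sosfilt

PROFILE_JSON = json.dumps({
    "user": "example-listener",
    "max_output_db": -10,              # parental/volume cap, relative to full scale
    "bands_hz": [[20, 500], [500, 2000], [2000, 8000]],
    "gains_db": [0.0, 3.0, 9.0],       # high-frequency loss gets the largest boost
})

def apply_hearing_profile(audio: np.ndarray, sr: int, profile_json: str) -> np.ndarray:
    """Shape the signal with per-band gains from the profile, then cap the level."""
    profile = json.loads(profile_json)
    shaped = np.zeros_like(audio)
    for (lo, hi), gain_db in zip(profile["bands_hz"], profile["gains_db"]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        shaped += sosfilt(sos, audio) * 10 ** (gain_db / 20.0)
    ceiling = 10 ** (profile["max_output_db"] / 20.0)
    return np.clip(shaped, -ceiling, ceiling)   # crude limiter standing in for a volume cap

sr = 48_000
audio = np.random.randn(sr) * 0.05
personalized = apply_hearing_profile(audio, sr, PROFILE_JSON)
```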

A screenshot of different audio profiles in a demonstration from Dolby Labs.
Image from ETIA 2014 at Stanford University

From a content provider perspective, transmitting audio objects effectively means mixing the audio at the consumer device, instead of at the time of production. Roger Charlesworth of Charlesworth Media and Executive Director of the DTV Audio Group suggested that, long-term, an audio object approach promises to streamline production for the content producer. He cited the importance of metadata and said the biggest impediment to adoption is cultural, as it means changing the live production workflow. He suggested the transition to IP for the audio workflow will be a big reason for the eventual success of audio objects.

From a camera to consumer infrastructure perspective, Sripal Mehta of Dolby Labs described an approach that uses the existing Dolby 5.1 or 7.1 infrastructures with what he termed “bolt-on additions.” Dolby’s market research indicates that viewers don’t want to be sound mixer technicians, but want pre-selected presentations that they set and forget. As an example, he showed a video of a Red Bull sponsored hockey game that provided viewers with different perspectives and the ability for the viewers to decide upon the presentation they wanted to see/hear. These perspectives included:

  • A German announcer
  • An American announcer
  • A biased (fan) announcer – step aside Hawk Harrelson
  • One with a British comedian who didn’t know or care to know the game in almost a Mystery Science Theater way (quite funny)

To this last point, audio objects offer the potential to create both better quality content, as well as content that is compelling to audiences who would otherwise not be interested (e.g. the British comedian made the game entertaining to non-hockey fans).

The Autonomous Vehicle and What It Means

Editor’s Note:

An automobile industry executive and subject matter expert, who wishes to remain anonymous, wrote the article that follows this preface. It is in response to my June 2nd article that speculated on Google’s long-term plans for the autonomous vehicle. This article provides additional insight into the AV market with some excellent references, while having some more fun imagining the type of vehicles we may see in the future.

Graph showing evolution of the vehicle in the digital age.
Image courtesy of Michael Robinson and ED Design

This article also introduces images from ED Design’s Michael Robinson, a Hall of Fame vehicle designer and leader in “Experiential Design”. He is at the forefront of determining what autonomous vehicles (whether on wheels, rails or wings) will look like and their impact on society. He wants to ensure that, in addition to achieving a safety goal of zero accidents, the autonomous vehicle doesn’t kill the love affair people have had with their cars (check out the presentation he gave to the Passenger Experience Conference in April of this year).

More importantly, he wants the autonomous vehicle to be an extension of the future digital home; an environment that stimulates emotions and thoughts and not one that is simply a mobile couch potato transporter. As he points out, removing the steering wheel changes everything as far as vehicle design goes, and he even suggests a scenario where regulators outlaw steering wheels and driverless cars become mandatory in 2040 (coincidentally, the same year as my story takes place).

It is important for broadband providers to stay abreast of the direction of the AV market and the thinking of visionaries like Robinson and the anonymous author of the following article, as this mobile Internet of Things, known as autonomous vehicles, will have an impact on broadband networks at some level. Broadband providers will either find new opportunities in this arena or let the Googles of the world grab the opportunity.


The Autonomous Vehicle and What It Means by Anonymous Contributor from the Automobile Industry

Ever since the Google Car made its debut in May, we have been inundated with articles on the autonomous vehicle (AV), for good or for bad.

An image using swarms of insects and birds as an analogy for how autonomous vehicles will behave in the future.
Image courtesy of Michael Robinson and ED Design

The fact of the matter is that the AV is here to stay. This is most definitely confirmed by Carlos Ghosn in his address to the French Automobile Club on Tuesday, June 3. Mr. Ghosn lauded the UN’s accomplishment of successfully pushing through an amendment to Article 8 of the 1968 Convention on Road Traffic which allows for AV driving if, and only if, AV “systems can be overridden or switched off by the driver.” In his address he stated that “the problem isn’t technology, it’s legislation, and the whole question of responsibility that goes with these cars moving around … and especially who is responsible once there is no longer anyone inside.”

Knowing that the AV is not going away, governments have begun addressing the AV legal framework, such as California in the United States. More recently, UK Science Minister David Willetts has called for a change in UK road laws to accommodate the AV. Therefore, if governments are using monetary resources to develop legal frameworks, then the AV is not a passing fad, but a paradigm shift in the way we will live and view transportation for the next one hundred years.

With that said, what the AV means to our way of life is very simple. The automobile will no longer be viewed as a status symbol because most people will not own automobiles. Instead, the AV will be looked at as a service. We will reserve our AVs through reservation service providers based on the litmus test of Time, Place, and Occasion (TPO). For example, I have made a short list of AVs which could be available based on a TPO for Yokohama, Japan:

  • No Thrills (Basic AV to get you to/from Points A and B. Has reclining sofa chairs and relaxing music and images so you can sleep well during the commute. Imagine going to work in an Enya video.)
  • Shopping Mall (Large Size AV with security compartments for valuables. Great for people who enjoy shopping at different stores but who don’t want the worry of getting anything stolen.)
  • Family Trip (For families who want to go somewhere for a weekend or holiday. Has essentials for short trips, such as refrigerator, food storage, Internet, DVD, and Radio.)
  • Work Commute (For people working during their commute. Has all the desk essentials, TV Conferencing Equipment, plus coffee maker, tea pot, toaster, and breakfast, lunch, or dinner foods)
  • Business Meeting (Same as Work Commute but a larger size AV arranged in boardroom style)
  • Car Pool (Same as Work Commute but a larger size AV so people have room to work and not disturb one another. Great for people working in the same office building or business area.)
An image showing what a vehicle might look like without a steering wheel.
Image courtesy of Michael Robinson and ED Design

  • Tea Time (The tea time AV could come in three sizes: S, M, L. It would be like a restaurant booth equipped with all the tea time essentials, such as water, pot, cakes, sandwiches, scones, and a variety of tea and coffee. For those traveling in Yokohama’s China Town, it could be equipped for Chinese tea time.)

  • Game Center (Japanese love to play video games. This AV could come in three sizes: S, M, L)
  • Karaoke Kar (A Karaoke AV complete with its own Karaoke system and beverages. For those at the legal drinking age, it would come with alcohol.)

And for the #1 Japanese AV……

  • LOVE MOTEL (Yep, you got it! A Japanese-style love hotel on wheels. Equipped with a waterbed and all the love hotel essentials. Need I say more?)

Broadband TV Conference Part 2: How to Measure Streaming Video Quality

Introduction:

This second article on the 2014 Broadband TV Conference summarizes a presentation by OPTICOM’s CEO on streaming video quality measurements. We think that topic will be very important for many players in the OTT streaming video and connected TV markets. In particular, we believe it’ll be quite valuable for adaptive bit rate OTT and mobile video streaming providers, in order to measure and then attempt to improve the Quality of Experience (QoE) of their customers.

Perceptual Quality Measurement of OTT Streaming Video TV Services, Michael Keyhl, CEO of OPTICOM

How do you measure streaming video quality? Very few seem to have good metrics on video Quality of Experience (QoE) for viewers, even though it impacts many participants in the OTT, SD/HD video content delivery business. The stakeholders involved in QoE for video subscribers/consumers include: content provider, OTT provider, pay TV providers (cable, satellite, telco), network operators (especially for mobile video consumption), device makers, video codec providers and mobile apps companies including Internet videos in their apps.

Michael Keyhl, CEO of OPTICOM, addressed this important topic in a very enlightening Broadband TV Conference session. Germany-based OPTICOM develops algorithms for measuring video quality and licenses that technology to test equipment, video analytics and other OEM partner companies.

Mr. Keyhl said that existing standardized video quality measurements barely suffice when considering OTT streaming video. Fundamentally, all traditional objective testing standards are based on analyzing short video sequences of only a few seconds in length. The Mean Opinion Scores are quite low (below 5) for OTT video quality measured that way. Michael said that “snapshots of 10 second videos are inadequate to assess re-buffering and long term streaming behavior.” Hence, there’s a need for new type(s) of subjective testing methods and procedures.

In an attempt to greatly improve perceptual video testing standards for streaming video services (including ABR), OPTICOM created Perceptual Evaluation of Video Quality – Streaming (PEVQ-S). It was described as an “advanced framework algorithm for full-reference picture quality analysis in video streaming environments.” The rules (but not the implementation) have been standardized by ITU-T as J.247: Objective perceptual multimedia video quality measurement in the presence of a full reference. Related follow-on work on video quality measurements is taking place in the Video Quality Experts Group (VQEG), which produces inputs to various ITU Study Groups for recommendations they’re developing.

As opposed to the lightweight “No Reference” video quality testing type, Full Reference testing is more processing intensive, but offers the highest accuracy and is standardized by the ITU. It’s based on differential analysis – comparing the degraded video signal against the original reference/studio source video, to which the measurement system has access. OPTICOM‘s PEVQ/ITU-T J.247 is the standard for Full Reference Video Quality Measurement, as noted above.
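As a rough illustration of what full-reference, differential analysis means in practice, the sketch below scores a degraded segment against its time-aligned reference. PSNR is used only as a simple stand-in; PEVQ/J.247 applies far more sophisticated perceptual models:

```python
# Toy full-reference comparison: degraded frames vs. the reference frames.
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two frames of the same size."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def segment_score(ref_frames: list[np.ndarray], deg_frames: list[np.ndarray]) -> float:
    """Average full-reference score over an already time-aligned segment."""
    return sum(psnr(r, d) for r, d in zip(ref_frames, deg_frames)) / len(ref_frames)

# Example: a 2-second, 30 fps segment of small grayscale frames with mild noise added.
rng = np.random.default_rng(0)
ref = [rng.integers(0, 256, (72, 128), dtype=np.uint8) for _ in range(60)]
deg = [np.clip(f + rng.normal(0, 5, f.shape), 0, 255).astype(np.uint8) for f in ref]
print(f"segment PSNR ~ {segment_score(ref, deg):.1f} dB")
```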

The different types of Adaptive Bit Rate (ABR) video streaming methods are illustrated in the chart below. As you can see, there are many combinations and permutations for video quality measurements.

A diagram showing different streaming methods.
Image courtesy of OPTICOM.

Note: In Adaptive Bit Rate (ABR) video streaming, the transmitted bit rate, resolution and other aspects of each “media segment” vary according to the bandwidth and resources available at the client (receiving device). Video quality significantly depends on the client behavior, such as negotiating bit rate with the video server depending on dynamically allocated bandwidth, streaming protocol and re-buffering. The Media Presentation Description (MPD) is used to convey that information from the server to the client.
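A toy version of the client-side ABR decision described in the note above is sketched below; the bit rate ladder, safety factors and buffer threshold are invented for illustration and are not taken from any particular player:

```python
# Minimal ABR rendition-selection sketch (illustrative values only).
RENDITIONS_KBPS = [400, 800, 1500, 3000, 6000]   # ladder advertised in the manifest

def choose_rendition(measured_throughput_kbps: float, buffer_seconds: float) -> int:
    """Pick the highest bit rate the recent throughput can sustain,
    stepping down aggressively when the playback buffer runs low."""
    safety = 0.8 if buffer_seconds > 10 else 0.5   # be conservative near re-buffering
    budget = measured_throughput_kbps * safety
    candidates = [r for r in RENDITIONS_KBPS if r <= budget]
    return candidates[-1] if candidates else RENDITIONS_KBPS[0]

print(choose_rendition(measured_throughput_kbps=2600, buffer_seconds=18))  # -> 1500
print(choose_rendition(measured_throughput_kbps=2600, buffer_seconds=4))   # -> 800
```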


OPTICOM’s PEVQ was said to be validated for many different types of video, not just OTT ABR, using subjective testing. The validated video formats were based on ITU-R Recommendation BT.500 – originally named “CRT TV Quality Testing (SD)” – and ITU-T Recommendation P.910 – “Multimedia (QCIF, CIF, VGA) and IPTV (HD 720/1080) Testing.”

Based on a fundamental requirement analysis to understand adaptive streaming artifacts, the design of a novel test method was described. A four layer OTT quality model was presented with these four layers (top to bottom): Presentation, Transmission, Media Stream, Content.

Michael said an OTT Video Quality Measurement technique needs to have the following characteristics/attributes (a toy sketch of combining such per-layer scores follows the list):

  • be related to content quality as a reference;
  • accurately score encoding and transcoding artifacts = Media Stream Quality;
  • measure and compare the picture quality for different frame sizes and frame rates = Media Stream/Transmission Quality;
  • continuously track the different bit rates and evaluate how smoothly the video player is able to interact with the video server in a congested network = Transmission Quality;
  • take into account the player and endpoint device characteristics as well as the viewing environment = Presentation Quality.
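As a purely hypothetical illustration of how per-layer scores might roll up into a single QoE number (the presentation did not describe any particular weighting, so the weights below are placeholders):

```python
# Hypothetical aggregation of the four OTT quality layers into one score.
LAYER_WEIGHTS = {"content": 0.1, "media_stream": 0.4, "transmission": 0.3, "presentation": 0.2}

def overall_qoe(layer_scores: dict[str, float]) -> float:
    """Combine per-layer scores (each on a 1..5 MOS-like scale) into one number."""
    return sum(LAYER_WEIGHTS[name] * score for name, score in layer_scores.items())

print(overall_qoe({"content": 4.5, "media_stream": 3.8, "transmission": 3.2, "presentation": 4.0}))
# -> 3.73 on the same 1..5 scale
```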

The architecture of OPTICOM’s novel approach to measuring streaming video quality was said to be able to “overcome the limitations of standardized perceptual video metrics with regard to adaptive streaming of longer video sequences, while maintaining maximum backward compatibility (and thus accuracy) with ITU-T J.341/J.247 for short term analysis.”

An end-to-end functional block diagram of streaming video source/destination measurement using PEVQ-S is shown in the illustration below. The four OTT quality layers are shown at the bottom of the figure.

Block diagram showing possible quality of service impairments from source to sink.
Image Courtesy of OPTICOM

Conclusions: 

  1. There’s a clear need for a streaming video quality measurement (VQM) technique which accurately evaluates video subscriber/consumer QoE
  2. QoE concepts must be completely reformulated and we must reinterpret video quality in the context of multi-screen use scenarios.
  3. Currently, there is no standard for subjective and/or objective VQM of ABR video streaming
  4. PEVQ-S is proposed to resolve that problem, based on advancing existing standards, while maintaining maximum backward compatibility and validated accuracy.
  5. PEVQ-S is well suited to evaluate all 4 OTT Quality Layers (from bottom to top): Content, Media Stream, Transmission, and Presentation.
  6. PEVQ-S allows for analysis of common ABR protocols and formats and various video codecs at various bit rates. It can analyze video at different frame sizes and frames per second.
  7. PEVQ-S is licensed by OPTICOM to leading OTT, Middleware, and Test & Measurement vendors. It will soon be built into many such products. OPTICOM says it has over 100 licensed OEM customers.

OPTICOM’s Demo:

OPTICOM had a demo at the conference where they measured ABR video quality under various simulated reception conditions. We certainly could detect a difference in quality during different time periods of the stream. The quality of each of the video segments was measured and recorded.

We think that such measurements would be especially useful for mobile OTT video streaming, where RF reception varies depending on the wireless subscriber’s location and physical environment.

End Note:

Time and space constraints do not permit me to highlight all the excellent sessions from this two-day conference. Such a complete report is possible under a consulting arrangement. Please contact the author, if interested.

Broadband TV Conference Overview & Summary of MPEG-DASH Video Streaming Standard

Introduction:

The fifth annual Broadband TV Conference, held June 3-4, 2014, in Santa Clara, CA, dealt with many key issues across a variety of subjects in commercial-free panel sessions and individual presentations. The multi-track conference covered topics such as:

  • Is Television As We Know it Sustainable?
  • The Future of Second Screen, Augmented TV and TV Apps
  • OTT Devices – Is the Dominance of the TV Fading?
  • Where is TV Everywhere? Analyzing the Business, The Rollouts, The Hype…and the Reality
  • Which Technologies Will Change Television and Connected Viewing?
  • The State of Over-the-Top Deployments – What Can We Learn From “WrestleMania”?
  • A new Video Streaming Standard and new methods to measure video quality
  • Why point-to-point/star topology WiFi (even with IEEE 802.11ac chips) is not suitable for multi-screen viewing in the home/premises

Broadband TV and multi-platform services are now rapidly redefining the television landscape, and the industry finds itself on the precipice of a massive shift in value. In particular, over-the-top (OTT) Internet video on demand (VoD) is being complemented by linear/real-time OTT video as well as downloaded/stored videos for later playback.

Some of the mega-trends that are driving the shift are the following:

  • Content owners have more choice in distribution (satellite, cable, telco TV, broadband Internet via subscription or ad-supported).
  • Advertisers are targeting consumers in ways never before possible (especially on mobile devices).
  • On-demand and binge viewing is rapidly growing in popularity (particularly on smart phones and tablets).
  • Original digital content is enabling broadband TV service providers to grow their user base and create ‘stickier’ services.
  • The broad reach of social media technologies is giving content owners new ways to interact with audiences, and consumers in turn are now able to directly influence the success or failure of programming.
  • Streaming video is not only for OTT content on second screens, but also for connected TVs and 4K TVs (which will likely first be used ONLY to view OTT content on demand).
  • OTT video streaming quality has markedly improved due to a combination of factors, which include: better video compression (HEVC and the older H.264 MPEG4 AVC), adaptive bit rate streaming (based on HTTP), CDNs (like Akamai’s) and local caching of video content, higher broadband access speeds (both wireless & wire-line).

The highlights of selected sessions are summarized in this multi-part article. Each article will deal with one session. We emphasize technology topics rather than marketing and content distribution issues.


DASH- A New Standard for OTT Video Streaming Delivery, by Will Law of Akamai

The vital importance of this new video streaming standard was emphasized by Will Law of Akamai Technologies during his opening remarks: “DASH intends to be to the Internet world … what MPEG2-TS and NTSC have been to the broadcast world.”

[Note: DASH stands for Dynamic Adaptive Streaming over HTTP]

Video/multi-media streaming over the Internet (from web-based video server to streaming client receiving device) was said to be a “feudal landscape.” There is a proliferation of standards and specs, like Adobe Flash (with or without HDS), Apple HLS, HTML5 live streaming, Microsoft’s Smooth Streaming/Silverlight, MLB.TV’s proprietary streaming methods, etc.

That may now change with DASH, according to Will. It has the potential to harmonize the industry if the major video streaming players converge and adopt it. DASH can support a wide range of end points that receive streaming video in different formats, from 4K TVs to game players, tablets, smart phones, and other mobile devices.

MPEG-DASH is an international standard (ISO/IEC 23009) for the adaptive delivery of segmented content, also known as “Dynamic Adaptive Streaming over HTTP.” Apple was one of many collaborators who worked together under the Moving Picture Experts Group (MPEG) to generate the DASH standard.

There are four parts in the DASH standard, ISO/IEC 23009:

  • Part 1: Media Presentation Description (MPD) and Segment Formats – Corrigendum completed; 1st Amendment is in progress. The MPD is expressed as an XML file.
  • Part 2: Conformance and Reference Software (Finished 2nd study of DIS)
  • Part 3: Implementation Guidelines (Finished study of PDTR)
  • Part 4: Format Independent Segment Encryption and Authentication (FDIS)

The objectives of ISO/IEC 23009 DASH were the following:

  • Do only the necessary, avoid the unnecessary
  • Re-use what exists in terms of codecs, formats, content protection, protocols and signaling
  • Be backward-compatible (as much as possible) to enable deployments aligned with existing proprietary technologies
  • Be forward-looking to provide ability to include new codecs, media types, content protection, deployment models (ad insertion, trick modes, etc.) and other relevant (or essential) metadata
  • Enable efficient deployments for different use cases (live, VoD, time-shifted, etc.)
  • Focus on formats describing functional properties for adaptive streaming, not on protocols or end-to-end systems or implementations
  • Enable application standards and proprietary systems to create end-to-end systems based on DASH formats
  • Support deployments by conformance and reference software, implementation guidelines, etc.

The scope of the MPEG DASH specification is shown in the illustration below:

An image showing where DASH fits in the streaming ecosystem.
Image courtesy of Akamai Technologies

There are six profiles defined in ISO/IEC 23009. A profile serves as a set of restrictions on the Media Presentation Description and segment formats, which provide the information needed for adaptive streaming of the content by the client downloading media segments from an HTTP server. Different addressing schemes supported include: segment timeline, segment template, and segment base. For more information, see Media presentation description and segment formats for DASH.
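To illustrate the segment template addressing scheme, the sketch below expands a template into the individual segment URLs a client would request. The $RepresentationID$ and $Number$ identifiers follow the DASH convention, but the template string, base URL and representation name are made up, and this is not a full MPD parser:

```python
# Expanding a DASH-style segment template into concrete segment URLs.
TEMPLATE = "video/$RepresentationID$/seg-$Number$.m4s"   # hypothetical template

def segment_urls(base_url: str, representation_id: str, start: int, count: int) -> list[str]:
    """Build the URLs for `count` consecutive media segments."""
    urls = []
    for number in range(start, start + count):
        path = (TEMPLATE
                .replace("$RepresentationID$", representation_id)
                .replace("$Number$", str(number)))
        urls.append(f"{base_url.rstrip('/')}/{path}")
    return urls

# First five segments of a hypothetical 1500 kbps representation.
for url in segment_urls("https://example.com/dash", "video_1500k", start=1, count=5):
    print(url)
```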

The important market benefits of MPEG DASH were said to be:

  • Independent ISO standard – not owned by any one company
  • Multi-language/multi-format late-binding audio
  • Common encryption
  • Templated manifests
  • Efficient delivery from non-segmented origin files
  • Efficient ad insertion (critical for ad-supported videos)
  • Industry convergence for streaming delivery
  • Vibrant ecosystem of encoders and video/audio player builders

The DASH Industry Forum:

The ISO/IEC MPEG-DASH standard was approved by ISO/IEC in April 2012 – only two years from when work started.  After that, leading video/multi-media streaming companies got together to create this industry forum to promote and catalyze the adoption of MPEG-DASH and help transition it from a specification into a real business. The DASH Industry Forum (DASH-IF) grew out of a grassroots DASH Promoters Group and was formally incorporated in September 2012. Today it has 67 members spread throughout the world. Objectives of this forum include:

  • Publish interoperability and deployment guidelines
  • Promote and catalyze market adoption of MPEG-DASH
  • Facilitate interoperability tests
  • Collaborate with standard bodies and industry consortia in aligning ongoing DASH standards development and the use of common profiles across industry organizations

A harmonized version of DASH, with pre-selected options, is DASH-AVC/264. Will said it was a common version of DASH that everyone could use. Ongoing work for DASH-AVC/264 includes: multichannel audio, HEVC video, 4K/UHD video, live (linear) streaming, support of various video players, backend interfaces, DRM, and Ad Insertion. There are many MPEG-DASH products today as per the following chart:

A sampling of some of the DASH products available today.
Image courtesy of Akamai Technologies

A DASH MSE reference client, delivered as an open source player, is available from GitHub. Released under the BSD-3 license, it leverages the Media Source Extensions and Encrypted Media Extensions of the W3C. It is enabled in Chrome v23+ and IE11+, and it is free for app developers to use and extend.

In summary, Will stated why Akamai likes MPEG-DASH. The key benefits are:

  • industry convergence for streaming delivery
  • multi-language/multi-format late-binding audio
  • common encryption
  • templated manifests
  • efficient delivery from non-segmented origin files
  • adopted by both Microsoft and Adobe as their forward streaming technology
  • efficient ad insertion
  • vibrant ecosystem of encoders and player builders

Comment and Analysis:

While Akamai is best known for its Content Delivery Network (CDN) that speeds up the flow of Internet packets (especially video) using its distributed network technologies, the Cambridge, MA-based company has recently been focusing on the booming OTT video industry.

Launched last year, Akamai’s cloud based VoD video transcoding service turns single video files into versions that are suitable for playback on a specific screen/end point client device. Akamai also offers its own cloud based video streaming service for both live and on-demand videos. One would suspect they’ll use MPEG-DASH video streaming (as well as older methods) and encourage other Internet video streaming sources and sinks to do likewise.

“In the old world of streaming, you had one device that content providers were targeting – it was either a PC or a Mac,” said Akamai’s EMEA product manager Stuart Cleary. “Now it’s a much more complex environment for a content provider to get their video out.”

Using a single standard for video streaming, such as MPEG-DASH, would simplify that environment, although developers would have to choose the correct options for the targeted client/end point TV screen or device. Evidently, Akamai aims to be a major player in the cloud based OTT video delivery market place.

Reference:

Technologies that will offer a higher quality viewing experience & enable new OTT services (includes a summary of Will Law’s presentation at 2013 OTTCon – the previous name for the Broadband TV Conference)

End Note:

Time and space constraints do not permit me to highlight all the excellent sessions from this two-day conference. Such a complete report is possible under a consulting arrangement. Please contact the author, if interested.

Google’s Potential End Game – Transport and Organize the World’s People, Not Just Information

The Year 2040 – Somewhere in Silicon Valley

It’s 8:07 am and my next door neighbor, cheapskate Charlie, has been waiting outside his door for a few minutes for his ride, which is guaranteed to be at his house within a 10 minute window. He looks at his garage and is reminded that he will soon be renting it as storage space to his neighbor, Rich.

As the electric Gee-Auto arrives, Charlie notes that another neighbor, tightwad Tom, is joining him today and on their journey they will pick up parsimonious Paula. Despite Charlie sharing a vehicle with two to three people each day, the efficiency of a packet network of autonomous vehicles has reduced his average commute time from 30 minutes to 23 minutes, eliminated the need for auto insurance and given him the opportunity to play his virtual piano on his morning commute, instead of focusing on the car in front of him.

Parsimonious Paula likes the Gee-Mobile service as she no longer has to rely on the discontinued and obsolete county transit. Her monthly subscription to the Gee-Mobile service is comparable to what she used to pay for a monthly transit pass and she doesn’t have to walk half-a-mile in the rain to catch a bus.  It would make bringing groceries home easier, but Gee-Autos have been delivering goods directly to homes for decades.

It’s 8:15 am and across the street, just like every workday, a Gee-Auto meets my spendthrift neighbor, Rich, at his doorstep exactly as he opens his front door. He hops in the Gee-Auto and waiting for him is a morning latte, a freshly toasted bagel, along with morning news, entertainment and education tuned especially for his viewing, listening and olfactory pleasure.

The garage is no longer needed for car storage when one has a vehicle-on-demand service.

Rich has a tinge of disappointment that his 15 minute commute (which used to be 30 minutes before the arrival of self-driving, always-connected vehicles) couldn’t be just a little longer, as he really enjoys this daily ritual of breakfast and relaxation in a moving pod. That disappointment is soon forgotten, as he realizes today is the day when a contractor and his team of droids will begin the conversion of his garage into a tricked-out, man-cave.

Along the way, the Gee-Auto’s speed is constantly and automatically adjusted to traffic conditions. The queuing algorithms are working especially well these days; intersections that were formerly regulated by stoplights are now sophisticated roundabouts, so it will be a non-stop trip for Rich. There is one stop for the Gee-Auto transporting Charlie and that is to drop off Paula at her banana stand.

Like most days, Rich and Charlie arrive within a few minutes of each other at the Acme Anvil Company (Charlie is the CFO and Rich is in marketing). They wave adieu to Tom, who works about a half-mile away, and go about their day. In the meantime, the Gee-Auto that had transported Rich to work slips into the median, where an embedded wireless charging pod rapidly recharges the hybrid super capacitor-graphene battery system, before receiving its next assignment to pick up groceries for delivery to another Gee-Mobile subscriber.

[Note: The above scenario of an automated people mover seems ridiculous, but it wasn’t too long ago that the idea of talking to one’s phone to get directions would have been absolute lunacy. The idea of an on-demand transit system providing door-to-door transport goes back to at least the mid-1970s, as the first major expansion for Silicon Valley’s public transit system was such a service, Dial-a-Ride (Dial-a-Ride used the old school telephone to beckon a mini-bus directly to one’s residence). Dial-a-Ride didn’t scale, however, as the staffing and equipment costs were greater than the traditional public transit approach of aggregating people at transit stops.]

Technology to Make the Science Fiction, Fact

Although fictional, the above story isn’t science fiction, as the technology now exists to make the above scenario real. Many companies could potentially implement such a people transport system, including car manufacturers, auto-rental and logistic companies, but it is likely to be outsiders (Amazon, Walmart, Google, etc.) that disrupt this multi-trillion dollar industry.

The focus of this article is Google and how the elements it already has in place could be stitched together to create an end-to-end, subscription (as well as Pay Per Ride) people transport service that generates tens of billions of new revenue, while building upon its existing businesses.

One of the oft-cited barriers to the autonomous car is the question of who is liable in the case of an accident (e.g., the manufacturer, the driver, etc.). A subscription model doesn’t remove the liability factor, but by taking a holistic view of the driving experience and owning the “last mile” transport method, Google could greatly reduce its exposure.

Like its cloud services, Google would have complete control over the design (ensure no single points of failure), the maintenance (no mechanical error by ensuring equipment is always up-to-date) and the software (e.g. secure it from hacking).

Further, removing the constraint of having to accommodate a driver would allow for a rethinking of a vehicle’s design (see the above video). There is no need for a steering wheel, which could change the form factor, while improving the safety of the passenger who occupies the driver seat.

The need for windows goes away and could be replaced with electronic screens, such that one could choose the environment that he wants to see (think advertising space for Google). Without windows, presumably the vehicle’s body could be made stronger (e.g. more cross-members where the windows would have been). Additionally, the seats could be placed backwards as there is no longer a need to face forward.

Last week’s announcement that they have designed their own prototype car is consistent with other initiatives, like Google Fiber, where they want to control the entire experience. A custom design also reduces vehicle cost by eliminating overhead that an individual consumer normally pays when she buys a car from a dealer (which passes on the sales, marketing, engineering and other overhead costs of the manufacturer, along with the dealer costs, etc.).

Google, along with other entities, has a number of initiatives that set the stage for a subscription-based, autonomous transport system, including:

  • Google has proven it can create an autonomous vehicle that can drive hundreds of thousands of miles without an accident.
  • Google’s Waze application, coupled with Google Maps, already provides a real-time view of traffic, allowing drivers to select the best route. Having a vehicle automatically make the decisions as to the best route is the next step (and safer). The more vehicles that are directed in this manner, the better, in terms of route optimization (i.e. traffic reduction); the Gee-Autos and their control become more and more like the Internet, as the underlying signaling improves the throughput of the overall transportation network.
  • Google, as well as Amazon and others, are investing heavily in on-demand delivery of goods. This effort is a great testing ground to understand the best routing of vehicles. As Google is wont to do, they are also building the associated apps and signaling technology via the broadband network to ensure orders are relayed through the delivery chain. If Google can prove this model with a driver, then eliminating the driver via an autonomous car makes the model work that much better.
  • The idea of a subscription service for a car rental isn’t new, as evidenced by the rise of ZipCar in urban areas. The autonomous car would allow this concept to spread into suburban areas, as the cars would automatically appear at the subscriber’s house [Added 12/23/14 – the idea of an on-demand, shared, last-mile car service may become reality in 2015, as Singapore is looking to open up one of its neighborhoods to such an experiment].
  • Relay Rides, Uber and Lyft provide models for the electronic dispatch of vehicles – albeit with human drivers – using a smartphone or tablet. It isn’t a stretch to envision the elimination of the driver. It is important to note that Google Ventures is already an investor in Relay Rides and Uber [Note: since this article was published, Uber has suggested that the elimination of the driver could be part of its long-term plans – one industry executive even predicted that Uber might purchase an auto manufacturer, so that it could control the experience and have cars that last a million miles].
  • The concept of a relatively low-cost ($24k), electronically controlled electric pod car is close to reality with the soon to be released vehicle from LIT Motors; a small San Francisco start-up that promises to disrupt the auto industry with its Silicon Valley business model.
  • Building a car with screens, instead of windows, provides Google with an opportunity for more “ad-space”. This is ad-space that is not only location-aware, but location-directed (e.g. sensing the rider might be hungry for a certain food item, it would be easy to automatically reroute to one’s favorite restaurant and provide incentives for stopping at said restaurant).
  • A Google Fiber/Wireless backbone, although not necessary, could be tuned to off-load signaling information emanating from the vehicles’ peer-to-peer communications systems. These two networks (P2P vehicle and the fiber backbone) could become an integrated central nervous system for the network of vehicles. [Added 6/6/2014] Google’s request for a Statutory Temporary Authority from the FCC for the nationwide testing of millimeter wave frequencies (77 GHz) looks to be part of an effort to detect objects around a car. [Added 8/25/2014] Further, the conversation on Vehicle to Vehicle communications continues with the NHTSA’s release of its Advance Notice of Proposed Rulemaking.

Tens of Billions of New Revenue – It Moves the Needle

A picture of a drive-by-wire, electric vehicle from LIT Motors at CES 2014.

Why would Google ever want to jump into such a seemingly tangential business model of being a Subscription Vehicle on Demand service provider? Simply, a project of this sort could move their revenue needle, produce great margins and augment their advertising business. As importantly, the notion of organizing the world’s atoms is akin to its initial mission of organizing the world’s information.

For simple modeling purposes, let’s assume the IRS reimbursement rate of 55.5 cents per mile (gas, maintenance, amortized car payments, etc.) and that the average person drives 10k miles per year (AAA estimates 59.5 to 97.5 cents per mile for 10k miles/year for a small to large sedan, respectively). That would mean $5,550 a year in transport costs per car or approximately $460 per month.
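For readers who want to check the arithmetic, the calculation is simply:

```python
# Reproducing the back-of-the-envelope numbers in the paragraph above.
IRS_RATE_PER_MILE = 0.555     # dollars per mile (the rate assumed by the author)
MILES_PER_YEAR = 10_000

annual_cost = IRS_RATE_PER_MILE * MILES_PER_YEAR
print(f"${annual_cost:,.0f} per year, about ${annual_cost / 12:,.2f} per month")
# -> $5,550 per year, about $462.50 per month (rounded to ~$460 in the text)
```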

It isn’t too difficult to imagine a 3-tier subscription offering, similar to what Google is doing with its broadband offering, to meet the needs of various customer profiles:

  • The Parsimonious Paula Offer: $125/month – Gee-Auto guaranteed within 10 minutes – have to share with others, advertisements, plus goods delivery within 8 hours – 500 miles per month (overages apply).
  • The Mainstream Mary Offer: $300/month – Gee-Auto guaranteed within 5 minutes, sometimes have to share depending upon demand, limited advertisements, plus goods delivery within 4 hours – 1,000 miles/month limit (overages apply).
  • The Regal Rich Offer: $1,000/month – Gee-Auto is ready when the person opens their door, no sharing with others and no advertisements, plus goods delivery within 1 hour – unlimited distance per month.

Further, assuming take rates of 10% for the Regal Rich offer, 40% for the Parsimonious Paula offer and 50% for the Mainstream Mary offer, the weighted average would be $300 per month per subscriber (roughly a third less than the assumed conservative average of $460/month in transportation costs).

Assuming a 5% market share of today’s 18+ population, this would mean approximately 9 million subscribers or about $2.7B monthly or > $32B annual business, not counting any uplift to existing businesses (e.g. advertising, broadband, etc.), on-demand business (taxi-replacement business) or fleet/logistic replacement.

Because of the sharing nature of the business, Google’s costs would be lower than the IRS reimbursement rate of 55.5 cents/mile. Even the most expensive option in the above scenario would be shared (e.g. once a Gee-Auto pod drops off one person, it could pick up another nearby person). Assuming a sharing ratio of 1/3 (one Gee-Auto for every 3 people [8/19/2014 update – when I wrote this, the 1/3 ratio was a gut-feel guess; as it turns out, some MIT scientists using mathematical algorithms and real data from Singapore determined that a 1/3 ratio is about right, as summarized here about their white paper]), the costs, based on the IRS figures, would be $153/subscriber/month (1/3 of the single driver’s cost of $460), or almost 50% gross margin ($153 in costs versus $300 in revenue/subscriber); not a bad business and, with $30B+ in revenue, a business that is approximately 50% the size of Google’s current business.
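The whole back-of-the-envelope model can be reproduced in a few lines; all inputs are the author’s assumptions from the preceding paragraphs:

```python
# Subscription model math, using the author's assumed tiers, take rates,
# sharing ratio, subscriber count and single-driver cost.
TIERS = {"Parsimonious Paula": (125, 0.40), "Mainstream Mary": (300, 0.50), "Regal Rich": (1000, 0.10)}
SINGLE_DRIVER_COST = 460        # $/month at the IRS rate for 10k miles/year
SHARING_RATIO = 1 / 3           # one Gee-Auto serves roughly three subscribers
SUBSCRIBERS = 9_000_000         # the article's ~5% market share estimate

arpu = sum(price * take for price, take in TIERS.values())   # weighted average revenue per user
cost_per_sub = SINGLE_DRIVER_COST * SHARING_RATIO
annual_revenue = arpu * SUBSCRIBERS * 12
gross_margin = (arpu - cost_per_sub) / arpu

print(f"ARPU ${arpu:.0f}/month, cost ~${cost_per_sub:.0f}/subscriber/month")
print(f"annual revenue ~${annual_revenue / 1e9:.1f}B, gross margin ~{gross_margin:.0%}")
# -> ARPU $300/month, cost ~$153/subscriber/month, ~$32.4B/year, ~49% gross margin
```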

Granted, there would be significant capital costs to such an endeavor, but, because electronics and software are the significant cost components associated with the above scenario, cost reductions would more closely follow Moore’s Law than the traditional cost for building automobiles. There are also costs associated with upgrading roads, etc. that would need to be factored in as part of a capital build. Still, by building this on a city-by-city basis over time, much like Google Fiber, the capital costs would drop with each deployment. Even at $20k per vehicle, the capital costs to create 3 million vehicles would be $60B; not insignificant, but within the realm of possibility given current costs for low-end electric cars.

There are several upsides, including the aforementioned uplift to existing businesses, as well as opportunities to reduce expenses relative to traditional transportation systems and to find new revenues:

  • Lower Insurance Costs: Google would probably self-insure, given the sheer volume of business, as well as the confidence they would have in their technology and the indemnification clauses their attorneys would include in their subscription agreements. Self insuring would remove the costs of the insurance company middleman. Additionally, given the potential improvements in safety from autonomous vehicles (Google suggests that human error causes 90% of the 1.2 million vehicle deaths each year), the effective cost of insurance would be lower than the costs for insuring human-driven autos.
  • Lower Operational Costs: Being all-electric, the operational costs from maintenance and fuel would be less than traditional hydrocarbon vehicles. Additionally, it wouldn’t be a stretch for Google to create a network of its own power stations (which, as alluded to in the above story, could be in medians and other non-usable areas).
  • Local Subsidies: At $125 per month, the Parsimonious Paula tier is more than 10% cheaper than the existing Silicon Valley public transportation option (a monthly pass on Silicon Valley’s VTA is $140). Given that public transit authorities operate bus systems at a loss, it might be cheaper for a transit authority to pay Google on a variable cost basis and retire the bus systems (particularly in suburban areas). Google probably would run the transit system without subsidies, as the political benefits of saving the local taxpayers money would outweigh the marginal revenue.

Policy Implications at the Local Level – From First Mover Advantage to Must-Have

A picture of a Google truck at a customer install. Note the lawn sign promoting the Google Fiber project.
Image courtesy of Google

One of the brilliant insights from the Google Fiber management team is its understanding of the importance of speed; not just speed in broadband access, but speed to market. The longer it takes to deploy Google Fiber, the higher the costs of make-ready and the more opportunity competitors have to thwart its efforts. As such, one of the most important factors in determining where they deploy Google Fiber is the willingness of local cities and agencies to work with them to smooth out the barriers to deployment (e.g. obtaining permits, rights-of-way, etc.).

The Google Fiber project has forged the sort of local relations that would be necessary to implement such a revolutionary approach to transportation. A project of this scale would require working with local government to support infrastructure improvements, such as distributed power charging stations (or some equivalent, such as solar roadways), improvements in traffic light signaling (making it more dynamic, based on real-time traffic demands or [link added 8/17/14] eliminating it as seen in this video) and other road improvements (e.g. roundabouts).

It’s not too difficult to imagine Google pursuing a nationwide competition like it did when it introduced the Google Fiber concept. If Google were to target a community with a population of 100,000 adults, then assuming a 5% subscription rate and a 1/3 ratio of vehicles per subscriber, it would be looking at roughly 1,700 vehicles. Assuming a near-term cost of $100k per vehicle, this would be a $170M investment; an amount that is pricey, but one that would provide a good field test and help refine the commercial project, just as Kansas City did for Google Fiber (this is in the realm of possibility, as Google recently announced that it will be building 100 prototype vehicles for testing purposes).
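The pilot-city sizing works out as follows, again using the author’s assumed figures:

```python
# Pilot community sizing from the paragraph above.
ADULT_POPULATION = 100_000
TAKE_RATE = 0.05
SHARING_RATIO = 1 / 3              # one vehicle per ~3 subscribers
NEAR_TERM_VEHICLE_COST = 100_000   # dollars, early prototype pricing assumed by the author

subscribers = ADULT_POPULATION * TAKE_RATE
vehicles = round(subscribers * SHARING_RATIO)          # ~1,667, quoted as ~1,700 in the text
investment = vehicles * NEAR_TERM_VEHICLE_COST
print(f"{vehicles:,} vehicles, roughly ${investment / 1e6:.0f}M up front")
```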

Like the Google Fiber project, which received over 1,000 applications from communities of all sizes, a Gee-Auto contest would grab the attention of forward-looking cities and Google would probably have its choice of cities to pilot such a project. By staying on a city or regional basis, Google might be able to avoid the regulatory reach of some federal and state agencies. As they cross beyond county or state lines, however, the regulatory environment would become more complicated.

Assuming the above tack, where Google starts local, policy makers would have many things to ponder regarding such an application, including:

  • How to create an open network, such that vehicles from multiple operators can traverse the same roadway and still communicate in such a way that all operate in a seamless fashion, regardless of the underlying transport technology?
  • Who controls the signaling system and should that entity be a private operator (e.g. Google), a quasi-private entity or a municipal entity?
  • Should the entity that controls the signaling system be able to prioritize traffic based on subscription tier (e.g. public safety vehicles would still get first priority)?
  • How to charge operators for the use of the roadways (e.g. pay per mile) and pay for ongoing infrastructure upgrades as well as upgrades that such a transportation system would entail?
  • What are the privacy implications of knowing a person’s movement at such a micro-level?

A shared vehicle society has long-term implications for local planning officials: it could change how they plan for parking and design roads, and it raises the economic implications of the hollowing out of the traditional automobile trade.

  1. The Gee-Mobile service could potentially reduce the number of parking spots at a given building. Garages in single family residences might no longer be necessary in the autonomous auto world. At night or other times of slack demand, the Gee-Auto would park itself in unoccupied locations, which wouldn’t have to be near a residence. Further, because a Gee-Auto is dynamically assigned, parking lots could be structured to eliminate the space between cars (Last-In, First-Out). Additionally, parking lots and charging stations could be located in what are currently unusable spaces (e.g. in a median).
  2. The roads could be optimized for the autonomous vehicle. For instance, because it would be possible to create a narrower vehicle (LIT Motors, as an example), as well as pack the vehicles closer together, it might be possible to effectively create, say, 3 lanes where there are two. These high density lanes could dispense with painted lines, as electronics would keep the autonomous vehicles in place. These virtual lanes would only be for the higher speed, autonomous traffic and not traditional motorists [Added 8/25/2014 – similarly, the number of lanes for a given direction could be dynamically assigned, depending upon time of day – e.g. a 4 lane road might use 3 lanes for one direction in the morning and change the direction of those lanes when the traffic pattern changes in the afternoon].
  3. The local economic impact of the reduction of traditional automobiles will be huge. Of course, gasoline taxes to pay for infrastructure go away (an issue with electric cars that needs to be addressed, regardless). The bigger impact might be on the restructuring of local economies. From the local auto shop to the gas station to the car dealer to the insurance agent, the traditional automobile has a huge economic impact on a community, and the lost revenue would have to be made up with new opportunities from existing and new employers.

One aspect that a local economic agency could tout when trying to attract those new jobs is the superior quality of life (e.g. not having to fight traffic, lower cost of transport, freedom for senior citizens and those with physical disabilities to leave their domicile without depending upon others, etc.). As with gigabit broadband, there will be a first-mover advantage for those communities that successfully implement an autonomous vehicle network. Eventually, however, being a “Smart Transport Community” will become a must-have.

The Big Question

Although all the technological elements of the so-called Gee-Mobile service exist today and the pricing is even within striking range, the bigger barriers will probably be business model and regulatory. It looks like there is a path to a business model (particularly as autonomous vehicle costs fall). Google has proven that it can work with local governments with its Google Fiber initiative, which would be helpful in getting past regulatory concerns. The biggest question in the above story is what jobs cheapskate Charlie, parsimonious Paula, tightwad Tom and regal Rich will be driven to in 2040.

Meanwhile – Back in the Year 2040

It’s 8:30 a.m. and, at the same time Rich and Charlie arrive at their office, I am sitting down to work from home in a virtual environment via my 10 terabit connection. Just as I am about to start, I am pleasantly surprised by the appearance of a Gee-Air, the flying drone that whisked my 113-year-old mother from her engagement residence (the term retirement home was retired from the vernacular decades before), located some 60 miles away. She had decided to surprise me with freshly made cinnamon rolls for breakfast. But that’s a story for another time.

Stanford President John Hennessy Educating SBU Alumni on Broad Range of Topics

Moderator’s Comment:

I would like to express my utmost gratitude to Professor Hennessy for our 1 hour 35 minute conversation on March 2, 2014, at the SBU Alumni Northern California meeting in Palo Alto, CA. The Professor treated the audience to an enlightening seminar on many diverse and interesting topics that spanned education, technology and the value of liberal arts/reading classic literature. Not only was his presentation crystal clear and very informative, but his relaxed style and down-to-earth discourse made it most enjoyable as well. Alan J Weissberger – BS 1968 Math & Electrical Science (SUNY at Stony Brook)

The individual videos for this outstanding event are posted at:

http://www.viodi.tv/category/history-2/john-hennessy-history-2/

while the entire presentation can be watched in the player, below:

Acknowledgement:  This author and moderator is indebted to Ken Pyle for his skillful videography of this event and his great effort in working with me to add titles and captions to each of the 25 video segments.

Quotes from the Professor and Audience:


From Professor Hennessy:

My conversation on March 2nd with Dr Alan J. Weissberger  and an attentive audience of Stony Brook alumni was truly enjoyable. From the terrific introduction, to the insightful and well-prepared questions, we covered a wide variety of topics. The audience questions and discussion were equally enlightening. I hope my fellow alumni enjoyed the afternoon as much as I did!  Many thanks.
John L. Hennessy – PhD 1977 Computer Science & President, Stanford University


Hi Alan,
You did a great job moderating today and Professor Hennessy was very engaging. It could have gone all day and it would have kept me interested. Some impressions:

  1. Inspiring. The first call on my way home was to my 15-year-old son, encouraging him to continue pursuing his dream of Stanford.
  2. Education – very insightful regarding the role of online, flip education and MOOC.
  3. Manufacturing. When he mentioned manufacturing coming back to the United States and that it would have to go to lower-cost places, I thought of places like Stockton and the Central Valley. Interestingly, it seems like there could be some interesting collaboration between Stanford, SJSU and the City of San Jose to make those sorts of things happen.

I look forward to listening to the interviews again as I edit the videos.

Ken Pyle, Viodi.TV


I too express my gratitude and thanks to Professor Hennessy for attending the SB University Alumni meeting on March 2nd in Palo Alto, and engaging in a truly enlightening conversation with Dr. Alan Weissberger. Prof. Hennessy’s insightful commentary on the current state of research and development, manufacturing, and online education was invaluable and made the event so special. Thanks to Alan and others for asking enlightening and stimulating questions, which showed the tremendous interest of all those who attended. Hope to see everyone at our next SBU Alumni event.

Shashi Agarwal  – PhD Materials Science 1979


Alan—Thanks for a great event!!
Your detailed preparation for today’s interview made for a wonderful afternoon with Prof. Hennessy.  Really appreciate his perspectives on Stanford, Stony Brook and the value of higher education.  He was very generous with his time.

Best regards, Jean Bozman – BA 1973


The SBU alumni meeting on March 2nd was an outstanding event.  It was a privilege and honor to hear from such an esteemed professional (Prof Hennessy), who has a keen insight on so many important things going on in our world.  The event was exceptionally organized and kudos to Alan J. Weissberger, the moderator.  Very rarely do you get to see a moderator who can control the flow of the conversation, handle the crowd, and engage the speaker in a thought provoking discussion all at once.  After hearing Prof. Hennessy speak, my mind was recharged and once again excited about the possibilities of the future!

Byung Sa – BA 2011 History


The Northern California Alumni event on March 2nd was a resounding success for our local chapter. With the assistance of Janet Friello-Masini and Matthew Colson from Stony Brook Alumni Relations and the local chapter leadership of Shashi Agarwal and Alan Weissberger, the event was a memorable highlight in our short history of existence. The venue was top notch, as was the luncheon menu. The real highlight of the day was the selection of Professor Hennessy as our keynote speaker. I want to thank Alan for all his work in securing the speaker and in organizing the interview-style session with pertinent, well-thought-out questions! Also, the Q and A with the audience illustrated the interest of all the attendees and the generosity of Professor Hennessy’s time and insight. The history of Silicon Valley and technology in our lives certainly was the topic of the day. But the more rewarding part of the day was how well the organizers were able to have all of us engaged in the event. Thanks again for all your efforts!

Alan “Coach” Koch – BA 1971  Secondary Education

Silicon Photonics – Cisco and Intel see "Light at the End of the Tunnel"

Introduction:

Among the many presentations on Silicon Photonics (SiPh) at the excellent 2013 Open Server Summit, two were of special interest:

  • Joel Goergen of Cisco called for a radically new data center architecture that used SiPh to interconnect components and modules, rather than circuit cards or racks of equipment.
  • Mario Paniccia of Intel focused on using SiPh for rack level interconnects, but called attention to total solution cost as a critical issue to be solved.

The other presentations – from SiPh component vendors, potential customers (Oracle), and a market researcher (Ovum)- all agreed on the promise and potential of SiPh, but differed greatly on the technology details, link distance, receiver vs transceiver, and “sweet spot” for a volume market.

Silicon Photonics is a new approach to using light (photons) to move huge amounts of data at very high speeds with extremely low power over a thin optical fiber rather than using electrical signals over a copper cable.  It’s been in the research stage at Intel for over 10 years, while a few component/module companies have already shipped SiPh receivers (but not integrated transmitter/receivers or transceivers yet).

For a description of all the SiPh (and other) presentations at the 2013 Open Server Summit, please visit their web site for the conference program.  You will also find catchy quotes there like: “Only silicon photonics holds the promise of making 100G more cost-effective than 10G and 40G nets,” by Andy Bechtolsheim, Arista Networks, Oct 2012.


Using Integrated Silicon Photonics for Higher Speed Interconnect Technology – A Framework for the Next Generation, by Joel Goergen of Cisco:

Exponentially increasing Internet traffic, along with the Internet of Things (IoT), will place a huge burden on next generation, cloud resident data centers. The new requirements include: higher system performance, coping with higher power consumption via more effective cooling concepts, and faster interconnect speeds (between components, modules, cards, and racks). The challenge for designers is to provide faster compute/storage/networking systems with more effective bandwidth/performance per Watt and with highly efficient cooling. Hopefully, all that can be provided at improved cost/performance/power efficiency to the owner of the data center.

Goergen sees the prime use of SiPh as a high speed/low latency interconnect for individual components and modules used for compute, memory and storage (possibly networking as well, but that was not mentioned). Attributes of this future system include: lots of links, very low latency, lower power consumption, minimal protocol overhead, security, and easy scaling.

The realization of that vision is shown in the figure below.

Dis-aggregated set of things becomes interconnected through Silicon Photonics.
Silicon Photonics Simplifying Interconnections

A huge advantage of this “SiPh to connect everything” approach is “intelligent power,” which includes power efficiency, monitoring and the capability to repurpose power from one area to another. The focus would be on “power distribution to the chip level,” according to Joel. His stated bottom line was that “total ASIC power is screaming for alternative system architectures.”

An illustration of “intelligent power” within a future data center is shown in the illustration below:

Silicon Photonics has the potential to enable intelligent powering, improving overall data center power efficiency.
SiPh Will Improve Power Efficiency

The advantages of this novel approach include optimized cooling in a decentralized environment and more effective use of Data Center facility space.  Joel proposed to localize the CPU/Memory/Storage farms and contain the heat within that area of the building.  The result would be to keep like components together and allow farm types to change as the Data Center grows or as needs change, while better managing electrical and cooling distribution costs. He said that such a distributed architecture would drive new, enhanced cooling technologies.

Author’s Note:

The emphasis on power and cooling is of utmost importance as this is often cited as the number one problem with large, high performance Data Centers. Joel is proposing use of SiPh to mitigate that problem.

In summary, this presentation proposes use of SiPh for a high speed/low latency interconnect for components and modules within Data Center equipment.  The concept of cards and racks is replaced by interconnected components/modules.

The benefits were said to include:

  • Drive Higher Voltages to the chip due to a reduction in the DC Voltage (IR) drop (a rough worked example follows this list)
  • Intelligent Power Distribution - not just Efficiency or Monitoring
  • Liquid Cooling at the chip / at the system - hotter components and higher densities are coming
  • Dis-Integrate the Data Center Components - target the most effective way to organize and optimize power and cooling, using Photonic Interconnects as the framework
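
As a rough worked example of the first bullet (my own back-of-the-envelope numbers, not Cisco's), raising the distribution voltage for a fixed load power lowers the current, and the resistive (I²R) loss in the distribution path falls with the square of that current:

```python
# Back-of-the-envelope illustration of why higher distribution voltages cut
# IR-drop losses. The load power and path resistance are hypothetical values
# chosen for illustration, not figures from the presentation.

def distribution_loss_watts(load_watts: float, volts: float, path_resistance_ohms: float) -> float:
    """I^2 * R loss in the power-distribution path for a given load and rail voltage."""
    current_amps = load_watts / volts
    return current_amps ** 2 * path_resistance_ohms

if __name__ == "__main__":
    LOAD_W = 200.0   # hypothetical ASIC + memory load
    PATH_R = 0.002   # hypothetical 2 milliohm distribution path
    for volts in (1.0, 12.0, 48.0):
        loss = distribution_loss_watts(LOAD_W, volts, PATH_R)
        print(f"{volts:5.1f} V rail -> {loss:8.3f} W lost in distribution")
```

With these hypothetical numbers the loss drops from roughly 80 W on a 1 V rail to well under 1 W at 12 V, which is the kind of saving that makes intelligent power distribution attractive.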

Revolutionizing Computing and Communications with Silicon Photonics, by Mario Paniccia, PhD (Physics), of Intel

Intel claims that Silicon photonics offers a way to extend silicon manufacturing to higher speeds and thus provide low cost opto-electronic solutions and tremendous bandwidth. The results would be advances in a wide range of applications in servers, high-performance computing, and networking. Recent developments point to practical applications in the near term. For example, a new optical connector and fiber technology support data rates up to 1.6 terabits per second.

Mario unequivocally stated that the “sweet spot” for SiPh deployment was rack level interconnects on the order of six to 12 inches. [Other SiPh speakers talked about distances of 2km and more].  He indicated that Mega Data Centers, High Performance Computing (HPC) and the NSA Data Center in Utah were all interested in SiPh for that application. SiPh promises include: increased performance and energy efficiency with lower system cost and thermal density. This will “enable new form factors,” he added.

Paniccia claims that any interconnect link >= 25G b/sec at a distance of >= 2m will need a photonic link. But such fiber optic interconnect links are expensive and dominate HPC/Mega Data Center costs. The challenge is total systems cost, which includes the photonics (laser, packaging, assembly) as well as the cables and connectors.  “Current cost constraints limit use of photonics in and around servers,” Mario said.
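
Paniccia's rule of thumb is simple enough to write down directly; the sketch below merely encodes the two thresholds he quoted (the thresholds are his, the code is only illustrative):

```python
# Encodes the rule of thumb quoted above: links running at >= 25 Gb/sec over
# distances of >= 2 m are candidates for a photonic rather than copper interconnect.

def needs_photonic_link(rate_gbps: float, reach_m: float) -> bool:
    """True if the link falls in the regime Paniccia says requires photonics."""
    return rate_gbps >= 25 and reach_m >= 2

print(needs_photonic_link(25, 3))   # True: a rack-scale 25G link
print(needs_photonic_link(10, 5))   # False: 10G can stay on copper by this rule
```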

According to Paniccia,  “The goal of SiPh is to bring the advantages of semiconductor processing to optical communications.  In particular, high volume, low cost, highly integrated functions and scalable speeds.”

“Intel has built optical devices in silicon that operate >40G b/sec,” according to Mario.  A crucial point is that SiPh building blocks are now being integrated into a complete system.  These include: lasers, data encoders, light detectors, and other functions.  Intel is using a “hybrid Silicon laser” along with advanced packaging and assembly techniques. This is in sharp contrast to the other SiPh vendors which all use separate off-chip laser light sources.

In 2009, Intel demonstrated a 50G b/sec SiPh link that was organized as 4 wavelengths x 12.5G b/sec per channel.  Silicon germanium was used as a photo-detector. Intel quietly pursued their research without making other public demonstrations until this year.

  1. This January, Intel and Facebook announced they were collaborating on “Future Data Center Rack Technologies.”
  2. In April  2013, Intel showed a live demo of a 100G b/sec SiPh link at their IDF conference.  It was claimed to be “a completely integrated module that includes silicon modulators, detectors, wave-guides and circuitry.” [Intel believes this is the only module in the world that uses a hybrid silicon laser.  For more on this topic see Panel at the end of the article].
  3. Intel CTO Justin Rattner also displayed at IDF the new photonics cable and connector that Intel is developing with Corning. This new connector has fewer moving parts, is less susceptible to dust and costs less than other photonics connectors. Intel and Corning intend to make this new cable and connector an industry standard. Rattner said the connector can carry 1.6 terabits/sec. You can watch the video here.
  4. In September 2013, Intel showcased the above-referenced MXC cable and connector developed with Corning, capable of 1.6 terabits/sec per cable with up to 64 fibers (a quick sanity check of these aggregate rates follows this list). They also demonstrated a 300m SiPh link @ 25G b/sec over multimode fiber.
  5. At ECOC later that month, Intel demonstrated 25G b/sec SiPh transmission but at a much longer 820m.
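
The per-lane and aggregate rates quoted above are easy to sanity-check; the short sketch below simply reproduces the 50G b/sec and 1.6 terabit/sec aggregates from the figures cited in this article:

```python
# Sanity check of the aggregate link rates quoted in this article.

def aggregate_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Total throughput of a parallel optical link."""
    return lanes * gbps_per_lane

# Intel's 2009 demo: 4 wavelengths x 12.5 Gb/sec per channel
print(aggregate_gbps(4, 12.5), "Gb/sec")        # 50.0 Gb/sec

# MXC cable: up to 64 fibers x 25 Gb/sec per fiber
print(aggregate_gbps(64, 25) / 1000, "Tb/sec")  # 1.6 Tb/sec
```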

But what is really significant is Intel’s emphasis that a total systems approach is needed to make SiPh a viable interconnect technology.  That includes photonics, cables, connectors, and structured wiring/assembly, which includes optical patch panels to interconnect servers in a rack.

Mario concluded by saying that Intel plans to make SiPh real and that the future for the technology was very bright. We take his words very seriously!

Closing Comment and Analysis:

This author has followed Intel closely since first applying for a job there in the summer of 1973. I’ve also worked for the company as a consultant in the late 1980s and mid 1990s.  We have never before seen Intel pursue a research project for more than three years without either bringing it to market or killing it (neural computing was a late 1980s hot project that was killed as that market was not there- and still isn’t).  SiPh is quite an exception to that practice as it’s been in the research phase at Intel for over 10 years!

But Intel may be announcing SiPh products very soon.  This past January, they announced they’re working with Facebook on 100G b/sec rack interconnects for Data Centers.

And we couldn’t help notice this Intel job advertisement for a SiPh Market Development Manager.

Would Intel be hiring such a person if a product announcement was not forthcoming in the near future?  

SiPh could be one of the most exciting developments in large Data Centers and HPC in years.  It could aid, abet and accelerate the movement to cloud computing.  The technology also has the potential to drastically change the architecture of compute, memory, storage and network equipment within the Data Center, as Joel Goergen of Cisco proposes.  That would be creative destruction for Cisco, which has a huge market in all types of Data Center equipment.

–>Stay tuned for more SiPh developments coming this year and next.  We are watching all aspects of this technology very closely.


For a list of Intel’s SiPh research achievements please visit:

http://www.intel.com/content/www/us/en/research/intel-labs-silicon-photonics-research.html


PANEL: Hybrid Silicon Laser Project

Intel and the University of California Santa Barbara (UCSB) announced the demonstration of the world’s first electrically driven Hybrid Silicon Laser. This device successfully integrates the light-emitting capabilities of Indium Phosphide with the light-routing and low cost advantages of silicon. The researchers believe that with this development, silicon photonic chips containing dozens or even hundreds of hybrid silicon lasers could someday be built using standard high-volume, low-cost silicon manufacturing techniques. This development addresses one of the last hurdles to producing low-cost, highly integrated silicon photonic chips for use inside and around PCs, Servers, and Data Centers.

http://www.intel.com/content/www/us/en/research/intel-labs-hybrid-silicon-laser.html

The Vision of TelcoVision 2013

[Editor’s Note: For more great insight from Kshitij about “Big Data” and other topics, please go to his website at http://www.kshitijkumar.com/]

A portrait of Big Data guru, Kshitij Kumar.
Kshitij Kumar

TelcoVision (formerly TelcoTV) was held in Las Vegas, NV Oct 23-25 this year. While the name has changed, much else was similar to previous years. The intent with the name change, of course, was to reflect that the sessions and audience interest are not just in TV, but in all services provided by Telcos. Content this year was progressive, attendance and exhibits were pretty steady and the hallway chatter was interesting.

Amongst a plethora of interesting topics, the three main trends of discussion at TelcoVision could be summarized as OTT/IP/Multiscreen, new revenue opportunities via Telemedicine, Wireless, etc. and Big-Data/Analytics.

The multiscreen/OTT sessions were well attended and the ecosystem surrounding OTT was visible at almost every booth in the exhibit hall. IP-based delivery to devices beyond the traditional set-top-box was a topic debated in several sessions, as well as in hallway conversations. Topics ranged from the delivery of on-demand versus live content on iPads and other tablets, to the streaming technology requirements, to the issue of content rights for streaming to all these devices. One issue discussed on a couple of panels related to the advertising possibilities on such devices; the situation appears far from clear, at least from the point of view of rural Telcos and overbuilders.

Several sessions covered new revenue opportunities. Collaboration with health institutions has long been part of the locally focused strategy for Telcos, and Telemedicine seems to be generating a lot of interest, both with its requirements for high-quality, high-capacity transport and its ability to improve the quality of life for Telco customers. Wireless services (free local WiFi as well as traditional wireless) were also debated on panels.

One new area of interest was Big Data and Analytics. Network analytics was the topic of discussion at several sessions and garnered interest across the board. The analytics sessions were fairly well attended.

The Social and Web analytics session, on which this author spoke, was notable for the lively audience interaction after the individual presentations, despite being one of the last sessions of the conference, just before the closing keynotes. It goes to show how much Telcos care about customer sentiment and interests, and the revenue opportunities that flow from them. Since this is a topic that even the larger service providers are still grappling with, it’s good to see rural Telcos interested in getting involved at the cutting edge of these technologies.

Overall, it was a pretty useful show – TelcoVision is off to a good start.

Multi-Screen Video Content and OTT Partnerships Enabled by New Video Network Architectures – Part 2 of 2

Introduction:

This is the second of a two-part article on the 2013 OTTCON.  The first article looked at how Pay TV providers could offer OTT content on second screen devices and also how OTT and local providers (Pay TV or ISPs) could partner to offer OTT content to subscribers.  This second article examines how video network infrastructures need to evolve to support both Pay TV and OTT content.

Video Network Architectures:

With live linear, Video on Demand (VoD) and now OTT content, delivery of multiple concurrent video services has become increasingly complex for pay TV service providers.  Nonetheless, providing access to quality video must remain a core competency of Service Providers (SP), else they’ll lose customers to competing offerings.   With the amount of content available today, SPs’ network infrastructures need to be able to handle network capacity issues in order to seamlessly deliver video content from the cloud to TVs, PCs and mobile devices.  That could involve costly network investments.

Service providers not only need to expand the accessibility of quality content to new screens, but they need to do this while meeting consumers’ expectations of a seamless content viewing experience when switching from one type of video to the other.  Quality of Service (QoS) will have a very strong impact on viewer engagement.

Content protection is another top concern for SPs.  It preserves content revenues from their subscriber base. Ensuring that only authorized subscribers access certain content can be difficult, especially with the expansion of viewing platforms. It is critical that video SPs address content security, entitlements and authorization.
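
One common way to enforce entitlements across many viewing platforms is a short-lived, signed token issued by the SP’s back office and verified wherever the content is served. The sketch below is purely illustrative (a hand-rolled HMAC token, not any vendor’s DRM or conditional-access system):

```python
# Illustrative only: a minimal HMAC-signed entitlement token. Real deployments
# use standardized DRM / conditional-access systems; this just shows the idea.
import hashlib
import hmac
import time

SECRET = b"shared-secret-between-back-office-and-edge"   # hypothetical key

def issue_token(subscriber_id: str, asset_id: str, ttl_seconds: int = 300) -> str:
    """Back office ties a subscriber to an asset for a limited time."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{subscriber_id}|{asset_id}|{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str, asset_id: str) -> bool:
    """Delivery edge checks the signature, the asset and the expiry before serving."""
    try:
        subscriber_id, token_asset, expires, sig = token.split("|")
    except ValueError:
        return False
    payload = f"{subscriber_id}|{token_asset}|{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and token_asset == asset_id
            and int(expires) > time.time())

token = issue_token("subscriber-123", "movie-456")
print(verify_token(token, "movie-456"))   # True: entitled
print(verify_token(token, "movie-789"))   # False: wrong asset
```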

As viewing continues to increase on new screens and platforms, multi-screen services will continue to be a priority. The various screens will require different video formats that need to be well-managed and secure to provide a seamless ‘video anywhere’ experience.
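
A simple way to picture the “different video formats per screen” problem is an adaptive-bitrate ladder: each client picks the highest rendition that its measured bandwidth and screen size can sustain. The sketch below is illustrative only; the renditions and thresholds are hypothetical, and real players use more sophisticated heuristics:

```python
# Illustrative adaptive-bitrate selection. The ladder below is hypothetical,
# not taken from any particular service provider's encoding profile.

LADDER = [
    # (label, bitrate_kbps, height_px), ordered from highest to lowest quality
    ("1080p", 6000, 1080),
    ("720p",  3000,  720),
    ("480p",  1500,  480),
    ("360p",   800,  360),
]

def pick_rendition(measured_kbps: float, screen_height_px: int) -> str:
    """Highest rendition that fits both the bandwidth estimate and the screen."""
    for label, kbps, height in LADDER:
        if kbps <= measured_kbps and height <= screen_height_px:
            return label
    return LADDER[-1][0]   # fall back to the lowest rendition

print(pick_rendition(measured_kbps=4500, screen_height_px=1080))  # 720p: tablet on good WiFi
print(pick_rendition(measured_kbps=900,  screen_height_px=720))   # 360p: phone on a weak link
```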

All of the above factors will require new video architectures with enhanced hardware and software platforms needed to deliver OTT content.  A high level view of an OTT architecture (courtesy of Discretix) is shown below.  The figure does not include 3G/4G wireless access to OTT content because almost all mobile devices will access OTT content via an in-home WiFi network which connects to a Residential Gateway.

A depiction of the complicated architecture of multiscreen video.
Image Courtesy of Discretix

A more comprehensive video network architecture was described by Microsoft and Alcatel-Lucent in a 2013 OTTCON session titled, “Strategies of Unlocking Additional Values from OTT.”  That session was directed at existing Pay TV SPs that wanted to deliver OTT content along with their existing linear and video on demand (VoD) programming.

OTT value creation was said to have three underlying pillars:

  • Integrated TV platform across experiences and devices
  • Build OTT experiences tied to core TV proposition
  • Web analytics and Internet speed applied to TV

The figure below depicts how a SP video network could adapt to support OTT content delivery.  Among the key network functions are:

  • Redistribute STB functions to CE/clients and the home network
  • A highly distributed Content Delivery Network (CDN)* for uni-cast scaling and multi-cast video (which has been demonstrated by Ericsson)
  • Session based personalization to create new value for consumers, content owners and advertisers
  • Agile back office architecture to launch and evolve services cost-effectively

*A CDN is a large distributed system of servers deployed in multiple data centers across the Internet.  It provides lower latency content services to end-users with higher availability and performance than the “best effort” Internet.
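
As a rough illustration of why a distributed CDN lowers latency (a toy model, not any particular CDN’s request-routing logic), a request router can simply steer each client to the edge cache with the lowest measured round-trip time:

```python
# Toy CDN request routing: send the client to the lowest-latency edge cache.
# Edge names and RTT measurements are hypothetical, for illustration only.

EDGE_RTT_MS = {
    "edge-sanjose": 8.0,
    "edge-chicago": 42.0,
    "edge-newyork": 71.0,
}

def pick_edge(measured_rtt_ms: dict) -> str:
    """Return the edge cache with the lowest measured round-trip time."""
    return min(measured_rtt_ms, key=measured_rtt_ms.get)

print(pick_edge(EDGE_RTT_MS))   # edge-sanjose
```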

Image depicting what needs to be done to adapt to HAS and Internet control.
Image courtesy of Alcatel Lucent & Microsoft

The Unified Video Network illustration below shows unified content distribution, distributed caching (of video content) and re-purposed video servers to permit SPs to reuse existing assets.

Image depicting a unified video delivery network.
Image courtesy of Alcatel Lucent and Microsoft

Personalization in the core SP network is shown in the figure below.

An image depicting what needs to be done in the core network to enable personalization.
Image courtesy of Alcatel Lucent and Microsoft

Microsoft’s Mediaroom* was said to be the market leading IPTV software platform with 50 deployments in 23 countries, including AT&T (U.S.), Deutsche Telekom (Germany), and Sonus (Canada).  Alcatel-Lucent claims to be number one in video network and systems integration, with 30 network operators using their equipment, according to the company.

Microsoft and Alcatel-Lucent platforms are used by AT&T U-Verse, which announced second screen video content delivery last July, but has yet to make it available (as noted in the comments directly below the Part I article):

http://viodi.com/2013/04/01/multi-screen-video-content-and-ott-partnerships-enabled-by-new-video-network-architectures/

*Editor’s Note: It will be interesting to see what happens with Alcatel-Lucent’s ability to resell/integrate Mediaroom if the rumors prove true that Alcatel-Lucent rival Ericsson will acquire the Mediaroom platform from Microsoft.

Addendum:

While not participating in OTTCON, Chinese telecom equipment vendors ZTE and Huawei also have a well-established role in the IPTV market, based on strong and growing IPTV platforms within China (e.g. China Telecom).

“There is a strong split in the IPTV middleware market between system integrators providing an entire solution and specialists in applications and customer experience,” according to Sam Rosen, ABI Research practice director for TV & video. “With the exception of Cisco, who recently purchased NDS, the system integrator’s role in customer experience will likely decline over the next few years; instead, this role will be left to client-centric middleware companies with better user experience,” added Rosen. The analyst stated that Viaccess-Orca (a subsidiary of France Telecom), Netgem and others, each have their own unique philosophy on how to create an IPTV system. ABI Research forecasts that IPTV households will grow from 80 million in 2012 to 117 million in 2017, with growth driven by Asia-Pacific.

http://www.abiresearch.com/press/microsoft-extends-lead-to-23-of-the-iptv-middlewar

2013 IDC Directions Part III – Where Are We Headed with Software-Defined Networking (SDN)?

Introduction:

In the third article on the IDC Directions 2013 Conference (March 5th in Santa Clara, CA), we take a hard look at Software Defined Networking as presented by Rohit Mehra, IDC VP for Network Infrastructure.

Note: Please see 2013 IDC Directions Part I for an explanation of the “3rd Platform” and its critical importance to the IT industry, and Part II on New Data Center Dynamics and Requirements.


Background:

IDC firmly believes that the “3rd Platform” is the way forward and that the network is the vital link between cloud computing and mobility.  “The Cloud is evolving into a comprehensive, integrated application delivery model incorporating all four elements of the 3rd platform,” said Mr. Mehra.

  • Cloud Apps require network agility, flexibility and must support higher east-west traffic flows (between servers in a cloud resident data center).
  • Mobile access is crucial with the proliferation of mobile devices (e.g. smart phones and tablets) and continued exponential growth of mobile data traffic.
  • Variable end points and different traffic patterns must be supported.
  • Social networking is being integrated with other enterprise applications. This is resulting in increased volumes of cloud data exchanges with client devices and more server-to-server traffic flows.
  • Big Data/Analytics results in scale-out computing which needs scale-out networking. Greater application-to-network visibility will be required.

As a result of these strong 3rd platform trends, Mr. Mehra said, “Application access/delivery is dependent on the  cloud resident data center and enterprise network.  Both will need to become more dynamic and flexible with SDN.”

IDC asked IT managers: What was the main reason you needed to Re-Architect The Network to support Private Cloud? The top three reasons were:

  • We needed to ensure security between virtual servers
  • We needed more bandwidth to support the virtualized applications
  • The network became a bottleneck to new service provisioning

Rohit said that SDN could address those issues and was gaining traction in the data center.  “SDN provides better alignment with the underlying applications, along with improved flexibility and command of the network,” he said.  Through SDN models, companies will likely find it easier to implement virtual cloud hosting environments, according to Rohit.

A recent IDC study, SDN Shakes Up the Status Quo in Datacenter Networking, projected that the SDN market will increase from $360 million in 2013 to $3.7 billion in 2016.

SDN Attributes include:

  • Architectural model that leads to network virtualization
  • Dynamic exchange between applications and the network
  • Delivering programmable interfaces to the network (e.g., OpenFlow, APIs)
  • Management abstraction of the topology
  • Separation of control and forwarding functions (implemented in different equipment) (see the sketch below)
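
To make the “programmable interfaces” and “separation of control and forwarding” attributes concrete, here is a deliberately simplified, OpenFlow-flavored sketch (my own toy model, not the actual OpenFlow protocol): a controller computes match/action rules and pushes them into the flow tables of otherwise policy-free forwarding elements, which fall back to asking the controller on a table miss.

```python
# A deliberately simplified, OpenFlow-flavored model of control/forwarding
# separation -- not the real OpenFlow protocol, just the idea behind it.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict       # e.g. {"dst_ip": "10.0.0.5"}
    action: str       # e.g. "output:port2" or "drop"
    priority: int = 0

@dataclass
class Switch:
    """Forwarding element: only matches packets against rules it was given."""
    name: str
    flow_table: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: r.priority, reverse=True)

    def forward(self, packet: dict) -> str:
        for rule in self.flow_table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "send_to_controller"   # table miss: punt to the control plane

class Controller:
    """Control plane: decides policy and programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, rule: FlowRule) -> None:
        for sw in self.switches:
            sw.install(rule)

sw = Switch("tor-1")
Controller([sw]).push_policy(
    FlowRule(match={"dst_ip": "10.0.0.5"}, action="output:port2", priority=10))
print(sw.forward({"dst_ip": "10.0.0.5"}))   # output:port2
print(sw.forward({"dst_ip": "10.0.0.9"}))   # send_to_controller (table miss)
```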

Rohit stated that SDN was NOT another name for “Cloud-based Networking” and that they were each in functionally different domains:

  • Cloud-based Networking involves emerging network provisioning, configuration and management offerings that leverage cloud Computing and Storage capabilities.
  • It’s a “Network As A Service” model that can apply to routers, WLAN, Unified Communications, app delivery, etc.

Rohit expects network equipment and network management vendors to add these capabilities to their platforms in 2013.

Three Emerging SDN Deployment Models are envisioned by IDC:

1. Pure OpenFlow (more on the role of Open Flow later in this article)

  • Driven largely by being open and standards-based (by Open Networking Foundation or ONF)
  • Inhibited by fluidity of OpenFlow release schedule; limited support in existing switches

2. Overlays

  • Exemplified by Nicira/VMware’s Network Virtualization Platform (NVP), IBM’s DOVE, others
  • Some vendors that started out offering “pure OpenFlow” have adopted overlays (Big Switch Networks)

3. Hybrid (Overlay, OpenFlow, Other Protocols/APIs)

  • Put forward by established networking players such as Cisco and Juniper
  • Offer SDN controller, with support for distributed control plane for network programmability and virtualization, etc.
Image courtesy of IDC.

SDN vendors are offering SDN solutions from four different perspectives. Many of them solely target one of the four, while others offer a combination of the following:

  • SDN enabled switches, routers, and network equipment in the data/forwarding plane
  • Software tools and technologies that serve to provide virtualization and control (including vSwitches, controllers, gateways, overlay technologies)
  • Network services and applications that involve Layers 4-7, security, network analytics, etc
  • Professional service offerings around the SDN eco-system

SDN’s Place in the Datacenter – IDC sees two emerging approaches:

1. Some vendors will push SDN within the framework of converged infrastructure (servers, storage, network, management)

  • Appeals to enterprises looking for simplicity, ready integration, and “one throat to choke”
  • Vendors include HP, Dell, IBM, Cisco, Oracle and others

2. Some IT vendors will offer a software-defined data center, where physical hardware is virtualized, centrally managed, and treated as an abstracted resource that can be dynamically provisioned/configured.

  • Vendors include VMware, Microsoft, perhaps IBM
Image courtesy of IDC.

SDN Will Provide CapEx and OpEx Savings:

OpEx

  • Better control and alignment of virtual and physical resources
  • Automated configuration, and management of physical network
  • Service agility and velocity

CapEx

  • Move to software/virtual appliances running on x86 hardware can reduce expenditures on proprietary hardware appliances
  • Support for network virtualization improves utilization of server and switch hardware
  • Potentially cheaper hardware as SDN value chain matures (long-term, not today)

Role of OpenFlow as SDN Matures:

  • Initial OpenFlow interest and adoption from research community, cloud service providers (e.g., Google, Facebook) and select enterprise verticals- e.g., education
  • Led to successful launch of Open Networking Foundation (ONF)
  • Centralized control and programmability is the primary use case- but that may be its limitation
  • At a crossroads now- OpenFlow taking time to mature and develop, while alternate solutions are emerging
  • As the market for SDN matures, OpenFlow is likely to be one of the many tools and technologies (but not the ONLY protocol to be used between Control plane virtual switches/servers and Data forwarding equipment in the network)

SDN Challenges and Opportunities– For SDN Vendors and Customers:

  • Vendors will need to consider adding professional services to their SDN portfolio
  • The value chain will benefit from these services early within the market adoption cycle
  • Need for SDN certification and training programs to engage partner and customer constituencies and to reduce political friction associated with change
  • Education on use cases is critical to getting vendor message across, and for creating broader enthusiasm for change among customers
  • Customers must ensure that they have the right mix of skills to evaluate, select, deploy, and manage SDN
  • The battle to break down internal silos will intensify: alignment of applications and networks means aligning the teams that run them
Image courtesy of IDC.

Conclusions:

1. SDN is rapidly gaining traction as a potentially disruptive technology transition, not seen for a long time in networking
2. SDN is riding the wave of a “Perfect Storm,” with many individual market and technology factors coming together:

  • Growth of Cloud Services/Applications
  • Focus on converged infrastructures (compute/storage/network)
  • Emergence of Software-Defined Data Center (SDDC)
  • Lessons learned (and benefits) from server virtualization

3. SDN brings us closer to application and network alignment with next-generation IT
4. Incumbent vendors will need to find the right fit between showing leadership in SDN innovation and balancing existing portfolio investments


Addendum: Software Defined Networks and Large-Scale Network Virtualization Combine to Drive Change in Telecom Networks

In a March 7th press release, IDC wrote that SDN and large-scale network virtualization are two emerging telecom industry technologies that will combine to drive a more software-centric and programmable telecom infrastructure and services ecosystem. These complementary and transformative technologies will have a sustained impact on today’s communication service providers and the way they do business.

“IDC believes that the rapid global growth of data and video traffic across all networks, the increasing use of public and private cloud services, and the desire from consumers and enterprises for faster, more agile service and application delivery are driving the telecom markets toward an inevitable era of network virtualization,” said Nav Chander, Research Manager, Telecom Services and Network Infrastructure, IDC.

“SDN and large-scale network virtualization will become a game shifter, providing important building blocks for delivering future enterprise and hybrid, private, and public cloud services,” he added.  Additional findings from IDC’s research include the following:

  • Time to service agility is a key driver for SDN concepts
  • Lowering OPEX spend is a bigger driver than lowering CAPEX for CSPs
  • Network Function Virtualization and SDN will emerge as key components of both operator service strategies and telecom networking vendors’ product strategies

The IDC study, Will New SDN and Network Virtualization Technology Impact Telecom Networks? (IDC #239399), examines the rapidly emerging software-defined network (SDN) market, the developments in large-scale network virtualization, and a new Network Functions Virtualization ecosystem, which are likely to have an impact on telecom equipment vendors’ and CSP customers’ plans for next-generation wireline and wireless network infrastructure.


References:

http://community.comsoc.org/blogs/alanweissberger/fbr-sdn-result-40-drop-switchrouter-ports-deployed-service-providerslarge-en-0

http://community.comsoc.org/blogs/alanweissberger/googles-largest-internal-network-interconnects-its-data-centers-using-software


IEEE ComSocSCV had the two leaders of the SDN movement talk at one of our technical meetings last year. Their presentations are posted in the 2012 meeting archive section of the chapter website:

Date: Wednesday, July 11, 2012; 6:00pm-8:30pm
Title: Software Defined Networking (SDN) Explained — New Epoch or Passing Fad?
Speaker 1: Guru Parulkar, Executive Director of Open Networking Research Center
Subject: SDN: New Approach to Networking
Speaker 2: Dan Pitt, Executive Director at the Open Networking Foundation
Subject: The Open Networking Foundation
http://www.ewh.ieee.org/r6/scv/comsoc/ComSoc_2012_Presentations.php