AT&T Outlines SDN/NFV Focus Areas for Domain 2.0 Initiative

Introduction: The White Paper

As previously reported*, AT&T’s future Domain 2.0 network infrastructure must be open, simple, scalable and secure, according to John Donovan, AT&T’s senior executive vice president of technology and network operations.

* AT&T’s John Donovan talks BIG GAME but doesn’t reveal Game Plan at ONS 2014  

But what does that really mean?  And what are the research initiatives that are guiding AT&T’s transition to SDN/NFV?

Let’s first examine AT&T’s Domain 2.0 white paper.

It explicitly states the goal of moving to a virtualized, cloud-based SDN/NFV design built on off-the-shelf hardware and merchant silicon, while rejecting the legacy of OSMINE compliance and traditional telecom standards for OSS/BSS.  Yet we could find no mention of the OpenFlow API/protocol.

“In a nutshell, Domain 2.0 seeks to transform AT&T’s networking businesses from their current state to a future state where they are provided in a manner very similar to cloud computing services, and to transform our infrastructure from the current state to a future state where common infrastructure is purchased and provisioned in a manner similar to the PODs used to support cloud data center services. The replacement technology consists of a substrate of networking capability, often called Network Function Virtualization Infrastructure (NFVI) or simply infrastructure that is capable of being directed with software and Software Defined Networking (SDN) protocols to perform a broad variety of network functions and services.”

“This infrastructure is expected to be comprised of several types of substrate. The most typical type of substrate being servers that support NFV, followed by packet forwarding capabilities based on merchant silicon, which we often call white boxes. However it’s envisioned that other specialized network technologies are also brought to bear when general purpose processors or merchant silicon are not appropriate.”

AT&T’s vision of a user-defined cloud experience.
Image courtesy of AT&T

“AT&T services will increasingly become cloud-centric workloads. Starting in data centers (DC) and at the network edges – networking services, capabilities, and business policies will be instantiated as needed over the aforementioned common infrastructure. This will be embodied by orchestrating software instances that can be composed to perform similar tasks at various scale and reliability using techniques typical of cloud software architecture.”

Interview with AT&T’s Soren Telfer:

As a follow up to John Donovan’s ONS Keynote on AT&T’s “user-defined network cloud” (AKA Domain 2.0), we spoke to Soren Telfer, Lead Member of Technical Staff at AT&T’s Palo Alto, CA Foundry. Our intent was to gain insight and perspective on the company’s SDN/NFV research focus areas and initiatives.

Mr. Telfer said that AT&T’s Palo Alto Foundry is examining technical issues whose solutions will address important problems in AT&T’s network.  One of those is the transformation to SDN/NFV so that future services can be cloud-based.  While Soren admitted there were many gaps in SDN/NFV standard interfaces and protocols, he said, “Over time the gaps will be filled.”

Soren said that AT&T was working within the Open Networking Lab (ON.LAB), which is part of the Stanford-UC Berkeley Open Network Research Community.  The ONRC mission from their website:  “As inventors of OpenFlow and SDN, we seek to ‘open up the Internet infrastructure for innovations’ and enable the larger network industry to build networks that offer increasingly sophisticated functionality yet are cheaper and simpler to manage than current networks.”  So ON.LAB’s work is certainly based on the OpenFlow API/protocol between the Control and Data Planes (which reside in different equipment).

The ON.LAB community is made up of open source developers, organizations and users who all collaborate on SDN tools and platforms to open the Internet and Cloud up to innovation.  They are trying to use a Linux (OS) foundation for open source controllers, according to Soren.  Curiously, AT&T is not listed as an ON.LAB contributor at http://onlab.us/community.html.

AT&T’s Foundry Research Focus Areas:

Soren identified four key themes that AT&T is examining in its journey to SDN/NFV:

1.  Looking at new network infrastructures as “distributed systems.”  What problems need to be solved?  Google’s B4 network architecture was cited as an example.

[From a Google authored research paper: http://cseweb.ucsd.edu/~vahdat/papers/b4-sigcomm13.pdf]

“B4 is a private WAN connecting Google’s data centers across the globe. It has a number of unique characteristics:  i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic  demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge.”

2.  Building diverse tools and environments for all future AT&T work on SDN/NFV/open networking. In particular, development, simulation and emulation of the network and its components/functional groupings in a consistent manner.  NTT Com’s VOLT (Versatile OpenFlow ValiDator) was cited as such a simulation tool for that carrier’s SDN based network.  For more on VOLT and NTT Com’s SDN/NFV please refer to: http://viodi.com/2014/03/15/ntt-com-leads-all-network-providers-in-deployment-of-sdnopenflow-nfv-coming-soon/

3.  Activities related to “what if” questions.  In other words, out-of-the-box thinking to potentially use radically new network architecture(s) to deliver new services.  “Network as a social graph” was cited as an example.  The goal is to enable new experiences for AT&T’s customers via new services or additional capabilities to existing services.

Such a true “re-think+” initiative could be related to John Donovan’s reply to a question during his ONS keynote: “We will have new applications and new technology that will allow us to do policy and provisioning as a parallel process, rather than an overarching process that defines and inhibits everything we do.”

+ AT&T has been trying to change its tagline to “Re-think Possible” for some time now.  Yet many AT&T customers believe “Re-think” is impossible for AT&T, as it’s stuck in outdated methods, policies and procedures.  What’s your opinion?

According to Soren, AT&T is looking for the new network’s ability to “facilitate communication between people.”  Presumably, something more than is possible with today’s voice, video conferencing, email or social networks?  Functional or universal tests are being considered to validate such a new network capability.

4.  Overlaying computation on a heterogeneous network system [presumably for cloud computing/storage and control of the Internet of Things (IoT)]. Flexible run times for compute jobs would be an example attribute for cloud computing.  Organizing billions of devices and choosing among meaningful services would be an IoT objective.

What then is the principal role of SDN in all of these research initiatives?  Soren said:

“SDN will help us to organize and manage state.”  That includes correct configuration settings, meeting requested QoS, concurrency, etc.   Another goal is to virtualize many physical network elements (NEs): DNS servers, VoIP servers and other NEs could be deployed as Virtual Machines (VMs).

Soren noted that contemporary network protocols internalize state. For example, the routing database for selected paths is stored internally in each router. An alternate “distributed systems” approach would be to externalize state so that it is not internal to each network element.

However, NEs accessing external state would require new state organization and management tools.  He cited Amazon’s Dynamo and Google’s B4 as network architectures AT&T was studying. But protocols that work with external state won’t be created and deployed any time soon.  “We’re looking to replace existing network protocols with those designed for more distributed systems in the next seven or eight years,” he added.
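To make the internal-versus-external state distinction concrete, here is a minimal sketch of the idea Soren described: routing state lives in a shared, Dynamo-style key-value store rather than inside each router. All class and method names here are illustrative assumptions, not anything AT&T has published.

```python
# Hypothetical sketch: externalizing routing state into a shared
# key-value store (Dynamo-style), instead of each router holding
# a private routing table. Names are illustrative only.

class ExternalStateStore:
    """Stands in for a replicated key-value service (e.g. Dynamo)."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


class StatelessRouter:
    """A network element that keeps no routing table of its own;
    every route install and lookup goes to the external store."""
    def __init__(self, name, store):
        self.name = name
        self.store = store

    def install_route(self, prefix, next_hop):
        # State lives in the store, visible to every NE at once.
        self.store.put(("route", prefix), next_hop)

    def lookup(self, prefix):
        return self.store.get(("route", prefix))


store = ExternalStateStore()
r1 = StatelessRouter("r1", store)
r2 = StatelessRouter("r2", store)

r1.install_route("10.0.0.0/8", "r3")
# r2 sees the route immediately; there is no per-router state to
# synchronize, which is the point of externalizing state.
print(r2.lookup("10.0.0.0/8"))  # -> r3
```

The trade-off, as the paragraph above notes, is that the network elements now depend on the availability and consistency of the external store, which is exactly why new state organization and management tools would be required.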

Summing up, Soren wrote in an email:

“AT&T is working to deliver the User Defined Network Cloud, through which AT&T will open, simplify, scale, and secure the network of the future.  That future network will first and foremost deliver new experiences to users and to businesses.

The User Defined Network Cloud and Domain 2.0, are bringing broad and sweeping organizational and technical changes to AT&T. The AT&T Foundry in Palo Alto is a piece of the broader story inside and outside of the company. At the Foundry, developers and engineers are prototyping potential pieces of the future network where AT&T sees gaps in the current ecosystem. These prototypes utilize the latest concepts from SDN and techniques from distributed computing to answer questions and to point paths towards the future network. In particular, the Foundry is exploring how to best apply SDN to the wide-area network to suit the needs of the User Defined Network Cloud.”

Comment and Analysis:

Soren’s remarks seem to imply AT&T is closely investigating Google’s use of SDN (and some version of OpenFlow or similar protocol) for interconnecting all of its data centers as one huge virtual cloud. It’s consistent with Mr. Donovan saying that AT&T would like to transform its 4,600 central offices into environments that support a virtual networking cloud environment.

After this year’s “beachhead projects,” Mr. Donovan said AT&T will start building out new network platforms in 2015, as part of its Domain 2.0 initiative.  But what Soren talked about was a much longer and greater network transformation.  Presumably, the platforms built in 2015 will be based on the results of the “beachhead projects” that Mr. Donovan mentioned during the Q&A portion of his ONS keynote speech.

Based on its previously referenced Domain 2.0 white paper, we expect the emphasis to be placed on NFV concepts and white boxes, rather than pure SDN/OpenFlow.  Here’s a relevant paragraph related to an “open networking router.”

“Often a variety of device sizes need to be purchased in order to support variances in workload from one location to another. In Domain 2.0, such a router is composed of NFV software modules, merchant silicon, and associated controllers. The software is written so that increasing workload consumes incremental resources from the common pool, and moreover so that it’s elastic: so the resources are only consumed when needed. Different locations are provisioned with appropriate amounts of network substrate, and all the routers, switches, edge caches, and middle-boxes are instantiated from the common resource pool. Such sharing of infrastructure across a broad set of uses makes planning and growing that infrastructure easier to manage.”
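The elastic behavior the white paper describes can be sketched in a few lines: a virtual router draws compute units from a common pool as its workload grows and returns them when load drops. The class names and the one-unit-per-100-flows scaling ratio below are assumptions for illustration, not anything specified in the Domain 2.0 paper.

```python
# Hypothetical sketch of an elastic NFV router drawing from a
# shared resource pool, per the white-paper excerpt above.
# The scaling ratio (1 unit per 100 flows) is an assumption.

class ResourcePool:
    """Common pool of compute units shared by all NFV modules."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.allocated = 0

    def acquire(self, units):
        if self.allocated + units > self.capacity:
            raise RuntimeError("pool exhausted")
        self.allocated += units

    def release(self, units):
        self.allocated -= units


class ElasticRouter:
    """Consumes incremental pool resources only while needed."""
    UNITS_PER_100_FLOWS = 1  # assumed scaling ratio

    def __init__(self, pool):
        self.pool = pool
        self.units = 0

    def set_workload(self, flows):
        # ceil(flows / 100) compute units for the current load
        needed = -(-flows // 100) * self.UNITS_PER_100_FLOWS
        if needed > self.units:
            self.pool.acquire(needed - self.units)   # scale up
        elif needed < self.units:
            self.pool.release(self.units - needed)   # scale down
        self.units = needed


pool = ResourcePool(capacity=10)
router = ElasticRouter(pool)
router.set_workload(250)  # scale up to 3 units
router.set_workload(100)  # scale down to 1 unit; 2 returned to pool
```

The design point is the one the paper makes: because resources are consumed only while needed, many routers, switches, caches and middle-boxes can share one provisioned pool per location, which simplifies planning and growth.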

We will continue to follow SDN/NFV developments and deployments, particularly related to carriers such as AT&T, NTT, Verizon, Deutsche Telekom, Orange, etc.  Stay tuned…

Comments on “AT&T Outlines SDN/NFV Focus Areas for Domain 2.0 Initiative”

  1. Alan, thanks for the excellent article and digging to get the rest of the story from some high-level comments made at a conference.

    What I find particularly intriguing is the comment in your article about the 4,600 central offices and the power to create a cloud-based network around those points of presence.

    That could be a huge competitive advantage as the web goes local; whether it is off-site hosting of servers or elimination of servers, AT&T already has buildings with conditioned power, tight security procedures and lots of space (thanks to the collapse of switches from rows and rows of equipment to a single rack).

    I am seeing many independent network operators providing these local cloud services. As the trusted local provider, they have a huge advantage over the competition. It seems like AT&T could leverage their trusted infrastructure in this way as well.

  2. Great article! First time I’ve seen any real substance to how AT&T would realize their Domain 2.0 network transformation. As Ken points out, making AT&T’s 4,600 COs into a virtual network cloud will be a real challenge!

    1. Anand and Ken, Thanks for your comments. We are discussing the implications of this article on the IEEE Member email list (which I created in 2006 and still moderate/administer).

      One member said that even before AT&T declined to mention it, OpenFlow seemed to have lost steam; it will be one of many protocols used for Control/Data plane interfaces between new network elements. But that creates a huge vendor interoperability problem that no one else is talking about. Or maybe not?
      At ONS earlier this month, Guido Appenzeller of Big Switch told me that there’s a capability for the ODM made and Linux OS based “packet forwarding engines/white boxes” to request a download of the “southbound” protocol (Control plane to data plane) at initialization time. That would imply no standard is actually needed. Rob Sherwood of Big Switch wrote in an email:
      “With bare metal switches (AKA “white boxes”), ODMs allow companies like Big Switch and others to program their boxes directly (effectively acting as a file server), that is, compile and run raw machine code. This does not imply that bare metal switches are x86 — some are and some are not. But it does require that we have detailed knowledge of the device beforehand and can create a binary image that matches the CPU architecture of the box (e.g., PPC, ARM, or x86). Big Switch has recently open sourced part of its bare metal switch operating system in the “Open Network Linux” project; you can download the code at github.com/opennetworklinux/ONL” Continued at the IEEE Member email discussion group…
