NTT Com Leads all Network Providers in Deployment of SDN/OpenFlow; NFV Coming Soon

Introduction:

Yukio Ito*, Senior Vice President for Service Infrastructure at NTT Communications

While AT&T has gotten a lot of press for its announced plans to use Software Defined Networking (SDN) to revamp its core network, another large global carrier has been quietly deploying SDN/OpenFlow for almost two years and soon plans to introduce Network Function Virtualization (NFV) into its WAN.

NTT Communications (NTT-Com) is using an “SDN overlay” to connect 12 of its cloud data centers (including ones in China and Germany scheduled for launch this year) located on three different continents. This summer, the global network operator plans to deploy NFV in its WAN, based on virtualization technology from its Virtela acquisition last year.

ONS Presentation and Interview:

At a March 4, 2014 Open Network Summit (ONS) plenary session, Yukio Ito*, Senior Vice President for Service Infrastructure at NTT Communications, described NTT-Com’s use of SDN to reduce management complexity, capex, and opex, while cutting time to market for new customers and services.

The SDN overlay inter-connects the data centers used in NTT-Com’s “Enterprise Cloud.”

Diagram of how NTT Com is helping customer Yamaha Motor reduce ICT costs via cloud migration.

Started in June 2012, it was the first private cloud in the world to adopt virtualized network technology. Enterprise Cloud became available on a global basis in February 2013. In July 2013, NTT-Com launched the world’s first SDN-based cloud migration service, On-premises Connection. The service facilitates smooth, flexible transitions to the cloud by connecting customer on-premises systems with NTT-Com’s Enterprise Cloud via an IP-MPLS VPN. Changes in the interconnected cloud data centers trigger corresponding changes in that VPN, which connects NTT-Com’s enterprise customers to cloud resident data centers.

NTT-Com’s Enterprise Cloud currently uses SDN/OpenFlow within and between 10 cloud resident data centers in 8 countries, and will launch two additional locations (Germany and China) in 2014. The company’s worldwide infrastructure now reaches 196 countries/regions.

NTT-Com chose SDN for faster network provisioning and configuration than manual/semi-automated proprietary systems provided. “In our enterprise cloud, we eliminated cost structures and human error due to manual processes,” Ito-san said.  The OpenFlow protocol has proved useful in helping customers configure VPNs, according to Mr. Ito. “It might just be a small part of the whole network (5 to 10%), but it is an important step in making our network more efficient,” he added.

SDN technology enables NTT-Com’s customers to make changes promptly and flexibly, such as adjusting bandwidth to transfer large data sets during off-peak hours. On-demand use helps minimize the cost of cloud migration because payment for the service, including gateway equipment, is on a per-day basis.

Automated tools are another benefit made possible by SDN and can be leveraged by both NTT-Com and its customers. One example is letting a customer running a data backup storage service crank up its bandwidth for the duration of a backup and throttle back down once the backup is complete and the higher bandwidth is no longer needed. Furthermore, SDN also allows customers to retain their existing IP addresses when migrating from their own data centers to NTT-Com’s clouds.
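
The kind of automation involved is easy to sketch. Below is a minimal Python illustration of a customer-side script driving a controller’s northbound REST interface to raise bandwidth for a backup window and then throttle it back. The endpoint, resource path and payload fields are this author’s hypothetical examples, not NTT-Com’s actual API.

    import requests

    CONTROLLER = "https://sdn-controller.example.com"  # hypothetical northbound endpoint
    VPN_ID = "customer-42-backup-vpn"                  # hypothetical VPN identifier

    def set_bandwidth(mbps: int) -> None:
        """Ask the SDN controller to re-provision the VPN's committed bandwidth."""
        resp = requests.put(
            f"{CONTROLLER}/api/v1/vpns/{VPN_ID}/bandwidth",
            json={"committed_rate_mbps": mbps},
            timeout=10,
        )
        resp.raise_for_status()

    def run_backup() -> None:
        """Placeholder for the customer's data backup job."""
        pass

    set_bandwidth(1000)  # crank up to 1 Gbps for the backup window
    run_backup()
    set_bandwidth(100)   # throttle back down once the backup completes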

In addition to faster provisioning/reconfiguration and CAPEX and OPEX savings, NTT-Com’s SDN deployment allows the carrier to partner with multiple vendors for networking, avoid redundant deployment, simplify system cooperation, and shorten time-to-market, Ito-san said. NTT-Com is currently using SDN Controllers (with OpenFlow and BGP protocols) and Data Forwarding (AKA Packet Forwarding) equipment made by NEC Corp.

The global carrier plans to use SDN throughout its WAN. A new SDN Controller platform with an open API is under study. “The SDN Controller will look over the entire network, including packet transport and optical networks. It will orchestrate end-to-end connectivity,” Ito-san said. The SDN-WAN migration will involve several steps, including interconnection with various other networks and equipment that are purpose-built to deliver specific services (e.g. CDN, VNO/MVNO, VoIP, VPN, public Internet).

NTT-Com plans to extend SDN to control its entire WAN, including Cloud as depicted in the illustration

NFV Deployment Planned:

NTT Com is further enhancing its network and cloud services with SDN-related technology, such as NFV and overlay networks. In the very near future, the company is looking to deploy NFV to improve network efficiency and utilization, using technology from Virtela, which was acquired in October 2013.

The acquisition of cloud-based network services provider Virtela has enhanced NTT’s portfolio of cloud services and expanded coverage to 196 countries. The carrier plans to add Virtela’s NFV technology to its cloud-based network services this summer to enhance its virtualization capabilities.

“Many of our customers and partners request total ICT solutions. Leveraging NTT Com’s broad service portfolio together with Virtela’s asset-light networking, we will now be able to offer more choices and a single source for all their cloud computing, data networking, security and voice service requirements,” said Virtela President Ron Haigh. “Together, our advanced global infrastructure enables rapid innovation and value for more customers around the world while strengthening our leadership in cloud-based networking services.”

High-value-added network functions, especially network appliances, can be realized effectively with NFV, according to Ito-san, who wrote in an email to this author:

“In the case of NFV, telecom companies such as BT, France Telecom/Orange, Telefonica, etc. are thinking about deploying SDN on their networks combined with NFV. They have an interesting evolution of computer network technologies. In their cloud data centers, they have common x86-based hardware. And meanwhile, they have dedicated hardware special-function networking devices using similar technologies that cost more to maintain and are not uniform. I agree with the purpose of an NFV initiative that helps transform those special-function systems to run on common x86-based hardware.  In the carrier markets, the giants need some kind of differentiation. I feel that they can create their own advantage by adding virtualized network functions. Combined with their existing transport, core router infrastructure and multiple data center locations, they can use NFV to create an advantage against competitors.”

NTT’s ONS Demos (Booth #403):

NTT-Com demonstrated three SDN-like technologies at its ONS booth, which I visited:

  1. The Multiple southbound interface control Platform and Portal system (AMPP), a configurable system architecture that accommodates both OpenFlow switches and command line interface (CLI)-based network devices;
  2. Lagopus Switch, a scalable, high-performance and elastic software-based OpenFlow switch that leverages multi-core CPUs and network I/O to achieve 10 Gbps-level flow processing; and
  3. The Versatile OpenFlow ValiDator (VOLT), a first-of-a-kind system that can validate flow entries and analyze network failures in OpenFlow environments. I found such a simulation tool to be very worthwhile for network operators deploying SDN/OpenFlow. An AT&T representative involved in that company’s SDN migration strategy also spoke highly of this tool.
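
At its simplest, the flow-entry validation that VOLT performs means checking a flow table for rules that can match the same packet at the same priority yet take different actions, an ambiguity a switch may resolve unpredictably. The Python sketch below is this author’s much-simplified illustration of that single check; VOLT itself models full OpenFlow semantics, masks and failure scenarios.

    def fields_overlap(match_a: dict, match_b: dict) -> bool:
        """Two rules can both hit a packet only if every field they both
        specify has the same value (absent fields act as wildcards)."""
        shared = set(match_a) & set(match_b)
        return all(match_a[f] == match_b[f] for f in shared)

    def find_conflicts(flow_table: list) -> list:
        """Flag same-priority rules with overlapping matches but different actions."""
        conflicts = []
        for i, a in enumerate(flow_table):
            for b in flow_table[i + 1:]:
                if (a["priority"] == b["priority"]
                        and fields_overlap(a["match"], b["match"])
                        and a["actions"] != b["actions"]):
                    conflicts.append((a, b))
        return conflicts

    table = [
        {"priority": 10, "match": {"in_port": 1}, "actions": ["output:2"]},
        {"priority": 10, "match": {"eth_type": 0x0800}, "actions": ["output:3"]},
    ]
    # Both rules can match an IP packet arriving on port 1, so this reports a conflict.
    print(find_conflicts(table))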

NEC, NTT, NTT Com, Fujitsu, Hitachi develop SDN technologies under the ‘Open Innovation Over Network Platforms’ (O3 Project):

During his ONS keynote, Mr. Ito described the mission of the O3 Project as “integrated design, operations and management.” The O3 Project is the world’s first R&D project that seeks to make a variety of wide area network (WAN) elements compatible with SDN, including platforms for comprehensively integrating and managing multiple varieties of WAN infrastructure and applications. The project aims to achieve wide area SDN that will enable telecommunications carriers to reduce the time to design, construct and change networks by approximately 90% compared to conventional methods. This will enable service providers to dramatically reduce the time needed to establish and withdraw services. In the future, enterprises will be able to enjoy such services simply by installing a specialized application, for example a big data application, 8K HD video broadcasting or a global enterprise intranet, while an optimum network for the service is provisioned promptly.

The O3 Project was launched in June 2013, based on research commissioned by the Japan Ministry of Internal Affairs and Communications’ Research and Development of Network Virtualization Technology, and has been promoted jointly by the five companies. The five partners said the project defined unified expressions of network information and built a database for handling them, allowing network resources in lower layers such as optical networks to be handled at upper layers such as packet transport networks. This enables the provision of software that allows operation management and control of different types of networks based on common items. These technologies aim to enable telecoms operators to provide virtual networks that combine optical, packet, wireless and other features.

NTT-Com, NEC Corporation and IIGA Co. have jointly established the Okinawa Open Laboratory to develop SDN and cloud computing technologies.  The laboratory, which opened in May 2013, has invited engineers from private companies and academic organizations in Japan and other countries to work at the facility on the development of SDN and cloud-computing technologies and verification for commercial use.  Study results will be distributed widely to the public. Meanwhile, Ito-san invited all ONS attendees to visit that lab if they travel to Japan. That was a very gracious gesture, indeed!


Summary and Conclusion:

“NTT-Com is already providing SDN/OpenFlow-based services, but that is not where our efforts will end. We will continue to work on our development of an ideal SDN architecture and OpenFlow/SDN controller to offer unique and differentiated services with quick delivery. Examples of these services include: cloud migration, cloud-network automatic interconnection, virtualized network overlay function, NFV, and SDN applying to WAN,” said Mr. Ito. “Moreover, leveraging our position as a leader in SDN, NTT Com aims to spread the benefits of the technology through many communities,” he added.

Addendum:  Arcstar Universal One

NTT-Com this month is planning to launch its Arcstar Universal One Virtual Option service, which uses SDN virtual technology to create and control overlay networks via existing corporate networks or the Internet. Arcstar Universal One initially will be available in 21 countries including the U.S., Japan, Singapore, the U.K., Hong Kong, Germany, and Australia. The number of countries served will eventually expand to 30. NTT-Com says it is the first company to offer such a service.

Arcstar Universal One Virtual Option clients can create flexible, secure, low-cost, on-demand networks simply by installing an app on a PC, smart phone or similar device, or by using an adapter. Integrated management and operation of newly created virtual networks will be possible using the NTT-Com Business Portal, which greatly reduces the time to add or change network configurations. Studies from NTT-Com show clients can expect to reduce costs by up to 60% and shorten the configuration period by up to 80% compared to conventional methods.


*Yukio Ito is a board member of the Open Networking Foundation and Senior Vice President of Service Infrastructure at NTT Communications Corporation (NTT-Com) in Tokyo, a subsidiary of NTT, one of the largest telecommunications companies in the world.

2013 TiECon- Part 2: Software Defined Infrastructure Presentations

Introduction:

Software Defined Infrastructure (SDI) applies to compute, storage and the network within a data center and in the cloud.  This market segment is experiencing tremendous growth and innovation.  It is facilitating increased agility, flexibility and operational cost savings for enterprises and service providers.  The first step in SDI was compute server virtualization and that’s now mainstream.  Network and Storage virtualization are the current target areas.

While Software Defined Networking (SDN) is the new hot topic, that term is being used as an umbrella by networking vendors and service providers. The only “standardized” version of SDN is coming out of the Open Networking Foundation (ONF is NOT a standards body). It is based on centralized control and management, with a strict separation of Control and Data planes using the OpenFlow protocol (the “Southbound API”) to communicate between them. Network equipment vendors and Service Providers claiming they are ‘SDN Compatible’ have some level of programmable interfaces on their network equipment, but are usually NOT compliant with the ONF architecture and OpenFlow protocol. HP products are an exception; they do seem to be compatible with the ONF architecture and OpenFlow specification (see AM Keynote below).
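
To make that control/data-plane split concrete, here is a minimal sketch of an ONF-style controller application, written against the open-source Ryu framework (this author’s illustration, not any vendor’s product code). When any OpenFlow 1.3 switch connects, it installs a lowest-priority “table-miss” rule that punts unmatched packets to the controller, where the centralized control logic would take over.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class TableMissInstaller(app_manager.RyuApp):
        """Centralized control plane: pushes forwarding state down to switches."""
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def on_switch_connect(self, ev):
            dp = ev.msg.datapath
            ofp, parser = dp.ofproto, dp.ofproto_parser
            # Match everything; send unmatched packets to the controller.
            match = parser.OFPMatch()
            actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER, ofp.OFPCML_NO_BUFFER)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                          match=match, instructions=inst))

Run under ryu-manager against any OpenFlow 1.3 switch (e.g. Open vSwitch), the switch holds no forwarding logic of its own until the controller programs it, which is precisely the separation the ONF architecture mandates.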

This article summarizes the morning keynote and invited presentations at 2013 TiECon. The third article in this series will cover the afternoon SDI keynote and panel sessions. Please refer to the TiECon SDI Track Agenda: http://tiecon.org/sdi for program details.

AM Keynote: Prepare for Software Defined Networking by Dave Larson of HP

HP is a leader in deploying SDN/OpenFlow switches, with a claim of “over 40 SDN switches and 20M OpenFlow-enabled ports shipped.”

In the context of SDN, the company views the network as a single logical fabric with a vendor-specific “Northbound API” (from the Control Plane module to Application entities) enabling applications to program the underlying network. Those applications communicate with HP’s Virtual Applications Network SDN Controller, which “delivers complete agility; enables cloud service centric management and orchestration through the Management layer,” according to Mr. Larson.

A fact sheet on this key SDN product is at: http://www.hp.com/hpinfo/newsroom/press_kits/2012/convergedcloud2012/FS_VAN.pdf

Image of SDN architecture courtesy of HP. Note: the original text associated with the Infrastructure block said, “29 Switches – over 15 million ports.” This was replaced with the text, “HP Switches with Open Flow to/from SDN Controller.”
Base Image Courtesy of HP

HP’s SDN architecture is illustrated in the figure above.

Four examples of SDN applications using HP SDN products were briefly described by Dave Larson:

1.  Virtual Cloud Network - enables scalable network automation for public cloud service providers. Permits an enterprise to securely connect to the cloud and apply its own ‘identity’ to its cloud environment.

2.  Sentinel Security (developed with HBO) - provides automated, real-time network security and threat detection in enterprise and cloud networks. Deployed in Australian public schools.

3.  Load Balancing (developed with CERN researchers) - traffic orchestration using SDN. The goal is to improve network utilization in a high-performance computing environment.

4.  Unified Communications & Computing (for Lync) - automated policy for business applications running over an enterprise campus-wide network. This application provides simplified policy deployment, dynamic prioritization, and an enhanced user experience.

HP’s SDN vision is to provide end-to-end solutions for campus and branch offices, WANs, multi-tenant data centers and cloud.  For the WAN,  SDN capabilities include: traffic engineering, improved quality of user experience, service automation, and quick provisioning of dynamic VPN services.

The following SDN time-line was presented by Mr. Larson:

  • 1H14:  Deploy SDN controller, Sentinel and Virtual Cloud Network apps.
  • 2015:  Deploy new SDN applications using “RESTful APIs” (Note: there is no standard for the Northbound API, so HP is suggesting the use of Representational State Transfer (REST) web services and APIs.)
  • 2016: Deploy SDN enterprise wide

Introduction to SDI:  Guru Parulkar, PhD- Stanford & Open Network Research Center

Guru is one of the few SDN speakers who clearly tells you what he believes. There is no hype, dancing around the issue, or talking out of both sides of his mouth. Guru says that (pure) SDN is the best opportunity to come around in the last 20 years for the networking industry. Here’s why: we need a new network infrastructure to accommodate the current computing environment, which has changed drastically in the last few years.

Compute servers are now mostly virtualized and, with the huge move to cloud computing and storage, it is extremely difficult to support a virtual network infrastructure based on existing network equipment (which is closed, vertically integrated, complex, and bloated). SDN is that new network infrastructure, according to Guru.

SDN will bring a simpler data forwarding plane.  It will permit application builders to control functions such as traffic engineering, routing algorithms for path selection, and mobility policies. The resulting benefits to service providers, data center operators and enterprises include: reduction of CAPEX and OPEX, capability to deploy infrastructure on-demand, and enable innovation at many levels.

A diagram depicting software-based infrastructure.

The figure to the right illustrates SDI used to control a cloud service provider’s data center (DC) and core network. Cloud Orchestration software interacts with both cloud resident DC Orchestration and SDN Control (of the core network) to deliver cloud services to customers. Such a core network would be purpose-built for this task and is NOT the public Internet. The cloud resident DC network uses SDN control over the physical DC network, which interconnects servers and virtual machines.

…………………………………………………………………………………….

A multi-tenant Cloud Data Center with SDN Virtualization, shown below, was presented by Guru.  Each tenant has its own set of higher layer functions that reside above the Network OS.

Image of a cloud data center with SDN virtualization.

Guru is adamant that SDN overlay models will not yield the benefits of pure SDN and therefore should NOT be pursued. He emphatically stated, “Everything should be redone to make use of the new SDN/SDI infrastructure. Warning to enterprises: Don’t try to maintain your legacy network.”

Guru concluded by saying that “SDI represents a major disruption, one that comes along only once in 20 years. It’s an opportunity for innovation and entrepreneurship. SDI will be developed across (protocol) layers, technologies and domains. The IT industry is now just at the beginning of a huge change brought about by SDI.” And that is as clear a message as one can give!


SDN Use Case:   Albert Greenberg -Microsoft Cloud Services

Albert leads cloud networking services for Windows Azure (Microsoft’s cloud IaaS and PaaS offering). He said that start-ups could benefit from the huge scale and elasticity of Azure, rather than using in-house computing facilities or other public cloud offerings.

“The pace of data center innovation and growth is amazing.  We need software control across the protocol stack to manage the ongoing changes,”  he said. The Northbound API (from the control plane to application or management plane) is critically important for IT resource management.  The physical network used by Azure (internally) is flatter, higher speed (10G) and optimized for cloud services.  Consistent performance is realized and outages are largely prevented as a result.

The increased amount of storage in the data center puts greater pressure on the network, as there is much more data now to exchange and deliver to customers.  “Software is the only solution to manage growth and scale of cloud computing.”  As a result, Albert believes there’ll be plenty of innovation opportunities for SDI.  He would like to see greater progress on some fronts, especially specifications for federated control and IP address management.

While Greenberg said he likes the OpenFlow concept and simplicity, Microsoft has instead used its own version of SDN (it’s actually network virtualization) in Windows Azure. That implementation is based on home-grown “SDN” controllers and a network overlay using NVGRE (Network Virtualization using Generic Routing Encapsulation). However, Microsoft plans to participate in the OpenDaylight consortium (http://www.opendaylight.org/), a vendor-driven, Linux Foundation open source software project for SDN/OpenFlow platforms.
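
The NVGRE encapsulation itself is simple enough to show in a few lines. The sketch below (this author’s illustration, not Microsoft’s code) packs the 8-byte NVGRE header per the NVGRE specification: a GRE header with the Key bit set, protocol type 0x6558 (Transparent Ethernet Bridging), and a key field carrying the 24-bit Virtual Subnet ID that keeps tenants’ overlay traffic separate.

    import struct

    def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
        """Pack the 8-byte NVGRE header: GRE flags with only the Key bit set,
        protocol type 0x6558, then a 24-bit VSID plus an 8-bit FlowID."""
        assert 0 <= vsid < 2**24 and 0 <= flow_id < 2**8
        flags_version = 0x2000   # K (Key present) bit; C, S and version bits all zero
        proto = 0x6558           # Transparent Ethernet Bridging
        key = (vsid << 8) | flow_id
        return struct.pack("!HHI", flags_version, proto, key)

    # Tenant subnet 0x00ABCD; the header rides inside an outer IP packet (protocol 47).
    print(nvgre_header(vsid=0x00ABCD, flow_id=7).hex())  # -> 2000655800abcd07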


Lightning Round SDN (start-up) winners -I:  

One Convergence,  Pertino, Plexxi

http://tiecon.org/content/sdi-lightning-round-winners-i

Lightning Round SDN Winners – II

Elastic Box, Cloud Velocity, Lyatiss

http://tiecon.org/content/sdi-lightning-round-winners-ii


Closing Comment:

One of the great things about the TiECON SDI sessions was that there were no sales pitches, vendor demos, or misleading claims of “SDN support.” The depth of content, the quality of speakers, and the commercial-free, clear and candid remarks by both speakers and panelists made for one of the best conferences on this topic in the last couple of years. We commend the TiECon team that organized the SDI Track sessions!


Next Up:  Stay tuned for 2013 TiECon Part 3 in this series which will feature the PM keynote on “The coming wave of Data Center Disruption brought about by SDI.”  We’ll also summarize the key points made during several SDI panel sessions and touch on Service Provider views of SDN (Ericsson presenting results of their joint SDN project with Telstra in Australia).

Viodi View – 05/03/12

A video travelogue of the telecom industry and its related applications is one way to think about the Viodi View or, at least ViodiTV. The intent of the video interviews and stories from around the country is to augment what is in the news or what should be in the news. The nuggets we get from folks in our interviews often have wisdom that surpasses the instant news cycle of the Internet and social media. It is an honor to interview so many subject matter experts in so many of the key technology, business and regulatory areas that impact broadband and its deployment.


Breaking Down Silos and Unifying The Experience

Ken Pyle interviews Parks Associates' Melissa Duchin at the Smart Energy Summit.
A theme of the Parks Associates’ Smart Energy Summit was the cloud as central to smart energy realities. The inherent nature of cloud computing, coupled with broadband, will break down the silos between different types of services. Melissa Duchin, Research Analyst for Parks Associates, discusses this and other take-aways from the Smart Energy Summit in the above video interview. Companies originating from these different silos, whether retail, energy, security or service provider, will end up competing and/or working together, as the cloud breaks down the barriers. This conversation is sure to continue at the 17th annual CONNECTIONS™ at CTIA 2013, which ViodiTV will be covering for Parks Associates. Click here to view.


One Size Doesn’t Fit All

Ken Pyle interviews Brent Christensen at the MTA 2013 Annual Convention.
“One of the things the FCC Transformation Order has failed to address is the value piece of telephone service; one size fits all is not the way it is,” said MTA president and CEO Brent Christensen. In this interview, Christensen reflects on the interactive video conference with FCC Commissioner Mignon Clyburn (who was just named interim FCC Chair while the Tom Wheeler nomination works its way through the process) that he co-moderated with NTCA’s Shirley Bloomfield.

He indicates that it is important for regulators and lawmakers to visit rural areas and that the MTA is actively serving as a bridge to make those visits and connections happen. He also touches upon the role of Federal versus State regulation, as what is regulated changes from application (e.g. telephone) to platform (broadband). Click here to view.


“Service Provider SDN” Network Virtualization and the ETSI NFV WG by Alan Weissberger

This article examines network operator motivation for “Service Provider SDN,” and then raises the question of whether the solution might be some form of Network Virtualization instead of the ONF SDN/OpenFlow standard(s). We reference VMware’s version of Network Virtualization, which is currently available and will soon be enhanced. The ETSI Network Functions Virtualization (NFV) activity is described and properly positioned, especially in light of misleading vendor support claims. Click here to read more.


Open Network Foundation & Other Organizations; ONF-Optical Transport WG; Ciena & SDN by Alan Weissberger

SDN Reference Architecture is depicted in this diagram.
The above article examines the motivation and current work of the ETSI NFV WG and questions whether service providers might prefer to base “software control” of their networks on that standards initiative, rather than SDN/OpenFlow from the ONF.

Dan Pitt, ONF Executive Director (and a colleague of this author for 30 years) acknowledged the SDN carrier issue in an email: “During the Open Network Summit, there was a lot of interest and discussion around the impact of SDN on carriers, service providers and the telecommunications industry. At ONF, we are excited to continue our close work with the ETSI network-operator-led Network Functions Virtualization (NFV) Industry Specification Group (ISG).” Click here to read more.


Looking Back at an Early Digital Divide

Ken Pyle interviews former ACA Board member, Tom Seale.
Tom Seale, former CFO of Buford Media and early board member of ACA, talks about the formation of the ACA, the motivation for and the challenges of organizing independent cable operators into an association. He points out that there was a digital divide between the smaller and larger operators at that time. Now in the banking industry and out of the cable industry, Seale brings an interesting historical perspective to the ACA and the industry. Click here to view.


Some Tweets and Short Thoughts:

  • More broadband to the farm is needed, based on an implication from yesterday’s Wall Street Journal article on using drones as precise data collection devices, “Eventually, farmers will likely depend on third parties to analyze the data and images received by drones.” The implication is cloud and most likely significant amounts of data transmission from farm to data-center (particularly if they are transmitting infrared images to third-party computers for number-crunching).
  • Stumbled upon this article that was written a couple years ago and can’t remember ever seeing it when it was published. It is an accurate overview of what we did to help bring interactivity to a conference on interactive TV.
  • Is there a way to bridge Minecraft between Xbox & PC? Seems like a useful service (Bridging PS3 & Xbox another $1M idea)

The Korner – 3 Tips for Building Community Broadband – Part 2

Ken Pyle interviews Dr. Tim Nulty at the 2012 Broadband Communities Summit.
In part 2 of this two-part interview, Dr. Tim Nulty suggests that those wishing to replicate ecFiber.net‘s rural broadband network build-out should follow these three basic steps:

  1. Develop a lean business plan and a lean network.
  2. Organize an institution that can implement the plan and maintain the network. He describes an interesting public-private partnership to make this happen.
  3. Finance the project, which is the most difficult step. Nulty indicates that the capital for building the networks is coming from both inside and outside the community, in the form of tax-exempt loans made by individuals who want both a return and fiber to their community.

In this interview, Nulty describes an organizational structure that is unique and one that separates politics from operations.

For additional alternative ways to think about financing an FTTH network, check out this 2008 white paper, co-authored by a Google policy person and pointed out this week by someone in a LinkedIn group. The condo model described in that paper, where a neighborhood or a group of farmers owns the fibers, is especially interesting for those looking at ways to jump start a fiber network initiative.

2013 Cloud Connect Part III: Cloud as IT Disrupter; SDN as a New Virtual Network Infrastructure

Introduction:

One consistent theme during Cloud Connect 2013 was the cloud as a disrupter of IT organizations. During the Cloud Executive Summit workshop on April 2nd, Avery Lyford of LEAP Commerce said that there were three huge areas of disruption: the mobile cloud, Big Data (analytics) and Software Defined Networking (SDN). Each of these areas was then explored as a disrupter by three excellent speakers. We were especially impressed with the presentation by Andre Kindness of Forrester Research, who candidly stated that SDN is an evolution, not a revolution, and that it will take 5 to 7 years for the technology to mature.

PLUMgrid’s SDN presentation on April 5th was also very enlightening. It’s described later in this Cloud Connect wrap-up article.

While the majority of Cloud Connect 2013 sessions focused on building private or hybrid clouds, McKinsey & Company consultants Will Forrest and Kara Sprague proposed a very different and extremely disruptive scenario for cloud adoption. Like IDC, McKinsey sees the future of IT (“New IT”) in public cloud computing. But McKinsey goes a lot further. The prestigious market consulting firm thinks public cloud operations may be managed by a separate IT organization, created specifically to reside outside of the existing “Old IT” shop.

Leading-Edge Cloud Research and Industry Analyst View from McKinsey & Company:

“Current IT, as we know it, is no longer a game-changer,” said Mr. Forrest of McKinsey. In fact, “spending on IT is not a differentiator anymore and it doesn’t correlate with business success,” he added. Much of the available improvement made possible by traditional IT has already been achieved. And IT use cases have reached diminishing marginal returns: significant increases in productivity or financial savings are unlikely for most. Probably the greatest contribution IT can make today is to trim budgets to the minimum levels within a given market segment. According to McKinsey, the highest IT priority for most companies should be to move IT spend to the industry average (rather than overspend on IT).

As a result, thought leaders in the technology world are advocating for a rethink of enterprise and corporate IT. Cloud is seen as a key lever to decrease IT costs and reach the industry average. McKinsey’s emphasis on using the cloud for cost reduction is in sharp contrast to the results of Everest Group’s Enterprise Cloud Adoption survey, which found that flexibility and agility were much more important (see the Cloud Connect Part II article).

McKinsey sees significant disruption in many business models.  They say that CEOs recognize that future revenue growth will come from new business models.  Furthermore, economic conditions are changing, demanding business model transformation.

“New IT” is rising to fill the place of “Current IT,” according to McKinsey.  The “New IT” drives business model transformation, team and corporate productivity growth and digital-only products.

Examples of companies pursuing the “New IT” are:  Amazon transforming e-retail by driving customer preference and share of wallet gains (Amazon is the market leader among online retailers in average order size, driven by “push” sales), Deloitte teams using Yammer to collaborate and Google offering digital products (AdWords and AdSense deliver data-driven, custom advertisements, resulting in $36B of annual revenues for Google).

Image suggesting that CEOs are hoping to see improvements due to the cloud beyond just those received by having more efficient IT.
Image courtesy of McKinsey & Company

CEOs are hoping to see improvements from cloud beyond current IT cost reductions, such as increased business flexibility and the ability for IT to scale up (or shrink) to meet business needs. These expectations for cloud computing are shown in the adjacent figure.

CEOs really don’t believe their current IT organizations can implement the “New IT.” They’re suggesting public cloud computing for the “New IT” infrastructure and may create a separate, but parallel IT organization to manage public cloud operations.

In summary, Forrest said that “Old IT” expects cloud computing to achieve incremental cost reductions within the context of established business practices, while CEOs are looking at public cloud to create new business offerings that are flexible, agile, and scalable.

McKinsey’s Kara Sprague stated that a survey will soon be launched to determine the effect of cloud computing on SMB customers. “Hardware OEMs are increasingly turning to service partners to access the customers, at the same time that independent software vendors are using the SaaS model to go to the customer directly. This is bad news for VARs, integrators and distributors, many of whom are trying to either become cloud service providers themselves or move into a cloud brokerage model,” said Ms. Sprague.


In a panel titled, “Disruptive Tools and Technologies,” Scott Bils of Everest Group and Randy Bias, CTO of Cloudscaling, detailed a laundry list of disruptions brought on by cloud computing. Those included:

  • Public cloud is creating a “shadow IT” organization focused on achieving business agility, flexibility and dramatic time-to-market compression.  “Business users stand to gain significantly by evaluating public cloud options for ‘spiky’ workloads, such as development/test environments, or for non mission-critical workloads,” said Mr. Bils.
  • Open Source Software is causing a redesign of cloud resident data centers (e.g. using OpenStack or CloudStack); it enables an organization to move faster, reduces vendor lock-in and risk, and eliminates licensing fees. But it dramatically increases reliance upon the community maintaining or improving the open source code.
  • Innovation in Hardware Design, e.g. ARM processors and solid state drives in cloud resident servers, Taiwanese Original Design Manufacturers (ODMs) selling direct to IT enterprise customers.
  • Building a private or hybrid cloud requires building a “net new infrastructure,” according to Mr. Bias.  It should be able to scale up or down, based on workload demand.
  • Software Defined Networking (SDN) is a huge potential disruptor, especially in data center network architecture.  However there are several important questions that have not been answered:  What is it really?  Why is it important? And is it ready for prime time?
  • It was agreed that existing network infrastructure (e.g. IP-MPLS VPNs or private line) “is not going to disappear,” especially for cloud access.  That’s due to its ability to achieve: QoS, bandwidth guarantees, low latency, multi-cast, stability and connectivity.  Therefore, SDN will need to work with that existing network architecture, perhaps as an overlay or adjunct.

In a session titled, “SDN is Here to Stay - Now What?” PLUMgrid CTO Pere Monclus talked about SDN as a new virtual network infrastructure. “As a way of simplifying operations and enabling a solution view of the networking space, SDN brings the additional value needed in cloud and datacenter environments to complement current hardware trends,” he said. PLUMgrid believes that SDN, rather than traditional switches and routers, is the glue that will hold the new network together.

SDN is the layer that decouples virtual data centers from physical data centers. It must be extensible, in both the data and control planes, as a platform to deliver better network functionality, including multi-tenancy, self service, virtual topologies, faster provisioning, and “Network as a Service.” When deployed, SDN will result in operational simplicity, capital efficiency, and an elastic, on-demand, self service network. However, there are many real problems to be solved before that vision can be realized.

Image depicting architecture gridlock to platform ecosystem.
Image Courtesy of PLUMgrid

The functional SDN block diagram on the right was said to transform the current network architecture “gridlock” to a “SDN Platform ecosystem,” while facilitating innovation in both the control and data planes.

……………………………………………………………………………………………………………

On that note, we conclude our three-part coverage of the information-packed Cloud Connect 2013 conference. Next week we’ll be attending the Open Networking Summit, the happening of the year for SDN techies and aficionados (this author is NOT one of them). We will be reporting on what we learn to Viodi View readers.

Till next time…….

References:

http://www.cio.com/article/731525/What_Cloud_Computing_Means_For_the_Future_of_IT_Organizations?page=1&taxonomyId=3024

http://www.crn.com/news/cloud/240152247/cloud-connect-the-cloud-threatens-the-smb-channel.htm

Cloud Network Deployment Requirements; Monitoring, Optimization & Enterprise Cloud Adoption Survey – 2013 Cloud Connect Part II

Introduction:

In this second article on the excellent Cloud Connect 2013 conference, we look at selected sessions that should be of keen interest to IT managers and enterprise cloud customers.  Key findings are presented in a concise form to aid readability.  We then report the results of the Enterprise Cloud Adoption survey by Everest Group.

The third article in this series will summarize leading-edge Cloud Research and Analysis by McKinsey & Company and also look at disruptive aspects of cloud computing, as discussed in several Cloud Connect 2013 panel sessions.

Effective Cloud Network Deployments, Monitoring & Optimization:

In a presentation, “Effective Networking Strategies for Your Cloud Deployment,” Eric Hanselman of the 451 Research Group suggested that cloud users should first establish networking requirements in terms of the metrics important to their organization. These may include: performance requirements, data flows, data paths, application segmentation, user-to-application paths and direction, and intra-application volume and performance.

Network options involve varying levels of sophistication, including L2 (Data Center switching and Carrier Ethernet bridging) and L3 (IP VPN routing). External connections should be based on requirements for aggregate bandwidth (throughput), latency, and L2/L3 path control. Access control lists, firewall placement, encryption, VPN management, prioritization, and optimization (compression, caching and reformatting) all need to be considered as well.

Amazon’s Virtual Private Cloud (VPC) offers more structure, according to Eric. It includes: network segmentation, routing tables (with multiple interfaces per instance), and virtual appliance support (with firewalls, intrusion detection systems and APM/WAN optimizers). Rackspace and Google cloud networking offerings were also profiled (but not Savvis/CenturyLink, which offers many networking options including a customizable IP-MPLS VPN).

Sharon Wagner of Israel-based Cloudyn provided lessons learned from AWS cloud deployments in his talk titled “Best Practices in Cloud Optimization.” Some key findings:

  • 58% of IT shops run unmanaged cloud applications
  • Cloud applications change every second week
  • Nearly 13% of the configuration is changed in every release (of cloud application software)
  • Around 80% of the instances are significantly (~12%) under-utilized
  • Customer reports indicate that 15% over-utilization of cloud apps causes a 7% revenue loss

Such dynamic cloud environments result in over-provisioning, wasted resources and budget violations. The solution is exactly what Cloudyn does: it monitors, analyzes, diagnoses, and optimizes cloud deployments. Effective cloud deployments were said to require consistent monitoring and optimization of usage, performance, and cost.
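
To illustrate the kind of monitoring such a tool automates, here is a short boto3 sketch (this author’s illustration, not Cloudyn’s algorithm) that flags running EC2 instances whose week-long average CPU utilization falls below an arbitrary threshold, echoing the ~12% under-utilization figure above.

    from datetime import datetime, timedelta
    import boto3

    ec2 = boto3.client("ec2")
    cloudwatch = boto3.client("cloudwatch")

    UNDER_UTILIZED_CPU = 12.0  # illustrative threshold, in percent

    def avg_cpu_last_week(instance_id: str) -> float:
        """Average CPUUtilization over the past 7 days, in percent."""
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=datetime.utcnow() - timedelta(days=7),
            EndTime=datetime.utcnow(),
            Period=3600,
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        return sum(p["Average"] for p in points) / len(points) if points else 0.0

    # Flag running instances whose average CPU sits below the threshold.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for r in reservations:
        for inst in r["Instances"]:
            cpu = avg_cpu_last_week(inst["InstanceId"])
            if cpu < UNDER_UTILIZED_CPU:
                print(f"{inst['InstanceId']}: avg CPU {cpu:.1f}% -> candidate for downsizing")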

Life-cycle and usefulness to the business should also be evaluated, according to Mr. Wagner. The only metric that matters is the impact on the business, i.e. return on capital investment, and that can only be achieved by making cloud operations more efficient.


Enterprise Cloud Adoption Survey:

Cloud Connect and Everest Group conducted a joint survey in 2012 and 2013 to identify cloud market trends and disconnects.  The objectives of the surveys were to:

  • Identify broad based cloud adoption patterns
  • Identify barriers to adoption
  • Evaluate overall “cloud sentiment” post adoption

“Year over year, we are seeing the emergence of next generation budgets within business units,” said Scott Bils, partner at Everest Group. “Cloud buying power is concentrating in lines of business, not the CIO’s office.”

Business buyers are the primary cloud adoption drivers.  It’s the business stakeholders who are the primary decision-makers for most major enterprise workloads.  They’re primarily concerned about speed, flexibility, scale and reduced time to market. Those are the top cloud adoption drivers.  Surprisingly, cost savings was not a significant adoption factor.

Here’s a summary of the key findings of the first survey, which was completed in October, 2012:

  • Cloud adoption is expanding beyond “low hanging fruit” of SaaS, such as email and custom applications.
  • Buying focus is turning away from horizontal use cases towards more customized business applications.
  • Cloud is seen as an enabler of topline growth, beyond the cost reduction imperative.
  • Perceived “security issues” continue to be the most significant barrier to cloud adoption.
  • While VMware has leading mindshare (for IaaS), many enterprises prefer open source cloud software (e.g. OpenStack).
  • Cloud Service providers need to adapt to new budget centers and sell on business value rather than cost.
  • SaaS modules have been adopted most widely; IaaS adoption is expected to grow fastest in the near future.
  • Overall, buyers’ “cloud sentiment” remains extremely positive, with high expectations for the future.

The focus of the most recent survey (first half of 2013) was on private vs public clouds. Not surprisingly, private cloud is still the overwhelmingly preferred choice by survey respondents!  Enterprises still show a preference for private cloud models for most workload types.  “This increasing preference towards private cloud models within enterprises could potentially be driven by internal ‘IT marketing,’” said Bils.

Note: This author has long believed that security issues and variable performance (depending on aggregate workloads and network loading) have hindered adoption of public clouds for quite some time. That’s despite all the press about Amazon AWS, Rackspace, Google, Microsoft, AT&T and other public cloud providers.  The Metro Ethernet Forum (MEF) is focusing on Carrier Ethernet for delivery of Private Cloud Services, rather than public clouds.

Trends in cloud computing are depicted in this image.
Image Courtesy of Everest Group

The tipping point for enterprise clouds may be here.  Everest Group found that the majority of enterprises now expect migration to some type of cloud delivery model across all major workload types.  This is illustrated in the adjacent figure.

The “Private Cloud Infrastructure Wars” are heating up, according to the Everest Group survey. While VMware is the current leader, a significant proportion of the market professes to be platform-agnostic or prefers open source platforms, such as CloudStack and OpenStack. Please see the figure below for more details. Note that Public Cloud Infrastructure is solely determined by the CSP, e.g. Amazon AWS.

Trends in cloud computing are depicted in this image.
Image Courtesy of Everest Group
Growth in adoption of cloud services is creating two distinct, but overlapping buying groups in the enterprise as shown in the chart below.

Trends in cloud computing are depicted in this image.

Image Courtesy of Everest Group

Further Information from Everest Group:

Readers may download a copy of the 2013 Enterprise Cloud Adoption Survey Summary Report at:  http://everestgrp.com/ccevent

7 Things We Learned at Cloud Connect | Gaining Altitude in the Cloud, by Scott Bils of the Everest Group is at:

http://www.everestgrp.com/2013-04-7-things-we-learned-at-cloud-connect-gaining-altitude-in-the-cloud-10902.html

—————————————————————————————————

Stay tuned for 2013 Cloud Connect Part III! Here’s a preview: McKinsey consultants Will Forrest and Kara Sprague proposed a different, and enormously disruptive, scenario for the ultimate cloud adoption road map. Like IDC, McKinsey sees the future of IT in public cloud computing, which may very likely be managed by a separate IT organization, created specifically to reside outside of the existing one. Randy Bias of Cloudscaling and other panelists detailed a laundry list of disruptions brought on by cloud computing, which we will share with Viodi View readers.

2013 Cloud Connect Part I: Highlights & Mobile Cloud Issues

Introduction:

The four-year-old Cloud Connect conference, sponsored by United Business Media, was held April 2 to 5 in Santa Clara, CA. Having attended all four Cloud Connects, this one was by far the most in-depth and comprehensive treatment of Cloud Computing. At last, instead of defining terms and debating methods of cloud computing, this year’s conference discussed how the cloud is being used now, and how business could leverage the cloud for more effective IT operations. For example, many attendees wanted to know how to make use of a hybrid cloud as they migrate from private to public cloud or look to combine both.

The balance between convenience and security is depicted in this image.
Image Courtesy of Citrix

In this first article of a three (or four) part series on Cloud Connect 2013, we provide what we perceived to be the key takeaways and messages. We also examine how the Mobile Cloud has changed, and will continue to change, business operations. It’s more of a balancing act, with compromises needed between compliance/security and worker freedom/convenience, as shown in the adjacent figure.

Key Themes and Messages:

  • There’s a strong focus on reinventing the data center for cloud computing, using software defined infrastructure, such as virtualized networking and storage as well as software defined networking (SDN).  However, the legacy networking infrastructure from Cloud to Premises is not going away anytime soon.
  • OpenStack is now an acceptable alternative to Amazon Web Services (AWS) for public clouds. There was much discussion on using OpenStack for private cloud implementations as well. OpenStack was initially promoted by Cloud Service Provider (CSP) Rackspace, but is now endorsed by many other CSPs, including HP. There are many new and well-funded OpenStack-based start-ups.
  • Virtual networking and SDN are being added to the growing number of OpenStack capabilities by the OpenStack Foundation (OSF). On April 4th, OSF issued its “Grizzly” release, which contains 230 new features for running production-level cloud computing. Networking has lagged servers when it comes to being managed as a virtual resource and, in most enterprises, is still tied to a set of hardware resources that are hard to modify. Virtual networking and SDN aim to change that by making the network a logical rather than physical part of the IT and cloud infrastructure. OpenStack’s work on SDN “lets software change the network infrastructure for cloud computing,” according to one knowledgeable conference attendee.
  • Amazon’s Virtual Private Cloud (VPC) is now the de facto way of accessing AWS, replacing the public Internet (and in some cases) private lines. VPC lets the cloud user provision a logically isolated section of the AWS Cloud where resources are launched in a virtual network. The customer has complete control over the virtual networking environment, including selection of IP address range, creation of subnets, and configuration of route tables and network gateways (a short provisioning sketch follows this list).
  • Big Data (analytics) and Cloud are a paradigm shift and an architectural change that involves putting data and computing power together as a massive processing unit.  With the explosion in all types of information, businesses need data analytics to be competitive. Organizations need to analyze data from multiple sources and places to gain insights. That data can’t be stored in one place and can even be maintained outside the organization (such as in a private cloud).
  • The reorganization of computing into larger, more demand-responsive cloud-based data centers run by Google, Amazon Web Services, Rackspace and others is part of a shift in business that replaces transaction systems with “systems of interactions,” said Cisco Systems VP of Cloud Computing Lew Tucker.
  • “Analytics becomes business critical” because huge volumes of data will be generated by the Internet of Things (IoT), with billions of devices soon to be connected to the Internet. The billions of connected devices drive a need for cloud storage and cloud analytics.  The creation of big data drives business decision-making and businesses’ need to keep employees in constant collaboration and communication, driving a need for a new style of internal networking: the software-defined network that responds more flexibly to changing conditions, Cisco’s Tucker said.
  • Dimitri Stiliadis, Chief Architect and Co-Founder of Nuage Networks (http://www.nuagenetworks.net/), a new start-up within Alcatel-Lucent, presented “The True Power of Network Virtualization.” Nuage has developed an SDN overlay product for inside and outside the data center. The start-up plans to extend the product to SDN-enabled wide-area networks for the enterprise. Nuage’s Virtualized Services Platform incorporates a controller, virtual routing and switching, and a virtualized services directory. It builds tunnels between virtual machines running in the same server rack or in different racks in the same or different data centers. It works with cloud-management software from OpenStack, CloudStack and VMware. This overlay platform was said to be “a novel, open standards approach that fulfills the full promise of massively scalable network virtualization, enabling seamless interconnection of cloud services with existing enterprise environments.”
  • Mobile Cloud is being used as more workers have mobile computing devices, especially tablets and notebooks. Organizations continue to make use of mobile apps to improve productivity and business processes, according to Citrix; they have deployed over 100 third-party apps, e.g. Citrix Receiver, Adobe Reader, as well as custom-written apps. Packaged, deployable mobile app stores for the enterprise are starting to emerge. (Mobile Cloud is covered in more detail in the next section of this article.)
  • PayPal chief information security officer Michael Barrett stated that cloud computing had changed the stakes involved in the security of computer systems. The cloud can provide the computing power to run an attack to decipher passwords. “Password hacking is now the work for script kiddies,” he warned, as opposed to a challenge for skilled hackers backed by massive compute resources.
  • William Ruh, VP and global technology director at General Electric, said business is moving from an analog way of operating to a digital one which will change nearly every aspect of business.  Civilization is moving from the industrial revolution through the Internet revolution and into what he called “the Industrial Internet.”
  • Machines will be connected to the Internet (IoT) and become intelligent through the software they possess that analyzes the information they’re generating. That will contrast with today’s industrial operations where machines are not intelligent and most of the data they generate “isn’t even stored,” Ruh observed.
  • The shift will, “Foundationally change the way machines are built and the way data is collected on them, petabytes of information,” said Ruh. The information will be fed to the operations staffs at utility power plants and other large industrial installations, who will use it to look for efficiencies that we don’t know about today, he said.
  • Case studies are beginning to emerge from a variety of users. The cloud industry has moved beyond case studies from technology innovators, such as Netflix, to rank-and-file companies that are just getting their first cloud computing systems up-and-running.
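
Here is the provisioning sketch promised in the VPC bullet above: a minimal boto3 example that creates the “logically isolated section” described there, with a customer-chosen address range, one subnet, an Internet gateway and a default route. The CIDR blocks are illustrative, and AWS credentials are assumed to be configured.

    import boto3

    ec2 = boto3.client("ec2")

    # Create a logically isolated VPC with a customer-chosen address range.
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # Carve out a subnet inside it.
    subnet_id = ec2.create_subnet(VpcId=vpc_id,
                                  CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

    # Attach an Internet gateway and give the subnet a default route through it.
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0",
                     GatewayId=igw_id)
    ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)

    print(f"VPC {vpc_id} with subnet {subnet_id} routed via {igw_id}")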

The Mobile Cloud:

Mobile and cloud are combining to change the underlying infrastructure of business. Mobile and cloud combine to change how applications are developed, tested and distributed. Mobile changes what features and user experience exist in applications, while cloud changes where data should be located and how it will be accessed. Security and management will also change as businesses embrace mobile. Applications will be device-aware, location-aware and network/cloud-aware. But they have to be purpose-built, i.e. desktop/workstation apps won’t run on mobile computing platforms, even with 4G access. And because the demand for mobile cloud apps is uncertain, the mobile cloud must be very flexible in scaling up or down to accommodate the actual number of users for all the mobile apps being supported. Going forward, business processes will assume an environment of multiple devices with cloud connectivity and running cloud resident mobile apps.

This graph depicts the number of mobile devices and tablets sold versus PCs.
Image Courtesy of Citrix

Before the end of this year there will be more smart phones than PCs, and in 2015 there will be more tablets than PCs as shown in the illustration to the right.

Mobile work styles are becoming the rule rather than the exception in Enterprise IT and traditional methods of securing data behind VPNs will fall short as employees demand business tools that are as easy to use and frequently updated as the ones they use at home.  Unfortunately, legal and regulatory requirements for securing data are no less stringent than they were before the mobile era.  There are compliance issues with laws such as HIPAA and FINRA that apply to data sync and sharing of information/digital content.

In the future, companies will rebuild and transform business applications to take advantage of contextual data from all connected devices, including location, time of day, presence and device type. Sensors in the latest devices will also provide contextual information such as temperature, humidity, motion, and orientation. Applications based on business-critical data from connected sensors will be used by many industries, with the utility, oil and gas industries leading the way. Transforming business will require businesses to use the cloud and big data processing to turn mobile data into insight in real-time.

In an excellent presentation, Managing Data in the Cloud, Jesse Lipson, Citrix VP of Data Sharing, said: “VPNs are going away. They are clumsy and inconvenient for mobile users.” Other reasons: there’s more IP outside of the firewall, and Mobile Device Management (MDM) and simpler two-factor authentication are combining to alleviate the need for VPN access. Mr. Lipson also sees several new trends as a result of the mobile data tsunami:

  • Active Directory Integration with Single Sign On (e.g. SAML 2.0)
  • Two-factor authentication going away, perhaps replaced by text message authentication
  • Auto Log-In from mobile devices, especially smart phones
  • On premises storage alive and well due to security, compliance, convenience, and ability to access existing data stores
  • “Open-in…” support, enabling content from the application being run to be opened in another application
  • Device control via MDM software deployed on all enterprise owned mobile devices
  • Other mobile devices, especially laptops are getting more attention for security and control

In the end, enterprise control of mobile devices, data and apps is a balancing act between corporate compliance and security on one side and employee convenience and productivity on the other.  Each organization must decide how to choose the tools, methods and procedures needed to ensure that both objectives are met.


Stay tuned for 2013 Cloud Connect Part II, which will summarize several market studies and forecasts related to enterprise cloud computing.

2013 IDC Directions Part II- New Data Center Dynamics and Requirements

Introduction:

In this second article on the IDC Directions 2013 Conference (March 5th in Santa Clara, CA), we cover the two sessions that addressed the changing dynamics of the new Data Center (DC). We examine the many challenges and requirements new DCs must address, especially for geographically distributed sites. Next we look at current trends, projected growth rates and emerging technologies, such as Software Defined Networking (SDN), and the role they might play in the DC of the future. Finally, we look at why converged DC infrastructure will be a huge growth opportunity until at least 2016.

Note: Please see 2013 IDC Directions Part I  for an explanation of the “3rd Platform” and its critical importance to the IT industry.

Mega Datacenters and Content Depots: The New Physics of IT in the 3rd Platform Universe:

Sr. IDC Analyst Rick Villars opened his presentation with a rhetorical question: are datacenters up to the task? Evidently not! IDC found that 84% of DCs had issues with power dissipation, cooling, space, capacity and up-time, which negatively impacted business operations.  The solution was said to be DC expansion, in three different ways:

  1. Global expansion: deploy DCs in new geographical regions to provide business services to more clients in different locations.
  2. Service expansion: offer new services and applications.
  3. Massive increase in scale: DCs need to be able to accommodate more users, different types of applications and much more data to be processed, stored, and retrieved.

Three key Data Center issues were said to be:

  1. Types of DC workloads, e.g. transactions, computing, content servicing, archiving, analytics
  2. Variable computing, i.e. variable workloads based on type
  3. Data gravity: the physical placement of DCs will be shaped by available network connectivity, power consumption, cost of land and application characteristics, NOT necessarily the DC provider’s business location.

There’s a growing importance of “data factories” based on hyperscale and hyper-standardization. Some companies are expanding the type of DC workloads they’ll handle. For example, Amazon has traditionally offered content and archiving, but is now adding compute and transaction capabilities to its DCs.

DC agility is a never-ending journey. We’ve migrated from traditional batch processing to virtualized servers to converged infrastructure (server, storage and networking). This has resulted in much quicker service/application deployment times, from months to days to hours. Converged DC infrastructures will grow at a 40% CAGR to $17.8B in 2016, vs. a <2% CAGR for non-converged (i.e. separate) DC infrastructures.

SDN [Software Defined Networking] may provide new provisioning and management services that could result in even faster application deployment times (e.g., perhaps in minutes). SDN may be the technology of choice to inter-connect DCs, according to Mr. Villars.
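To make that provisioning speed-up concrete, here is a speculative sketch of what requesting an inter-DC path from an SDN controller could look like; the controller URL, endpoint and payload fields are hypothetical stand-ins, not any vendor's actual API.

```python
import requests

# Hypothetical controller address; a real deployment would use its
# vendor's or open-source controller's own REST API.
CONTROLLER = "https://sdn-controller.example.net/api/v1"

def provision_dc_link(src_dc, dst_dc, bandwidth_mbps):
    """Ask the controller to set up a path between two data centers."""
    payload = {
        "source": src_dc,
        "destination": dst_dc,
        "bandwidth_mbps": bandwidth_mbps,
    }
    resp = requests.post(f"{CONTROLLER}/paths", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. a path ID for later resizing or teardown

# Deployment shrinks to an API call instead of days of manual configuration:
# provision_dc_link("santa-clara", "ashburn", bandwidth_mbps=500)
```

Here are the other key points Mr. Villars made during his presentation: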

Impact of Variable Computing:

  • 10% of IT server assets will handle 50% of workloads
  • Solutions that boost IT agility for variable loads will dominate the management agenda
  • Radical changes ahead for application software licensing and life-cycle management
  • Competing in public cloud is all about maximizing revenue from variable computing

Impact of Data Gravity:

  • Growing distinction between data owners and data custodians
  • Cloud providers have or will become the new leaders shaping storage technology choices
  • Most new DCs will be located close to cloud building
  • Operating DCs can’t be a part-time job
  • SDN discussion shifts from intra-DC to inter-DC (connecting DCs)

The Future of The Data Center: it’s not just a place to keep computers and storage servers

  • The first point of contact with customers
  • The foundation for new business model(s)
  • Maintaining a DC is not a part-time job! “3rd Platform” IT environments change all the rules for capacity planning, which becomes much more complex and time-consuming.
  • Manufactured, not constructed, to provide the best services, agility and scale
  • The DC is the system and is the business of the provider (whether it’s used for internal or external customers)

Why the Datacenter of the Future Will Leverage a Converged Infrastructure:

IDC Group VP Matt Eastwood said that the “3rd Platform” Requires a Different Type of Infrastructure. The “re-aggregated” DC infrastructure should be an “Enterprise DC” with the following characteristics:

  • System Level performance
  • Legacy
  • Heterogeneous equipment/services
  • Bladed/Converged
  • OPEX optimized
  • Support a complex portfolio of applications
Image courtesy of IDC.

In addition, the “3rd Platform” is placing increased burdens on the DC to scale in terms of the number of users, the explosion in data, and more services/applications. This requires better programmability, more agility, an increased need for internal analytics, and better systems management in public/private/hybrid cloud environments.

70% of DC apps have now been virtualized. The next step is a converged infrastructure, with OPEX reductions overtaking CAPEX as the main cost driver. Such a converged infrastructure will provide flexibility, faster time to market for new services, and a 50 to 75% reduction in down-time incidents. Its fundamental characteristics include:

  • Integrated server/storage/network/management
  • General-purpose distributed workloads
  • Single vendor sale and support
  • Single SKU/complete system/support
Image courtesy of IDC.

IDC estimates the converged infrastructure DC will reduce cost per user by 50% each for server, network, power and cooling; and by 25% each for storage and facilities.
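To see what those percentages could mean in dollar terms, here is a short worked example; the baseline per-user costs are hypothetical round numbers, and only the percentage reductions come from IDC.

```python
# Hypothetical baseline costs per user, in dollars per year.
baseline = {"server": 200, "network": 100, "power_cooling": 120,
            "storage": 150, "facilities": 80}

# IDC's quoted reductions: 50% for server, network, power and cooling;
# 25% for storage and facilities.
reduction = {"server": 0.50, "network": 0.50, "power_cooling": 0.50,
             "storage": 0.25, "facilities": 0.25}

converged = {k: v * (1 - reduction[k]) for k, v in baseline.items()}
print(sum(baseline.values()), "->", sum(converged.values()))  # 650 -> 382.5
```

Under these illustrative assumptions, total cost per user drops by roughly 41%.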

IDC says the worldwide converged system market was only 3.9% of DC spending in 2012, but will grow to 12.8% of all DC spending by 2016. The firm forecasts 2016 CAGRs of 42.7% for software, 37.1% for storage, 40.9% for servers, and 40.1% for the (internal DC) network. Cisco was seen to be the big winner with a 17.4% increase in market share by 2016. Dell was forecast to be in second place with a 5.7% market share increase by 2016.

Conclusions:

Image courtesy of IDC.

The explosion in applications, data, devices and users challenges CAPEX and OPEX in traditional IT, driving the need for an integrated 3rd Platform Data Center, which will result in:

  • Lower Costs: Reduce DC floorspace, power and cooling, capital spend and operational requirements
  • Speed: Market forces drive need for faster time-to-market
  • New Apps: Users see opportunity to do something different
  • Flexibility: The need to efficiently automate and virtualize data centers calls for pre-tuned infrastructures

Looking Ahead:

The third article in this series on the 2013 IDC Directions conference will take a hard look at SDN as discussed in the session: Evolution or Disruption: Where Are We Headed with Software-Defined Networking (SDN)?  We will also examine why it’s very unlikely SDN will be used within a DC, though it has better potential for inter-connecting DCs.

—————————————————————————————————————

For an in-depth look at Data Center Dynamics from both a silicon and equipment vendor perspective, please attend the April 10th IEEE ComSoc SCV meeting in Santa Clara, CA.

Meeting logistics and RSVP information is at:  http://www.ewh.ieee.org/r6/scv/comsoc/index.php#current

Viodi View – 03/08/13

Bruce Wolk Being Interviewed by ABC 7.
Click to Read More

Telling the Whole Story

With audiences overwhelmed by information from all outlets, content needs to fit into soundbites and short, focused stories to hold the short attention span of today’s viewer and/or reader. Broadcast television has always been that way, and this point was proven to me again over the weekend. Still, for those viewers who want an unedited conversation, broadcast television, particularly the local news, just doesn’t work. One of the advantages of online video is that, for better or worse, the entire story can be told and the people being interviewed can give their entire viewpoint on a topic. Click here to read what I wanted on TV, but which ended up on the cutting room floor.


Big Data Central to Smart Energy

Image of an electronic door lock at the Parks Associates' Smart Energy Summit.
Click to View Video

Big Data was a recurring theme of the ViodiTV interviews at the 2013 Smart Energy Summit.  It is becoming a given that there will be low-cost connectivity between devices, both within the home and between the home and the cloud(s).  Big Data from smart energy results from the collection of information from sensors and exogenous sources (e.g. weather information) which, when mixed together, results in actions that yield greater efficiency or increased consumer control, convenience and comfort. Click here for a video overview that provides a glimpse of the 25-some interviews that will be released on the Parks Associates and ViodiTV web sites over the coming months.


2013 IDC Directions-Part I: 3rd Platform, Cloud Spending & Apps, Global Economic Trends by Alan Weissberger

Evolution of the IT platform according to IDC.
Image courtesy of IDC

In this first of two articles on the 2013 IDC Directions conference (March 5th in Santa Clara, CA), Alan Weissberger reports on the very impressive opening and closing keynote presentations.  IDC Chief Analyst Frank Gens’ opening keynote presentation stressed the importance of the “3rd Platform,” examined public vs private cloud adoption, and the most likely corporate apps that would be moving to the cloud. Professor Robert Reich’s closing keynote speech provided insight and perspective about the U.S. and global economy. Click here to read more.


Tele-Psychiatry – Better Outcomes While Saving Money

An example of a tele-psychiatry application is given in this video.
Click to View Video

$18 million in Medicaid savings and an over 80% reduction in per-patient, per-day costs ($2,500 to $400) is the windfall realized by the Palmetto State Providers Network (PSPN) due to the implementation of a statewide network that facilitates telemedicine to rural areas, according to a recent report issued by the Federal Communications Commission. PSPN is one of only 50 active programs that are part of the FCC’s Rural Health Care Pilot, which is about identifying ways to use the communications infrastructure to improve the quality of health care for rural Americans. Click here to read more.


Roku – The MSO Set-Top

Jim Funk of Roku describes their latest set-top boxes and their content strategy.
Click to View Video

Jim Funk discusses one of Roku’s newest set-tops – which is really a set-stick – that uses MHL as the interface between the TV and the set-top. Funk also explains how service providers, such as Time-Warner, are using Roku via authentication to allow people to stream to their televisions without need for a traditional set-top box. He also discusses Roku’s content strategy and how it complements both operators and consumer electronic device makers. Click here to view.


Some Tweets and Short Thoughts:

  • Looking forward to filming interviews at American Cable Association’s 20th Anniversary Summit over the next few days. Please email me any burning questions you would like me to pose to the elected officials I hope to be interviewing.
  • Didn’t realize that former GTE/Verizon content expert, Dick Jones, is now consulting for the NextGen Marketing Group. They provide marketing consulting services to telecom operators and others on a variable rate basis. Seems like a great gig for Dick, as he definitely has lots of knowledge to share (here is an example of his work from a presentation he gave when FiOS had just been announced).
  • Pre-cooling home as part of Demand Response program=fewer Demand Response opt outs and is a form of energy storage #ses2013 #ecofactor

The Korner – The Body Mind Connection – Part 1

The Bodywave brain wave reader is demonstrated.
Click to View Video

The Bodywave device from Freer Logic is a wristband that detects and reads brainwaves. The technology has already been deployed in various professional applications, and consumers will soon see this amazing technology.

Gwen Sorely explains some of the applications, including sensors in a steering wheel that would detect drowsiness or games to help relieve stress and improve concentration. As can be seen at the end of the video, this author had the opportunity to try it and came away impressed.

This is part 1 of a three-part video interview series with companies on the leading edge of bringing brain wave technology to the consumer. Click here to view.

2013 IDC Directions-Part I: 3rd Platform, Cloud Spending & Apps, Global Economic Trends

Introduction:

In this first of two articles on the 2013 IDC Directions conference (March 5th in Santa Clara, CA) we report on the very impressive opening and closing keynote presentations.  IDC Chief Analyst Frank Gens’ opening keynote presentation stressed the importance of the “3rd Platform,” examined public vs private cloud adoption, and the most likely corporate apps that would be moving to the cloud. Professor Robert Reich’s closing keynote speech provided insight and perspective about the U.S. and global economy.

Frank Gens on the Significance of the 3rd Platform (mobile, social, cloud, big data):

Evolution of the IT platform according to IDC.
Platform Evolution [courtesy of IDC]
For the past four years, IDC has been saying that the 3rd Platform for IT (mobile device accessed, cloud based, social networking, and big data analytics) will supplant the 2nd Platform (distributed computing via client/server/PC) solutions that are prevalent in the market today.  According to Mr. Gens, we are about to enter the second chapter of 3rd Platform adoption within the IT industry.

IDC asked IT executives and CIOs:  “In 3 years, what % of your total IT delivery will be through some form of cloud (public and private)?”   Survey results: 45.5%, with Private Cloud at 65% and Public Cloud at 35% of that share. The key message at this year’s IDC Directions conference was that the CAGR for the 3rd Platform will be 11.7% over the next seven years (see chart below), compared to 0.8% CAGR for the 2nd Platform.
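As a quick sanity check, the back-of-the-envelope arithmetic below (our sketch, not IDC's calculation) converts that cloud split into shares of total IT delivery and compounds the quoted CAGRs over the seven-year window.

```python
cloud_share = 0.455           # IT delivery via some form of cloud in 3 years
private, public = 0.65, 0.35  # split of that cloud share

print(f"private cloud: {cloud_share * private:.1%} of total IT delivery")  # ~29.6%
print(f"public cloud:  {cloud_share * public:.1%} of total IT delivery")   # ~15.9%

# Compounding the quoted growth rates over seven years:
print(f"3rd Platform cumulative growth: {1.117 ** 7 - 1:.0%}")  # ~117%
print(f"2nd Platform cumulative growth: {1.008 ** 7 - 1:.0%}")  # ~6%
```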

Chart from IDC showing projected spending on ICT from 2013 to 2020.
ICT Spending 2013-2020 [Image Courtesy of IDC]
Contrast those projections to the average industry growth rate of 2.9% and you can see how important the 3rd Platform is to the IT industry.  Even more compelling is that IDC forecasts that 90% of all new IT spending will be on 3rd Platform hardware and software!  That is despite the fact that it accounts for only 25% of the overall IT market today.

Next, IDC asked:  “How likely to move (any) IT workloads/apps to the cloud? (1 = not likely, 5 = very likely).”  Answer: ~3.4, evenly split between public and private cloud, as per the chart below, which also depicts Cloud Services Spending in 2016.

Cloud services spending and trends of apps according to IDC.
Cloud Services Spending [Image Courtesy of IDC]
IDC then enquired: “How likely to move (specific types of) IT workloads/apps to the cloud?”  The results are shown below and indicate that the most popular apps are related to mobile, social and big data.

Providing guidance to IT managers, Mr. Gens suggested a “Compete Checklist” for Chapter 2 of 3rd Platform deployments:

  • Master the New Scale of Sales to IT CIOs and Line of Business (LOB) executives
  • Build “Enterprise” Offerings on a Consumer base, which should be offered first (Enterprise-only IT offerings will be a niche market)
Image shows likelihood of apps moving to the cloud.
IT Apps Moving to the Cloud [Images Courtesy of IDC]
  • Exploit Mobility Beyond Smartphones & Tablets, e.g. cloud connected cars, microscopes with network interfaces, other IoTs/M2M communications
  • Expand Your Connections with Line of Business Executives who will be increasingly more responsible for IT purchases
  • Develop an “Industry PaaS” (Platform as a Service) Cloud Strategy
  • Expand Value from Silos to Mash-Up business solutions with multiple suppliers and technologies
  • Prepare for the “Death” of Dedicated IT which will give way to cloud and shared IT models
  • Follow (and Play a Key Role with) the Data for customers, which will result in market power

Robert Reich’s Key Points on U.S. and Global Economy:

  • U.S. government sequestration (automatic government spending cuts) will continue at least for the remainder of this fiscal year (through Sept 30, 2013). It’s likely to reduce U.S. GDP by ~0.5%.
  • The U.S. government will shut down on March 27th (when current funding authorization ends) unless Congress agrees on new funding levels. Both parties in Congress say that they wish to avoid a government shutdown, but doing so will require cooperation that’s not occurred in a very long time.
  • Austerity economics results in reduced government deficits, but also slows economic growth dramatically.
  • U.S. GDP growth of only 1.5% to 2% will result in continued high unemployment.
  • Median real wage is 8% lower than in 2000. As an example, the median wage of the largest U.S. company, Wal-Mart, is only $8.36 per hour.
  • 8.3% of new college graduates are unemployed (higher than the overall U.S. unemployment rate of 7.6%). Many that have jobs are “underemployed” or not working full-time.
  • Despite ultra-low interest rates, consumers have not been able to borrow as much as they’d like. That’s because banks don’t want to take on any more bad debt and many consumers don’t have a good credit rating.
  • Housing has been buoyed by investors buying property to rent out, rather than by owners occupying the homes they’ve bought.
  • No growth in Eurozone this year; most countries will be in a contraction or recession.
  • Especially with new government leaders taking over, China’s economic data is probably not accurate. It probably overstates growth and other economic activities.
  • (In response to a question) Corporate profits have been very high because “companies are doing more with less.” They lay off workers and cut costs to reduce expenses, but their top line is not growing very rapidly at all. Corporate profits are not likely to rise from here until GDP picks up, according to Prof. Reich.

Analysis:  Corporate profits cannot indefinitely grow faster than GDP growth or productivity. The theme of “doing more with less” (i.e. fewer workers doing more of the heavy lifting) is about played out. Companies won’t hire till they sense demand will improve, but that’s not likely anytime soon. As quoted in a recent New York Times article,

“There’s a fear that the economy is going to go down again, so the message you get from C.F.O.’s is to be careful about hiring someone,” said John Sullivan, a management professor at San Francisco State University who runs a human resources consulting business. “There’s this great fear of making a mistake, of wasting money in a tight economy.”

With uncertainty due to sequestration and a potential federal government shutdown on March 27th, we think U.S. companies will be very conservative in hiring as well as capital expenditures.  We think profit growth will slow or even turn negative in coming quarters.

Technologies to Monetize Data

[Editor’s Note: This is part 3 of a 3-part article. Click here to read Part 2.]

So far, we have learnt the basic definitions of Big Data from a Broadband Service Provider’s perspective and have understood what kinds of actionable insights may be obtained from such data.

However, before those insights can be extracted, the data must be stored and acted upon. There are some well-accepted technologies that allow us to do so, and we shall examine those next.

Storing and Processing Big Data

In the RGU [Revenue Generating Unit] decline example provided in the previous article, it’s usually not possible to determine a pattern behind a symptomatic decline without looking at the data, and often the answer is not obvious even then. It’s important to extract as much customer behavior information as is available, combine it with local, social or other sources of information that may be available, and to then perform analysis to understand the root cause behind the loss of RGUs. It may be possible to create a machine learning algorithm that will help “learn” behavior over time.
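As a toy illustration of such a learning approach (not any provider's actual model), the sketch below trains a scikit-learn classifier on hypothetical per-subscriber features to score the risk of RGU loss; the feature set and training data are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per subscriber: monthly usage change (GB),
# support calls, monthly bill ($), tenure (months).
# Label: 1 = dropped an RGU, 0 = kept it.
X = np.array([
    [-5.2, 3, 80, 12],
    [ 1.1, 0, 60, 48],
    [-9.8, 5, 95,  6],
    [ 0.4, 1, 55, 36],
])
y = np.array([1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a current subscriber: estimated probability of RGU loss.
new_subscriber = np.array([[-4.0, 2, 85, 10]])
print(model.predict_proba(new_subscriber)[0][1])
```

In practice such a model would be trained on large volumes of historical records and re-trained as behavior shifts, which is where the "learning over time" comes in.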

But before we start acting on the data, we must first store it in such a fashion that we can act on it, using technologies that offer scalable storage, reliable and scalable processing, and an affordable price.

While there are many technologies from reputable providers on the market, not everything fits every problem. For instance, there are problems that can be solved in a batch-processing mode, and there are other problems that require a real-time solution.

For problems that require batch-processing, Apache Hadoop is the most common solution in the market today. Hadoop is a framework that allows for the use of commodity hardware to store and process large data sets across clusters of computers. Each computer in Hadoop offers compute power as well as storage. Hadoop is built to scale from a single computer to thousands of computers, and best of all, it is Open Source software – with a very tempting price point – it is free!

Apache Hadoop is an implementation of MapReduce, a programming model for processing huge datasets by parallelizing the work across clusters of computing devices. The search giant Google is credited with implementing MapReduce and using it to solve the decidedly Big Data problems in search.

MapReduce involves two steps:

  • Map: A compute device takes a problem, splits it into multiple sub-problems and hands them over to other compute devices to solve in parallel. This can happen recursively (other devices can split the problem and hand over to other compute devices to solve the sub-sub-problems).
  • Reduce: The results from solving all the sub-problems are combined to provide the answer to the original problem.

While details of MapReduce in general and the Hadoop architecture specifically are outside the scope of this article, suffice it to say that Hadoop offers a distributed file system, as well as the ability to perform parallel processing of extremely large data sets and all the tools needed to store and process the information that most Broadband Service Providers (or other enterprises) may require.
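To make the two steps concrete, here is a minimal pure-Python sketch of the MapReduce pattern applied to the classic word-count problem; a real Hadoop job would distribute the map and reduce tasks across a cluster, which this single-machine simulation only hints at.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in one document."""
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce: combine all partial counts into final totals."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

documents = ["the quick brown fox", "the lazy dog", "the quick dog"]

# In Hadoop, map tasks run on many machines in parallel and the framework
# shuffles their output to the reducers; here we do it all sequentially.
all_pairs = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(all_pairs))  # {'the': 3, 'quick': 2, 'brown': 1, ...}
```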

For real-time problem solving there isn’t really a single solution that works well for all cases, although multiple companies are attempting to solve this problem. Twitter invented a solution called “Storm,” which is open source, simple to use, and works with almost any programming language. Storm is often used with a message broker called Kafka [open sourced from LinkedIn], and the combination is scalable and fault-tolerant.
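As a rough illustration of the ingestion side of such a pipeline, the sketch below uses the kafka-python client to consume a stream and keep a running count per event type; the topic name, broker address and message format are hypothetical, and a real Storm topology would shard this aggregation across many workers.

```python
# pip install kafka-python
import json
from collections import Counter

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "subscriber-events",                 # hypothetical topic name
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

counts = Counter()
for message in consumer:  # blocks, processing events as they arrive
    event = message.value
    counts[event.get("event_type", "unknown")] += 1
    print(dict(counts))  # running totals, updated in real time
```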

A word of caution here: Hadoop and the technologies around it may appear to be easy to work with (readers can download the software and try it for free on their own home computers!), but for any meaningful processing of this precious data, it is highly recommended that Data Scientists be engaged and allowed to build and maintain the Service Provider’s Big Data system with help from the Service Provider’s engineering staff.

Of course, there is no dearth of companies that would be happy to provide support with an installation of Hadoop as well as setting it up and helping maintain it down the road.

Monetizing Precious Data

As the reader must have gathered from this series of articles, good data is precious. But monetizing Big Data requires a serious commitment and sustained effort on the part of the Service Provider with help from qualified Data Scientists.

To realize the value of Big Data, an Independent Broadband Service Provider must:

  • Collect and sanitize all useful data
  • Ensure access to powerful Big Data processing systems
  • Work with experienced Data Scientists who can help identify the data to collect, the infrastructure to use, and possibly create machine learning systems to extract value from such data

However small a Broadband Service Provider may be, it is likely that its customers generate enough data to be analyzed for monetizable, actionable insights. It is early days in the Big Data game, and those Service Providers who recognize data’s importance and prepare now stand to be the big winners over time.

Contact Kshitij at kshitij.kumar@viodi.com