Highlights of 2015 TiECon Part II – Cloud Track

Introduction:

Photo of TiE event.

This is the second article in our coverage of this year’s TiECon conference. It focuses on selected presentations and panel sessions from the Cloud track on May 15th. That track covered planning, the operational challenges of cloud infrastructure, the business and technical challenges of migrating services to the cloud, and the still problematic state of cloud security (which badly lags the advances in compute, storage, and even networking).

The first article on 2015 TiECon summarized the two opening Grand Keynotes. It can be read here.

Keynote on Enterprise Cloud Trends: Mark Interrante, VP of HP’s Cloud Business Unit Operations

Interrante is driving HP’s OpenStack-based cloud computing effort. The HP Helion Platform¹ is a combined Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) offering for cloud-native workloads. Helion is based on the popular open source projects OpenStack® and Cloud Foundry™. Mr. Interrante described HP’s Helion offering as a hybrid cloud, which combines the flexibility and convenience of public cloud with the security and control of private cloud.


Note 1. HP states that Helion is:

“A private cloud that enables IT to protect sensitive information, control and broker services across multiple clouds, and deliver exceptional cost advantages. A private cloud that is proven today and delivering on the vision for tomorrow. A vision for a Hybrid World. That cloud is HP Helion.”


“The path to hybrid begins with a private cloud, built on open standards, using open source software and designed for compatibility and interoperability from the start,” Interrante said. He enumerated several advantages of open source code, including: software transparency, increased security from review by “many eyes,” code re-use, and open cryptography.

For years, security has been the biggest issue for cloud users – much more so for public than for private cloud. “Security is a prominent concern for all businesses and organizations of every size,” Mark said. The concern is certainly valid: 2014 was “the year of the breach,” and breaches have been accelerating since 2011.

“Cloud security is NOT one size fits all. It’s critically important to understand how to isolate a fleet of (cloud) services and applications you use,” he added. Other points Mark made related to cloud security:

  • Security must be provided in, under, across and to/from the cloud or interconnected clouds used by the enterprise customer(s).
  • The security strategy must go beyond merely following compliance procedures.
  • Threats include: data breaches, data loss, account or service hacking, insecure interfaces and/or APIs, Denial of Service (DoS) attacks, malicious insider attacks, abuse of cloud services, insufficient due diligence, shared technology vulnerabilities.
  • HP has active Threat Intelligence & Research teams that are working to improve security for their products and services.

In response to the moderator’s question on Docker² and “containers,” Mark replied: “Docker-type containers have seen faster uptake and more interest than any new software technology.”


Note 2. Docker is an open-source project that automates the deployment of applications inside software containers by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux.
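
As a minimal illustration (assuming the Docker daemon is running and the Docker SDK for Python is installed via pip install docker; the image and command are arbitrary examples), the sketch below launches a throwaway container that shares the host’s Linux kernel but gets its own isolated filesystem, process table, and network namespace:

    import docker  # Docker SDK for Python

    client = docker.from_env()  # connect to the local Docker daemon

    # Run a short-lived container and capture its output;
    # remove=True deletes the container once the command exits.
    output = client.containers.run("alpine", ["echo", "hello from a container"],
                                   remove=True)
    print(output.decode().strip())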


In summary, Mark said:

“Cloud is driving innovation, changing the IT landscape, and transforming the way companies do business (e.g. everything “as a service”). Every organization is becoming a software company built on cloud computing and storage. The proliferation of mobile devices, connected consumers and machines has spawned new business models based on cloud. IoT will accelerate that trend.”

Cloud Market Trends and Needs:

This panel of IT managers and a CIO addressed issues related to large-scale cloud deployments and the problems they are facing, especially cyber security. Alan Boehme, CIO (Global IT) & Chief Enterprise Architect at Coca-Cola Co., provided by far the most valuable information. To wit:

  • It’s very hard to move legacy applications to the cloud.
  • Public cloud is a quick and easy way to develop new apps, especially for start-ups.
  • The hybrid cloud model is probably best for midsize companies that are able to segregate their computing and storage needs between private/mission-critical and secondary/tertiary apps.
  • The level of security attainable on public clouds is limited.
  • Public cloud issues include: providing the equivalent of an indemnification clause; reliability, robustness, and performance of Open Source software used; skill set needed for cloud security.

Suneet Nandwani, Sr. Director of Cloud at eBay, noted that eBay/PayPal uses an internal private cloud, largely because they can guarantee a higher level of security (vs. a public or hybrid cloud). Suneet mentioned that hardware-level security (e.g. built into various SoCs) is desirable and available from ARM, Intel, Freescale, and others.

Nandini Ramani, VP, Engineering at Twitter, said “Twitter has a Private Cloud, but is finding it hard to absorb start-ups. We have a tendency to shift to Public Cloud, but will first move to a Hybrid Cloud.” Nandini noted what most public cloud users are well aware of: “the tools on Amazon AWS³  are not available anyplace else.”


Note 3: In the 2015 Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, Gartner Group placed Amazon Web Services in the “Leaders” quadrant and rated AWS as having both the furthest completeness of vision and the highest ability to execute. AWS groups its data centers into “regions,” each of which contains at least two availability zones. It has regions on the East and West Coasts of the U.S., and in Germany, Ireland, Japan, Singapore, Australia, Brazil, and (in preview) China. It also has one region dedicated to the U.S. federal government. It has a global sales presence.

From the Gartner Group report:

“AWS has a diverse customer base and the broadest range of use cases, including enterprise and mission-critical applications. It is the overwhelming market share leader, with over 10 times more cloud IaaS compute capacity in use than the aggregate total of the other 14 providers in this Magic Quadrant. This has enabled it to attract a very large technology partner ecosystem that includes software vendors that have licensed and packaged their software to run on AWS, as well as many vendors that have integrated their software with AWS capabilities. It also has an extensive network of partners that provide application development expertise, managed services, and professional services such as data center migration.

AWS is a thought leader; it is extraordinarily innovative, exceptionally agile, and very responsive to the market. It has the richest array of IaaS features and PaaS-like capabilities. It continues to rapidly expand its service offerings and offer higher-level solutions. Although it is beginning to face more competition from Microsoft and Google, it retains a multiyear competitive advantage. Although it will not be the ideal fit for every need, it has become the “safe choice” in this market, appealing to customers who desire the broadest range of capabilities and long-term market leadership. It is the provider most commonly chosen for strategic adoption.”


Hybrid Cloud leaves the user in an “awkward state,” where you’re not managing your own destiny (on the public portion), nor fully taking advantage of the popular services and applications available for public cloud.

Mr. Boehme said that orchestration is missing from many Cloud offerings, especially those that span multiple clouds.  [Orchestration involves the automated arrangement, coordination, and management of applications, services, processes, and workloads. A cloud orchestrator is “software that manages the interconnections and interactions among cloud-based and on-premises compute/storage. Cloud orchestrator products use workflows to connect various automated processes and associated resources.”]
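
As a toy illustration of that workflow idea (invented step names; real orchestrators add retries, rollback, approvals, and calls to cloud provider APIs), the Python sketch below runs provisioning steps in dependency order:

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    def provision_vm():      print("provision compute instance")
    def attach_storage():    print("attach block storage")
    def configure_network(): print("configure virtual network")
    def deploy_app():        print("deploy application")

    # Workflow graph: each step maps to the set of steps it depends on.
    workflow = {
        provision_vm:      set(),
        attach_storage:    {provision_vm},
        configure_network: {provision_vm},
        deploy_app:        {attach_storage, configure_network},
    }

    for step in TopologicalSorter(workflow).static_order():
        step()  # in a real orchestrator, each step would invoke an automated process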

“We have had the same set of network technologies and tools for the last 15 years and need new ones,” Alan said. He doesn’t believe SDN is the answer. “SDN will take a long time to be adopted by large enterprise customers,” he added.

Mr. Nandwani said the cloud has had a huge impact on eBay/PayPal. Approximately 90% of PayPal’s front-end, customer-facing interface is based on cloud. A key requirement for PayPal’s cloud infrastructure was the ability to scale quickly without compromising availability or agility. OpenStack is playing a major role in PayPal’s vision by enabling a private cloud that helps the company’s developers quickly respond to customers’ increasing demands and constantly changing needs, while providing a stable platform for customers to pay for their purchases.

Cloud Architecture and Technology Trends:

The panelists in this session covered cloud architectural issues from the vendor (HP, Cisco), networked data center operator (Equinix), and cloud start-up (The Fabric) perspectives. The participants were:

  • Atul Garg, Vice President & GM at Hewlett-Packard
  • Ken Owens, Chief Technology Officer, Cloud Infrastructure Services at Cisco Systems
  • Sindhu Payankulath, VP, Global Network Engineering & Operations at Equinix
  • Prem Talreja, Marketing & Business Development Advisor at The Fabric

Here were the key points made:

HP: Use cloud to automate routine tasks to improve data center operations. The real challenge is how to create a platform to automate delivery of web services that are customized to individual company demands.

Equinix: We manage a multi-vendor network that connects the data centers we rent. Our customers get: compute power, storage, space, power, interconnection of compute/storage resources. Sindhu is responsible for three Equinix regional operations areas (AMER, EMEA and APAC) as well as Global Service Delivery.

While not mentioned by Sindhu, Equinix offers “Cloud Exchange,” which provides “secure, direct, flexible connections to a wide range of cloud service providers.” It’s described by Equinix as “an advanced interconnection solution that enables seamless, on-demand, direct access to multiple clouds from multiple networks in more than a dozen locations around the world.” Please see the Addendum below.

Cisco: The biggest problem cloud solves is “to help businesses become more agile to enable them to quickly change and pivot.” Cisco is trying to provide a “cloud interconnect” capability to meet that need. The goal is to let customers create, run, maintain, and change cloud resident applications.

HP: Large companies running IBM mainframe applications are NOT going to move to cloud computing. However, midsize companies can shorten the time to provision a server by moving to Private Cloud (which of course HP provides). Atul didn’t even mention Public Cloud, which might be a better choice for SMBs.

Cisco: Public cloud is outside of a company’s security and governance policy and compliance domains. As a result, “Private cloud is much more popular than most people realize.” Cisco believes there’s a 60/40 split between Private and Public clouds, which might grow to 50/50 in the next few years. Interestingly, there was no mention of Hybrid cloud or where that might fit for medium size companies.

Mr. Owens identified two huge “gaps” in Cloud:

  1. Too many tools and options to quickly develop new applications that run in cloud-resident data centers.
  2. Orchestration of legacy systems with new ones.

Cisco is using OpenStack, while VMware and Equinix were said to be using open APIs (no details were given).

HP: Customers want to build a Private cloud to operate their compute/storage requirements and then optimize them. HP also sees two huge cloud gaps, but they are different from those identified by Cisco above. From HP’s perspective the cloud gaps are:

  1. The ability to dynamically move workloads from private to public cloud (with the computational results often returned to the private cloud). “We’re not there yet,” Atul said. There was no mention of the technique called “cloud bursting,” which was supposed to accommodate such dynamic, back-and-forth movement of workloads and results between private and public clouds. Evidently, that isn’t happening – at least not on a large scale. (A conceptual sketch of the bursting policy follows this list.)
  2. Governance: how to abstract out policies and then develop security to meet them. “The industry needs to figure out how to automatically lock down servers that have been compromised,” he added.
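
Purely as a conceptual sketch of the bursting policy described in item 1 above (the capacity figure and names are invented, not any vendor’s product):

    PRIVATE_CAPACITY = 100  # hypothetical: max concurrent jobs the private cloud can host

    def schedule(job_name, private_load):
        """Keep work in the private cloud by default; burst the overflow to public."""
        if private_load < PRIVATE_CAPACITY:
            return "private"       # normal case: data and compute stay in-house
        return "public-burst"      # overflow: run in a public cloud, with results
                                   # returned to the private cloud afterwards

    for load in (50, 99, 100, 140):
        print(load, "->", schedule("nightly-batch", load))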

HP recommends migrating workloads from Amazon or VMware clouds to OpenStack-based cloud platforms (like theirs, of course). They suggest the foundation of such a cloud platform be a combination of open source software + Cloud Foundry⁴ + OpenStack.


Note 4. Cloud Foundry is the industry’s Open PaaS (Platform as a Service) and provides a choice of clouds, frameworks and application services. As an open source project, there is a broad community both contributing and supporting Cloud Foundry.


Addendum:

In a whitepaper titled What to Know Before You Migrate to Cloud, Lauren Gibbons Paul proposes a list of questions for cloud service providers related to security and compliance. The questions should be tailored to each organization, industry, and set of compliance requirements, but Lauren suggests these basic ones first:

  • How much experience do you have in data center services? And in what industries?
  • Do you have experience in our industry with customers that have similar compliance needs?
  • Where will my cloud data reside? Do you own your data centers, or do you lease from a third party?
  • Do you have industry-leading physical and logical security? Describe technologies used and best practices for both types of security.
  • Do you use industry standard methodologies like ITIL (Information Technology Infrastructure Library)? What is your security and data reliability track record?
  • How fast could you recover in the event of a successful attack or disaster?
  • How transparent are you with customers?
  • Do you have a third party certify your security measures and compliance with industry regulations like the Sarbanes-Oxley Act of 2002?


Up Next:

The third and final article in this 2015 TiECon series will be on highlights of the IoT track and Cisco’s closing IoT Keynote speech, which clearly defined IoE (Internet of Everything) and gave a glimpse of where Cisco is investing in this space. That and all other Viodi View articles by this author can be read here.


Addendum: Email received May 31, 2015 from Equinix on their Cloud offering:

“The cloud paradigm is not a passing fad. Most enterprises are in the process of figuring out how to adopt the cloud model for agility and elasticity reasons.  In many cases, their move to the cloud is also multi-cloud in nature. That is, the applications span across multiple private and public clouds because all the data and processing needs cannot be fully satisfied by the services hosted within a single cloud. For many of these workloads, the CIOs mention that they cannot use the public Internet because their high performance, availability and security requirements cannot be adequately satisfied. 
Equinix Cloud Exchange, an SDN driven platform, provides a high performance, secure, and highly available alternative to the public Internet that is available globally across multiple markets. Furthermore, Equinix Cloud Exchange allows enterprises to get access to all the major Network Service Providers and Cloud Service Providers in a timely (a couple of days instead of weeks) and cost effective (using a single port versus separate dedicated lines) manner. Equinix Cloud Exchange currently is integrated with most of the major Cloud Service Providers with respect to provisioning and service assurance, and it can be accessed both via a portal and also APIs.”


Highlights of 2015 TiECon Grand Keynotes

Introduction:

A photo of Hussain Aamir of CenturyLink.
CenturyLink’s Hussain Aamir

Over 4,300 delegates attended 2015 TiECon¹ – the largest global conference on entrepreneurship. The conference was held May 15th and 16th in Santa Clara, CA.

In this first TiECon article, we summarize the two Grand Keynote conversations from the first day (Friday, May 15th) of the conference. Future articles will cover keynotes and panel sessions from various tracks, such as Cloud, Security, IoT, and Breakthrough Thinkers.

Note 1. The Indus Entrepreneurs (TiE), which creates the event, has its headquarters in Silicon Valley and has chapters in 61 cities in 20 different countries. It is the world’s largest non-profit organization for entrepreneurs.

Highlights of Grand Keynote 1. – Jack Welch (ex-CEO, GE) and Suzy Welch (co-author of “The Real-Life MBA”):

Jack: Since the 2008-2009 recession ended, companies are trying to do more with less and the pace of change has accelerated. An employee shouldn’t wait over one month in a non-creative company environment if he or she is an innovator.

Suzy: Corporate America has thousands of different ways to say NO, while entrepreneurs are YES people who must get out and start their own companies.

Jack (about his experiences in India): I couldn’t believe the intellectual capacity of India. The people are smart, aggressive, courteous, and always searching. I’m basically an Indian salesman.

Suzy: There seem to be herds of unicorns (startups valued in excess of $1B) galloping between San Francisco and Santa Clara. At the SF Four Seasons bar, we overheard tech startup talk that made our heads spin.

Jack: Startups today are different from the DOTCOM era (1998-2001) in that they have real cash flow, cause disruption (of industries and products/services), and are entering large markets. They are not “follies” or “just apps companies.”

Jack: A PhD in tech is a ticket to the moon (this author STRONGLY DISAGREES), but it’s also nice to have an MBA.

Avoiding “career purgatory:” The status quo is dangerous. Set a timetable for how long you (the employee) are going to stay with a company if stuck with a bad boss or an indifferent organization/bureaucracy. Don’t be negative during your stay at the company you may soon leave.

Suzy: Over the past few years, only about 10% of employees generally know where they stand within their company and have a sense of a career trajectory. At Google, it’s 60%. Most employees feel disillusioned and disengaged. Many come to work each day hating their job.

Leaders need to be turned on by the success of their people. The key is to build great product teams. Get smart people, energize and excite them, then let them go (and progress their agendas/initiatives).

Jack: There’s much quicker speed in the workplace today, because “everyone knows everything.” [Presumably that’s because of lightning quick information flow due to the Internet, social networking, mobile apps, instant messaging, texting, etc]. Companies need to be more transparent than ever before due to global competition. It’s imperative to get bureaucracy out of the company. Flatter (organizations), faster (decision-making) is needed to compete today in all types of companies.

Lessons learned: Act faster, fail fast, if it doesn’t work  – fix it. There’s no room for caution in any business today.

When asked about his life and noteworthy accomplishments, Jack said he can’t address his legacy, because “legacy is a bore.”

Suzy said Jack has an incredible curiosity about what’s happening and why. She gave an example of Jack quizzing a taxi driver in a third-world country about everything related to the place. When they arrived at their destination, the taxi driver was completely overwhelmed by Jack’s close questioning.

Jack’s closing remark: “India is all about brain power. We went there for (lower) cost, but found intellect.”


Grand Keynote 2. – Aamir Hussain (EVP & CTO, CenturyLink), Tom Reilly (CEO, Cloudera), and Gary Gauba (Founder & CEO, CenturyLink Cognilytics) – Transformational Journey Towards the New Data Economy:

CenturyLink is the 3rd largest telco in the U.S. and operates on 5 continents, despite having only a wireline footprint. In recent years it has acquired Qwest/US West, Embarq (formerly Sprint Local), Savvis, and Cognilytics. CenturyLink serves 98% of Fortune 500 companies, and 20% of the world’s internet traffic flows through its network.

Cloudera is revolutionizing enterprise data management by offering the first unified platform for Big Data. It uses (Apache Open Source) Hadoop, which enables distributed parallel processing of huge amounts of data across inexpensive, industry-standard servers that both store and process the data, and can scale without limits.
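
To make “distributed parallel processing” concrete, here is a minimal word-count job written as Hadoop Streaming scripts in Python – a sketch only, since how the job is packaged and submitted depends on the cluster. Hadoop runs many mapper copies in parallel, each against a locally stored block of the input, then sorts the mapper output by key before feeding it to the reducers:

    # mapper.py – emit "word<TAB>1" for every word read from stdin.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    # reducer.py – Hadoop delivers mapper output sorted by key, so all counts
    # for a given word arrive consecutively; sum each run and emit the total.
    import sys

    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")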

Cognilytics (now part of CenturyLink) is a Big Data/analytics-as-a-service company.

The lobby of CenturyLink's technology center in Monroe, LA.
CenturyLink’s Technology Center of Excellence

Century Link (CTL) recently opened a huge “Technology Center of Excellence” in Monroe, Louisiana. It includes a technology research and development lab, a network operations center and collaborative office and meeting space. In the Center, employees with network, cloud, information technology and other skills will work together to create innovative products and services for CenturyLink’s customers.

Aamir, who holds 11 telecom-related patents, said CTL has transformed itself from a traditional telco (providing only network connectivity) to an IT services company (with a full range of managed services). There are thousands of applications running on the CTL network (we suspect most of these came from the Savvis acquisition in 2011).

“More data is being created today than companies can process,” Mr. Hussain said. And that trend will only accelerate with IoT devices sending massive amounts of collected/monitored data to the cloud. While old data was said to have “gravity,” new data (from sensors and mobile/wearable/IoT devices) will be processed by cloud-resident compute servers.

Hussain believes there’s a huge market for hybrid (hosted private + on-premises) cloud. His very credible thesis is that older IBM mainframe applications will continue to run in on-premises customer data centers, while new applications will be developed and invoked from a hosted private cloud. That makes for a “static” hybrid cloud solution, which doesn’t have to deal with the thorny (and unresolved) problem of bursting from private to public cloud with data results being stored back in the private cloud for security, safety, and governance/compliance.

“Cyber security is seen as a huge opportunity for CTL. It’s top of mind for every customer, who asks: How do I protect my business?” As 20% of global data traffic passes through the CTL network, the company strongly believes it has a responsibility to protect it, Hussain said.

[Tom Reilly said that Cloudera was using on chip encryption from Intel and cyber security intelligence in Hadoop to protect their customers’ data.]

Summing up, Hussain provided this advice to service provider companies: “Be agile, nimble, listen to customers. Big data has and will continue to change (disrupt?) many business models.”

Gary Gauba gave this advice for entrepreneurs: “Dream big and go make it happen. Take the ups and downs of your entrepreneurial journey in stride. Believe in yourself.” Gary suggested that CenturyLink and Cloudera were good companies for entrepreneurs to partner with.

In a post-conference email to this author, Gary expressed his thoughts on the TiECon session and its relevance for the “new data economy”:

The transformational journey to the new data economy is a common theme and has sparked a lot of interest. The thesis behind this topic is big data, the evolution of technology, and serving the omni-channel customer. At TiECon, Aamir Hussain, Tom Reilly and I presented a grand keynote discussing the implications of the cloud, big data and the Internet of Things (IoT).

The question on everyone’s mind is: How does my organization embark on the journey of the new data economy?  Organizations are hoarding terabytes of data — only a small fraction is actually being monetized, and the rest gets lost.

As technology leaders, Cloudera and CenturyLink Cognilytics are looking at ways to transform processes and interactions with customers to ultimately reduce costs and improve efficiency. CenturyLink Cognilytics and Cloudera are working together on a mission to help businesses of all sizes monetize this data as a strategic asset, transforming raw data into actionable and valuable insights that help them leap-frog their competition.

CenturyLink showcased itself as an 80+ year old, entrepreneur-like company that has built grand-scale technology centers of excellence and is leading the charge on enterprise-grade technology solutions.

On TiECon 2015:

It was a great turnout at TiECon. Thousands of budding entrepreneurs, venture capitalists, executives and inquisitive minds listened to keynotes, participated in breakout sessions and engaged with start-ups.   


References:

Video of the 2nd Grand Keynote: https://www.youtube.com/watch?v=f6hdyCxFTVE

Interview with Aamir Hussain of CenturyLink: https://www.youtube.com/watch?v=4noR3WuswP4

CenturyLink’s gigabit fiber expansion in 17 states targets SMBs:
http://community.comsoc.org/blogs/alanweissberger/centurylinks-gigabit-fiber-expansion-17-states-targets-smbs


Postscript:

On May 19th, CTL announced it had been identified by industry analyst firm Gartner, Inc. as a Visionary in the 2015 Magic Quadrant for Cloud Infrastructure as a Service, Worldwide report.

“In the fast-moving cloud market, CenturyLink continues to differentiate in hybrid IT innovation with our advanced cloud services and complementary agile infrastructure, network and managed services,” said Jared Wray, senior vice president, platforms, at CenturyLink. “The velocity of our cloud innovation continues to intensify, with our agile DevOps approach delivering new features and functionality that delight our customers.”

With the recent acquisitions of Orchestrate, Cognilytics and DataGardens, as well as global expansions of its cloud node locations and data center footprint, CenturyLink continues to advance its managed services, cloud and colocation offerings for enterprises.

Gartner analysts Lydia Leong, Douglas Toombs and Bob Gill authored the Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, report, published on May 18, 2015. Evaluation for the report was based on vendors’ completeness of vision and ability to execute.


IDC Directions 2015: Major Network Transformations Needed to Adapt to the 3rd Platform

Introduction:
Network realignment was a very hot topic at IDC Directions 2015 last Wednesday in San Jose, CA. We review selected presentations covering the new mobility and cloud network transformations needed for workloads that will reside on the 3rd platform (cloud, mobile, social business, big data/analytics).

The major wide area network (WAN) transformation needed is one that moves from remote/central site private line/virtual private line connectivity to all sites having a reliable, available, and high performance connection to one or more Cloud Service Providers (CSPs). New strategies and partnerships are forming to address these challenges for wireless and wire-line carriers/MSOs as well as for newer players providing cloud connect solutions such as cloud exchanges.

Presentation Summaries & Take-Aways:
(1) During an early bird session on the big SMB Technology Reset, IDC’s Ray Boggs noted that on average, SMB outperformers (those citing net revenue gains in the past year) were 61% more likely than the average SMB to prefer cloud delivery over on-premises deployment when adopting new IT solutions. Laggards (those citing net decreases in revenue over the past year) have almost the same response rate as the average. With a cloud access/delivery-first model, SMBs need to revamp their WANs from the typical point-to-point private line/virtual private line model to one where all sites have high-speed/high-availability access to cloud compute and storage resources.

Ray added that those same SMB outperformers are much more likely than the average SMB to prioritize mobile support (BoD, 3G/4G, WiFi) as a key 2015 spending priority. In particular, Small Business Outperformers are 58% more likely, and Mid-Market Outperformers 60% more likely, than the average SMB to have a solid mobile workforce strategy in place by 2015.

(2) In his morning keynote presentation on Tech Disruption and Data Center Transformation, IDC’s Rick Villars reported that only 11% of WAN managers say they don’t need to change their networks to accommodate cloud services (likely because they weren’t planning to use them anytime soon). The remaining 89% are pursuing multiple options to realign their networks from the typical branch office-to-central site connectivity to more of a star topology, where the majority of compute and storage services are delivered from one or more clouds. Some of the questions those managers were said to be concerned with were:

  • Where’s Your Data? Is it stored locally, cached, or in the cloud?
  • What’s In Your Service Catalog? For access by both internal line of business’ and external customers/partners.
  • Is Your Network Congested? If so, how to alleviate it without too much over-provisioning?
  • 50% of new IT hardware will be bought as a “converged bundle” in 2018. New software defined models (OpenStack, Hyper Convergence, Software Containers, etc) will influence IT hardware purchases.
  • 58% of IT budget in 2016 will be for managed services.

(3) In a very intriguing presentation on Industry Clouds for line of business (LOB) to line of business communications, IDC’s Scott Lundstrom made these key points:

  • Numerous examples exist in life sciences, biotech, financial services, retail, manufacturing, government, healthcare, and energy
  • The number of Cloud Industry Platforms will expand to +500 by 2016, generating over a billion dollars in IT spending
  • Industrial Data Lakes – Big Data on industry-specific platforms (e.g. GE, Merck, UHG)
  • Industry platforms will disrupt 1/3 of the Top 20 Market Leaders in most industries by 2018
  • Industry Cloud Participants include: existing enterprise suppliers, emerging cloud platform operators and networks, industry process and community specialists, services, software, and hardware vendors. Effectively, one Line of Business (LOB) to another LOB.
  • New joint ventures will emerge
  • Industry developer communities will gather and grow
  • End users becoming suppliers – Global 2000
  • LOB-2-LOB is the next B2B

Digital networks (LANs and WANs) are having a huge impact and disrupting business models:

  • Innovation Accelerators drive change in every industry
  • Connected products create new service opportunities
  • Improving the process with sensors and automation
  • Distribute intelligence and determine the next best action

(4) The spot-on highlight of this year’s IDC Directions for me was Courtney Munroe’s presentation, “The Future of Telecommunications Networking: Resurgence or Obsolescence?” With digital traffic and content continuing their exponential growth trajectory, and ARPUs flat or declining, both wireless and wireline telcos have an unsustainable business model. What steps they take to ensure their survivability depends on the market they’re addressing: wireless, residential broadband, enterprise wire-line, or cloud connect.

Consumer (wireless and residential broadband) market requirements for telcos:

  • Manage the mobile data storm (Courtney didn’t say how – data caps?)
  • Recognize that pure play connectivity/Internet access is dead. Instead, implement a multi-play strategy (Verizon, AT&T, and Comcast have certainly done that with their double and triple play bundles)
  • Create an Over The Top (OTT) strategy – either alone or with partner companies. An example is Vodafone partnering with Dropbox to deliver cloud based storage for smart phones.

Enterprise market requirements for telcos: develop a Cloud Hub Enterprise WAN. This is best illustrated in the chart below titled: Enterprise WAN Requirements vs Internet-based Cloud Connectivity

IDC - Slide 6 - Task at Hand
The Task at Hand: Developing the CSP Enterprise – slide courtesy of IDC

Instead of the plethora of connectivity choices for business customers to interconnect their geographically dispersed locations (private lines, Ethernet virtual private lines/LANs, IP-MPLS VPN, IP SEC VPN, etc) Courtney suggested that all physical sites should be cloud connected. The three choices today, depicted in the illustration below, are: public Internet, private line to CSP POP, and something equivalent to Verizon’s Private IP (one of several network operators that have a cloud network solution).

IDC - Slide 8 - Cloud Connected Devices.
Cloud Connect Choices – slide courtesy of IDC

Among cloud networking solutions similar to Verizon’s Private IP: AT&T NetBond, Orange’s Business VPN Galerie, NTT Com’s Enterprise Cloud (for NTT’s private cloud service only), CenturyLink/Savvis IP-MPLS VPN, and specifications from the Metro Ethernet Forum on Carrier Ethernet for Cloud Service Delivery (although we don’t know of any network or cloud providers that have implemented them yet).

NFV was said to be “the new holy grail” for network operators, as they’d then be able to virtualize and automate service creation and delivery. NFV examples include: vCPE, vFirewall, vVPN (?), and virtual set-top boxes. Courtney said that telcos might be able to save 25% on operational costs and provide cloud-based services. He identified AT&T, Telefonica, NTT/Virtela, and China Telecom as telcos that have announced NFV initiatives (Orange is also a leader in testing and deploying NFV at its San Francisco research center). AT&T was quoted as saying that by 2020, 70% of its network would be virtualized.

When it comes to global revenues and profits, the telco space is very concentrated with five major players: AT&T, NTT, Verizon, DT, and China Mobile as per the graphic below:

IDC - Slide 11 - Even more Connections
Global Earnings: Even more Consolidation – slide courtesy of IDC


Consolidation is expected to continue in 2015. IDC says there were ~100 telecom M&A deals in 2014, worth $262B.

Mr. Munroe then presented six telco/MSO business models: mobile-first operator, integrated multi-national super carrier, broadband/content-first super carrier (mostly MSOs/cablecos), data center exchange/fiber-cloud interconnection, cloud communications provider, and cloud VPN. The Data Center Exchange/Cloud Connect/Cloud Exchange model is shown below:

IDC - Slide 16 - The Evolving Business Model
The Evolving Business Model Datacenter Exchange/Fiber Centric, slide courtesy of IDC

For Data Center Exchanges/ Fiber Centric players, Courtney named several companies: Level 3, Tata Communications and Allied Fiber (see ViodiTV interview with Allied Fiber’s Hunter Newby). For Cloud Exchange, he cited: Equinix, Interxion, and Zayo.


Sidebar: Cloud Exchanges and Cloud Connect Solutions

For several years we’ve heard about cloud exchanges for interconnecting multiple cloud providers, but haven’t seen much deployment yet. Hosting and co-location providers realize that space and power are becoming a commodity service, so they are beginning to offer higher-value cloud exchange or cloud connect services to provide direct connectivity for their customers to global carriers, ISPs, Internet exchanges, content and CDN players, storage vendors, and enterprise and ecosystem partners.

“Cloud Connect” solutions allow co-location providers to offer enterprise customers high-bandwidth, low-latency cloud connections that bypass the public Internet for superior throughput, reliability, security, and economics. Cloud Connect services combine the economics and service velocity of public clouds with the performance, reliability, and security of private connections.

Months ago, when I asked a Comcast Business speaker how his company would provide cloud access to business customers, he said “Cloud Exchanges” without any hesitation. We think this area deserves close watching in the months ahead.


Summing up with essential guidance for the telco/MSO space:

  • Large Scale Super Carriers will strive for additional scale
  • Cloud Exchanges will expand to Emerging Markets
  • Cloud Platform Providers will become major Players
  • SDN/NFV will create long-term investment opportunities
  • CSPs need help developing Channels (vertical solutions, IoT developers, VARs/OEMs/Systems Integrators, and OTT players)
  • Developers will become important CSP Partners (we think that will be especially true for OTT and IoT solutions)

It will be very interesting to see how all this plays out as the move to the 3rd platform accelerates in the years ahead.

Value Added Business Services Drive New Revenue

An image showing the interior of ITS Fiber's Data Center.
ITS Fiber Data Center

Perhaps data center was the wrong term to use in describing the event Viodi is producing in conjunction with ITS Fiber this week in Florida. It is almost impossible for an independent operator to compete with large data centers that are essentially real estate plays backed by entities with deep pockets. The point of our mini-conference is to explore how operators can use their strengths to create new value for customers and generate new revenue from non-regulated sources; specifically, from business customers.

Generation of revenue from non-regulated sources has been a priority for many independent operators over the past decade. With the threat of Title II regulation of broadband, finding new revenue and targeting investments to generate that revenue will be even more important. Offering services to business and institutional customers that add value to a broadband connection is one such new revenue source.

ITS Fiber has made the transformation from provider of telephone services in a limited rural market to one that offers a panoply of business services via an all-fiber, all underground network throughout a larger region. By leveraging its fiber assets and its relatively protected and secure location, ITS Fiber is able to offer a package of services that is hard to match by larger competitors that don’t have the same local presence as ITS Fiber.

Can’t make it to Florida for next week’s event? Watch and take part via webinar. Click here to learn more.

In today’s all-IP world, where floor-space needs are reduced from racks and racks to rack-units, the central office offers an opportunity to provide a secure place for various forms of servers.

Jeff Meyer of ITS Fiber will be speaking about how they worked with the IIS Group, LLC and Ingemel, S.A. Engineering Company to effectively redevelop their central office into a data center. More than that, however, they have figured out a way to modularize the approach, such that they can match investment in additional data space with actual customer demand.

This idea of pinpointing investment to match revenues is another of today’s telecom realities. A deep fiber network is a given. The key is how and where to deploy that last mile fiber to ensure investments are directed to the right locations. Calix’s Juan Vela will enlighten attendees about how things like analytics can help operators maximize their return on infrastructure investment.

Generic servers and software are starting to replace some of the purpose-built infrastructure. Steve Gleave of Metaswitch Networks will provide a high-level view of what operators need to know about the enabling infrastructure technology, such as SDN (Software Defined Networking) and NFV (Network Functions Virtualization) that will make operators more efficient, flexible and responsive with their offerings. Metaswitch Networks’ Chris Carabello will discuss how operators can differentiate their offerings by customizing apps specifically for a given customer.

We are also going to start a conversation with Dave Fridley of FARR Technologies on how these customers, although technically customers of business services, are not always in the office. That is, broadband and cloud services are making telecommuting and the distributed workforce a reality. Fridley, who lives this reality with his company, will explore ways operators should target this growing market. We will also be talking to the founder of Cloud PC Online about their offering, which effectively virtualizes the PC; this could be a great tool for work-from-home and BYOD (Bring Your Own Device) environments.

Implicit in the day’s discussions will be the need for rapid service introduction to keep up with and stay ahead of technology changes and competitive turmoil. Marketing plays an important role in ensuring a company stays in the forefront, so it’s fitting that Leo Staurulakis of JSI/JSI Capital Advisors and Denise Lechner of ITS Fiber will finish the day leading a discussion about techniques for the marketing efforts around business services.

So yeah, the data center is really just a small, but important, part of a larger discussion that we will start this week… there is so much more to discuss than what is mentioned above, but that will be for another day and place.

Security is Biggest Issue for U.S. Infrastructure, Cloud Computing, Open Networking, and the Internet of Things

The Security Threat is Real and Increasing!

“At around 8:15am the Monday before Thanksgiving, that black screen of death came on (all the office PCs). They shut down the entire network. We couldn’t really work the rest of the week, which seemed OK because it was a holiday week. But as Tuesday and Wednesday progressed, it became clear that this wasn’t a simple hack….It wasn’t until Monday or Tuesday of the following week when we realized the extent of it. That’s when we got word that it might take weeks to get (our PCs and Data Centers) back up.”

Those were the words of an employee of Sony Pictures Entertainment who spoke to Fortune magazine.

As is now common knowledge, Sony Pictures Entertainment revealed that it had been hacked by a group calling itself the Guardians of Peace, which the FBI claims was an agent of North Korea. Apparently, that repressive Communist country was using cyber-terrorism in an attempt to suppress free speech in the United States.

Few remember that between April and May 2011, Sony Computer Entertainment’s online gaming service, PlayStation Network, and its streaming media service (Qriocity), along with Sony Online Entertainment (the company’s in-house game developer and publisher), were hacked by LulzSec – a splinter group of the hacker collective known as Anonymous.

The latest Sony cyberattack comes after many years where China’s government has been accused of hacking into U.S. State Department, Postal Service, military contractors and government agency computer networks.

Iran has tried to disrupt American banks with denial-of-service attacks, and conducted a destructive attack on a Saudi oil company’s computers in 2012. For years, organized crime groups in Russia have used cyberespionage to commit financial fraud, while the Russian government does nothing to stop it.

Expect to hear of more government networks being infiltrated by rogue foreign states. A Georgia Institute of Technology report on Emerging Cyber Threats in 2015 states, “Low-intensity online nation-state conflicts become the rule, not the exception.”

It’s not only Sony and the U.S. government being targeted. Let’s not forget the cyber attacks and data breaches on Target, JP Morgan Chase, Home Depot, Apple, EBay, P.F. Chang (restaurants), Domino’s Pizza, Montana Health Department, Google, etc.

Reports, Maps, and Expert Opinions:

In its most recent State of the Internet Security report, Akamai states that there were a record-setting number of DDoS (Distributed Denial of Service) attacks on websites in Q3 2014. Total DDoS attacks rose 22% compared to Q2 2014, while average peak bandwidth rose 80% quarter-over-quarter and 389% from the same period a year earlier (Q3 2013). That means the largest companies, with the highest-bandwidth websites, are being targeted by hackers.

Kaspersky's real-time cyberthreat map.
Kaspersky’s real-time cyberthreat map.

This terrific interactive cyber map from anti-virus software maker Kaspersky depicts all the current cyber attacks occurring around the world in real time. It clearly shows the growing intensity of hack attacks as the year progresses.

“Security will never be the same again. It’s a losing battle,” said Martin Casado, PhD during his Cloud Innovation Summit keynote speech on March 27, 2014.  “Currently, cyber security spend is outpacing IT spend, and the only thing outpacing security spend is security losses,” he added.

A recent survey by the Ponemon Institute indicated the average cost of cyber crime for U.S. retail stores more than doubled from 2013 to an annual average of $8.6 million per company in 2014. The annual average cost per company of successful cyber attacks increased to $20.8 million in financial services, $14.5 million in the technology sector, and $12.7 million in communications industries.

Clearly, this isn’t an issue of investment, innovation, or priorities as most large industries are built around security. Mr. Casado believes there is a fundamental architectural issue: that we must trade off between context and isolation when implementing security controls.

Security Top Concern for Cloud Computing and Open Networking:

With today’s huge “cloud” resident data centers (Google, Amazon, Facebook, Microsoft, Yahoo, etc), there is a very large potential “attack surface” or “threat footprint” for malware and other cyber threats. That’s still the number one concern of users who are considering cloud computing.

In a Dec 17, 2014 article, KPMG says “Data Security Still Top Cloud Concern.” However, theft of intellectual property (IP) is the most significant challenge IT executives face in doing business in the cloud. Isn’t theft of IP a security issue too?

The mega trend of replacing hardware functions with software (known as open networking, software defined networking, network virtualization, and network function virtualization) greatly compounds the security problem by exponentially expanding the threat attack surface.

For example, if the (centralized) SDN Controller goes down because of a DDoS attack, the entire network goes down. If multiple NFV “virtual appliances” are implemented on a compute server that has been compromised, all those functions stop working. Similarly, if a server running network virtualization (or tunneling in the overlay SDN model) is attacked, that network goes down too.

U.S. Infrastructure May Be Targeted Next:

Information security experts say the greatest danger is that foreign governments and cyber terrorists will go after the nation’s critical infrastructure — airports, water treatment plants, power companies, oil refineries and chemical plants.

Cyber terrorists could turn off the lights for millions of Americans by attacking power grids, shut down the nation’s airports by seizing control of air-traffic control systems or blow up an oil pipeline from thousands of miles away, experts say.

“This is a much bigger threat over time than losing some credit cards to cyber-criminals,” said Derek Harp, lead instructor at the recent training conference run by SANS Institute, which provides cyber security education and certification for people who run industrial control systems.

Maryland Rep. Dutch Ruppersberger, the senior Democrat on the House Intelligence Committee, said cyber attacks will be “the warfare of the future.”

“Just think what could happen down the future if North Korea wanted to knock out a grid system, an energy system, knock out air- traffic control,” he said in a December 22nd interview on CNN.

What Will U.S. Government Do in Response?

At a news conference last week, President Obama urged Congress to try again next year to pass

“strong cybersecurity laws that allow for information-sharing. … Because if we don’t put in place the kind of architecture that can prevent these attacks from taking place, this is not just going to be affecting movies, this is going to be affecting our entire economy.”

A front-page article in the December 26th Wall Street Journal reported “that (U.S. government) officials have held a series of briefings on the issue in 13 cities across the country advising companies not to connect industrial control systems to the Internet.” The article does not state or imply what those systems should be connected to.

Finally, we infer that the highly touted Internet of Things will be subject to the same cloud security issues as industrial control systems. I shudder at the thought.


AT&T's "SDN-WAN" as the Network to Access & Deliver Cloud Services

Introduction:

For several years, we’ve wondered why there were so many alternative WANs used and proposed to access and deliver cloud computing and storage services (IaaS, PaaS, SaaS, etc.) for public, private, and hybrid clouds. The list includes: the public Internet (best effort), IP-MPLS VPN, other types of IP VPNs, Carrier Ethernet for Private Cloud (MEF spec), and dedicated private lines to a Cloud Service Provider (CSP) data center/platform/point of presence.

AT&T is attempting to position its “SDN WAN”-enhanced IP-MPLS VPN as the unified WAN solution for cloud services provided by its partners. At IT Roadmap in San José, CA on Sept 17, 2014, Randall Davis announced that AT&T is partnering with several CSPs to use its enhanced IP-MPLS VPN WAN to enable end users to access a variety of cloud services. The impressive list of CSPs includes: Microsoft (Windows Azure), HP, IBM, Salesforce.com, Box, and CSC. That bestows credibility and confidence in AT&T’s cloud networking approach.

Network Enabled Cloud Solutions via AT&T NetBond:

Mr. Davis stated that AT&T spends ~$20B per year on CAPEX/OPEX to maintain and improve its wireless and wire-line networks. Instead of discrete network functions and equipment associated with individual services running on disparate subnetworks, AT&T’s goal is to consolidate all services to be delivered to customers onto a software based, programmable, cloud like “SDN WAN” which uses their own intellectual property (see below).

AT&T's vision of a network enabled cloud. Image courtesy of AT&T.
AT&T’s vision of a network enabled cloud. Image courtesy of AT&T.

“The User Defined Network Cloud is AT&T’s vision for the network of the future,” Davis stated. “Our goal is to provide a set of services delivered from a single cloud-like network. AT&T is tapping into the latest technologies, open source projects and open network principles to make that happen,” he said.

“It’s a fundamentally new way to build a smart ‘cloud-like network’ that addresses the many concerns of end users about the network being the bottleneck in the delivery of cloud services.” Indeed, barriers to moving enterprise workloads to the cloud often involve the WAN. For example, how can the network address cloud integration complexity, a warehouse of telecom circuits, security, reliability/availability, and compliance issues?

AT&T’s “network enabled cloud,” called NetBond, allows customers to extend their existing MPLS Virtual Private Network (VPN) to a CSP’s platform for the delivery of business/enterprise applications through fast and highly secure connectivity. AT&T says it is driving the network-enabled ecosystem and working with leading CSPs such as Microsoft, Salesforce.com, HP, IBM, CSC and Equinix.

Positioned between the enterprise customer premises and the CSP’s platform/point of presence, AT&T’s NetBond provides a highly flexible and simple way for AT&T customers to utilize their enterprise VPNs to connect to a cloud computing or IT service environment in AT&T’s cloud partner ecosystem (which is growing). This solution bypasses the public Internet entirely, thereby providing secure and seamless access to CSPs’ applications and data storage.

AT&T’s NetBond enables the end customer to integrate cloud services within its enterprise-wide IP-MPLS VPN (from AT&T, of course). It does so by extending the MPLS VPN to the CSP’s compute/storage platform, thereby isolating traffic from other customers’ traffic and creating a private network connection. As a result, there’s no need for a separate IP VPN to/from the CSP.

The solution is designed around the following key areas:

  1. Flexibility. Network bandwidth is optimized for your workloads and fully scalable
  2. Network Security and isolation. Intelligent routing directs traffic to logically separated customer environments on shared physical infrastructure.
  3. Availability and performance. The solution is built on a stable, robust and scalable technology platform resulting in up to 50% lower latency and up to 3X availability.
  4. Automation and control. The solution uses automation and a self-service portal to activate service changes in hours versus weeks.

NetBond permits both the network and cloud infrastructure to scale or contract in tandem and on-demand, rapidly accommodating workload changes. It seems to be well suited for customers who want to avoid exposure to the public Internet and risk of DDoS attacks, as well as, have a highly available and high-performance connection to their cloud resources. Davis said that “NetBond provides a scalable, highly secure, high performance, and integrated WAN solution” for access to the cloud.

Other benefits were identified:

  • Private IP address space avoids DDoS attacks
  • API controlled pre-integrated survivable network infrastructure
  • Elasticity with dynamic traffic bursting (above some pre-defined threshold)
  • AT&T sells baseline units of traffic capacity with most bursting covered
  • Bursting overages at the “95th percentile” incur an extra charge (see the sketch after this list)
  • Any-to-any, instant-on connectivity (zero provisioning time to reach a CSP that’s partnered with AT&T)
  • Improved legacy application performance and increased throughput
  • Privacy from separation of data and control planes
  • Better availability due to simplicity of operations
  • Bursting capability eliminates gaps and gluts
  • Cost model aligns with cloud usage
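
For readers unfamiliar with it, “95th percentile” billing conventionally works like this: utilization is sampled at fixed intervals (typically every 5 minutes), the top 5% of samples are discarded, and the bill is set by the highest remaining sample, so short bursts are forgiven. A minimal sketch in Python (the sample values are invented):

    def ninety_fifth_percentile(samples_mbps):
        """Billable rate: the highest sample left after the top 5% are discarded."""
        ordered = sorted(samples_mbps)
        drop = int(len(ordered) * 0.05)          # number of peak samples to forgive
        return ordered[len(ordered) - drop - 1]

    # Twenty 5-minute samples (in Mbps) with one burst to 500 Mbps:
    samples = [40, 42, 45, 47, 50, 52, 55, 57, 60, 62,
               64, 66, 68, 70, 72, 74, 76, 78, 80, 500]
    print(ninety_fifth_percentile(samples))      # -> 80; the 500 Mbps spike is forgiven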

AT&T NetBond Backgrounder:

AT&T’s website states: NetBond provides benefits of a private network with the flexibility of cloud. With NetBond, security, performance and control are no longer barriers to moving enterprise applications and data to the cloud.

“NetBond uses patented technology that uses Software Defined Network (SDN) capabilities, providing traffic routing flexibility and integration of VPN to cloud service providers. With AT&T NetBond, customers can expect up to 50% lower latency and up to 3x availability. In addition, network connectivity can be scaled up or down with the cloud resources resulting in bursting of up to 10 times your contracted commitment. From a security perspective, AT&T NetBond isolates traffic from the Internet and from other cloud traffic reducing exposure to risks and attacks such as DDoS.”

“AT&T VPN customers can create highly-secure, private and reliable connectivity to cloud services in minutes without additional infrastructure investments and long-term contract commitments. We also enable end to end integration with cloud service providers resulting in a common customer experience regardless of the cloud platform.”

“Because it can reduce over-provisioning, AT&T NetBond can result in savings of as much as 60% on networking costs compared to internet based alternatives. Also, customers experience true flexibility in that they only pay for what they have ordered and are able to change their billing plan at any time to reflect usage.”

For more on the technology used for AT&T’s IP MPLS VPN see this white paper:

What’s the Control Mechanism for NetBond?

AT&T uses its own version of SDN WAN with “APIs to expose control mechanisms used to order (provision) and manipulate network services.” AT&T’s SDN WAN is based on proprietary intellectual property the company refers to as “Intelligent Route Service Control Processor (IRSCP).” That technology is used to dynamically change the routing (end to end paths) in the network to respond to operational changes, new customers, more or less traffic, and to automatically recover from failed network nodes or links. Davis said that AT&T’s suppliers are using the company’s version of SDN WAN in “novel ways.” AT&T is also using open source software whenever possible, he said (we assume that to mean in their suppliers’ equipment and possibly in their network management/OSS software).
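
AT&T did not show the API itself. Purely as a hypothetical illustration of what “ordering network services through an API” might look like, here is a Python sketch; the endpoint, field names, and payload are invented for this article and are not AT&T’s actual interface:

    # HYPOTHETICAL illustration only – the endpoint and payload are invented,
    # not AT&T's actual NetBond API.
    import json
    import urllib.request

    order = {
        "vpn_id": "example-vpn-123",      # the customer's existing MPLS VPN
        "cloud_provider": "example-csp",  # the partner CSP to bond with
        "committed_mbps": 100,            # baseline capacity; bursting runs above it
    }

    request = urllib.request.Request(
        "https://api.example.com/v1/cloud-connections",  # placeholder URL
        data=json.dumps(order).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Uncomment to submit against a real endpoint:
    # with urllib.request.urlopen(request) as response:
    #     print(response.status, response.read())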

A quick web search indicates that AT&T has at least one patent on IRSCP. In 2006, AT&T Labs researchers published a paper titled, “Dynamic Connectivity Management with an Intelligent Route Service Control Point” in the Proceedings of the 2006 SIGCOMM Workshop on Internet Network Management.

Mobile Integration into Cloud Applications is Needed:

With more and more mobile apps on smartphones and tablets accessing cloud-based applications, it’s essential to provide a wireless network that solves both security and performance problems. Randall hinted that AT&T’s NetBond may be extended to include wireless access in the near future. The following benefits of doing so were enumerated:

  • Faster time to market for new mobile apps
  • Access to easier solutions which can be quickly configured (no explanation provided)
  • Simpler compliance
  • Improved performance
  • Better security

Author’s Notes:

  1. Mr. Davis referred to “Project Gamma” as an early example of AT&T’s Domain 2.0 architecture. It was said to be an example of “User Defined Network Cloud (UDNC)” in that it virtualizes Ethernet connectivity and routing to automate services delivered to AT&T customers. [No reference was given or could be found for Project Gamma.]
  2. On Sept 17, 2014 (the date of Mr. Davis’ IT Roadmap-SJ presentation), Light Reading reported that AT&T will bring its User-Defined Network to Austin businesses by the end of this year.

“This is really focused on wireline services, specifically, we’re starting with Ethernet… I would expect that we’ll look at wireless too,” says Josh Goodell, VP of Network on Demand at AT&T.

Businesses with the Network on Demand Ethernet service will be able to change some network services and modify upload and download speeds via a self-service portal. This will mean that services will be changed almost instantaneously, “rather than the previous method of modifying, installing or replacing hardware to make network changes,” AT&T notes.

Addendum:

On Sept 18, 2014, AT&T and IBM announced a strategic partnership: “AT&T Teams with IBM Cloud to Extend Highly Secure Private Network to Clients.”

AT&T NetBond services will be extended to IBM’s SoftLayer platform for stronger security and performance. This extension of the IBM and AT&T alliance will allow businesses to easily create hybrid-computing solutions. AT&T Virtual Private Network (VPN) customers can use AT&T NetBond to connect their IT infrastructure to IBM’s SoftLayer private network and cloud services. The service allows customers to benefit from highly secure connections with high reliability and performance as an alternative to relying on public Internet access.

“AT&T NetBond gives customers a broader range of options as they explore how to best leverage a hybrid cloud,” said Jim Comfort, general manager of IBM Cloud Services. “Customers can easily move workloads to and from SoftLayer as if it were part of their local area network. This added flexibility helps optimize workload performance while allowing customers to scale IT resources in a way that makes sense.”

“Businesses look to AT&T and IBM to deliver best in class solutions to meet their most demanding needs— especially when it comes to cloud,” said Jon Summers, senior vice president growth platforms, AT&T Business Solutions. “Together, we’re making the network as flexible as the cloud and giving enterprises confidence they can migrate their business systems to the cloud and still meet their security, scalability and performance requirements.”

End NOTE:  We will update this article if and when we receive a figure from AT&T that illustrates NetBond.  Stay tuned!

 

2014 Hot Interconnects Semiconductor Session Highlights & Takeaways - Part I.

Introduction:

With the Software Defined Networking (SDN), Storage and Data Center movements firmly entrenched, one might believe there’s not much opportunity for innovation in dedicated hardware implemented in silicon. Several sessions at the 2014 Hot Interconnects conference, especially one from ARM Ltd., indicated that was not the case at all.

With the strong push for open networks, chips have to be much more flexible and agile, as well as more powerful, fast and functionally dense. Of course, there are well-known players for specific types of silicon. For example: Broadcom for switch/routers; ARM for CPU cores (also Intel and MIPS/Imagination Technologies); many vendors for Systems on a Chip (SoCs), which include one or more CPU cores, mostly from ARM (Qualcomm, Nvidia, Freescale, etc.); Network Processors (Cavium, LSI-Avago/Intel, PMC-Sierra, EZchip, Netronome, Marvell, etc.); and bus interconnect fabrics (Arteris, Mellanox, PLX/Avago, etc.).

What’s not known is how these types of components, especially SoCs, will evolve to support open networking and software defined networking in telecom equipment (i.e., SDN/NFV). Some suggestions were made during presentations and a panel session at this year’s excellent Hot Interconnects conference.

We summarize three invited Hot Interconnects presentations related to network silicon in this article. Our follow-on Part II article will cover network hardware for SDN/NFV based on an Infonetics presentation and service provider market survey.

  1. Data & Control Plane Interconnect Solutions for SDN & NFV Networks, by Raghu Kondapalli, Director of Strategic Planning at LSI/Avago (Invited Talk)

Open networking, such as SDN (Software Defined Networking) and NFV (Network Function Virtualization), provides software control of many network functions. NFV enables virtualization of entire classes of network element functions such that they become modular building blocks that may be connected, or chained, together to create a variety of communication services.

Software defined and functionally disaggregated network elements rely heavily on deterministic and secure data and control plane communication within and across the network elements. In these environments, the scalability, reliability and performance of the whole network rely heavily on the deterministic behavior of this interconnect. Increasing network agility and lower equipment prices are causing severe disruption in the networking industry.

A key SDN/NFV implementation issue is how to disaggregate network functions in a given network element (equipment type). With such functions modularized, they could be implemented in different types of equipment along with dedicated functions (e.g., PHYs to connect to wireline or wireless networks). The equipment designer needs to: disaggregate, virtualize, interconnect, orchestrate and manage such network functions.

“Functional coordination and control plane acceleration are the keys to successful SDN deployments,” he said. Not coincidentally, the LSI/Avago Axxia multicore communication processor family (using an ARM CPU core) is being positioned for SDN and NFV acceleration, according to the company’s website. Other important points made by Raghu:

  • Scale poses many challenges for state management and traffic engineering
  • Traffic Management and Load Balancing are important functions
  • SDN/NFV backbone network components are needed
  • Disaggregated architectures will prevail.
  • Circuit board interconnection (backplane) should consider the traditional passive backplane vs. an active switch fabric.

The Axxia 5516 16-core communications processor was suggested as the SoC to use for an SDN/NFV backbone network interface. Functions identified included: Ethernet switching, protocol pre-processing, packet classification (QoS), traffic rate shaping, encryption, security, Precision Time Protocol (IEEE 1588) to synchronize distributed clocks, etc.

Axxia’s multi-core SoCs were said to contain various programmable function accelerators to offer a scalable data and control plane solution.
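
The talk did not detail how such accelerators implement traffic rate shaping, but the classic building block behind most shapers, whether in software or in silicon, is the token bucket. A minimal sketch with illustrative parameters:

```python
import time

class TokenBucket:
    """Classic token-bucket shaper: tokens accrue at rate_bps; a packet may
    be sent only if the bucket holds enough tokens to cover its size."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps          # fill rate in bits/second
        self.capacity = burst_bits    # bucket depth = max burst size in bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, packet_bits: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True   # conforming packet: transmit
        return False      # non-conforming: queue or drop, per policy

# Shape to 10 Mbit/s with a 15,000-bit (one 1500-byte frame) burst allowance.
shaper = TokenBucket(rate_bps=10_000_000, burst_bits=15_000)
print(shaper.allow(12_000))  # True: the initial bucket covers one full frame
```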

Note: Avago recently acquired semiconductor companies LSI Corp. and PLX Technology, but has now sold its Axxia Networking Business (originally from LSI, which had acquired Agere in 2007 for $4 billion) to Intel for only $650 million in cash. Agere Systems (formerly AT&T Microelectronics, at one time the largest captive semiconductor maker in the U.S.) had a market capitalization of about $48 billion when it was spun off from Lucent Technologies in Dec 2000.

  2. Applicability of OpenFlow-based Connectivity in NFV-Enabled Networks, by Srinivasa Addepalli, Fellow and Chief Software Architect, Freescale (Invited Talk)

Mr. Addepalli’s presentation addressed the performance challenges in VMMs (Virtual Machine Monitors) and the opportunities to offload VMM packet processing using SoCs like those from Freescale (another ARM core-based SoC). The VMM layer enables virtualization of networking hardware and exposes each virtual hardware element to VMs.

“Virtualization of network elements reduces operation and capital expenses and provides the ability for operators to offer new network services faster and to scale those services based on demand. Throughput, connection rate, low latency and low jitter are few important challenges in virtualization world. If not designed well, processing power requirements go up, thereby reducing the cost benefits,” according to Addepalli.

He positioned Open Flow as a communication protocol between control/offload layers, rather than the ONF’s API/protocol between the control and data planes (residing in the same or different equipment, respectively).  A new role for Open Flow in VMM and vNF (Virtual Network Function) offloads was described and illustrated.
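
To make the match-action model concrete, here is a toy flow table lookup of the kind OpenFlow defines. The field names loosely follow OpenFlow match fields; the table contents are invented for illustration, not taken from the talk:

```python
# Toy illustration of the OpenFlow match-action model discussed above.
FLOW_TABLE = [
    # (priority, match dict, action) - highest priority wins, as in OpenFlow
    (200, {"ip_dst": "10.0.0.5", "tcp_dst": 443}, "forward:vm3"),
    (100, {"ip_dst": "10.0.0.5"},                 "forward:vm_default"),
    (0,   {},                                     "send_to_controller"),  # table-miss
]

def lookup(packet: dict) -> str:
    """Return the action of the highest-priority entry whose fields all match."""
    for priority, match, action in sorted(FLOW_TABLE, key=lambda e: e[0], reverse=True):
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"

print(lookup({"ip_dst": "10.0.0.5", "tcp_dst": 443}))  # forward:vm3
print(lookup({"ip_dst": "192.168.1.9"}))               # send_to_controller
```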

The applicability of OpenFlow to NFV¹ faces two challenges, according to Mr. Addepalli:

  1. VMM networking
  2. Virtual network data path to VMs

Note 1. The ETSI NFV Industry Specification Group (ISG) is not considering the use of ONF’s OpenFlow, or any other protocol, for NFV at this time. Its work scope includes reference architectures and functional requirements, but not protocol/interface specifications. The ETSI NFV ISG will reach the end of Phase 1 by December 2014, with the publication of the remaining sixteen deliverables.

“To be successful, NFV must address performance challenges, which can best be achieved with silicon solutions,” Srinivasa concluded. [The problem with that statement is that the protocols/interfaces to be used for fully standardized NFV have not been specified by ETSI or any standards body. Hence, no one knows the exact combination of NFV functions that have to perform well.]

  3. The Impact of ARM in Cloud and Networking Infrastructure, by Bob Monkman, Networking Segment Marketing Manager at ARM Ltd.

Bob revealed that ARM is innovating way beyond the CPU core it’s been licensing for years. There are hardware accelerators, a cache coherent network and various types of network interconnects that have been combined into a single silicon block, which is shown in the figure below:

Image courtesy of ARM - innovating beyond the core.

Bob said something I thought was quite profound, which dispels the notion that ARM is just a producer of low-power CPU core cells: “It’s not just about a low power processor – it’s what you put around it.” As a result, ARM cores are being included in SoC vendor silicon for both networking and storage components. Those SoC companies, including LSI/Avago (Axxia) and Freescale (see above), can leverage their existing IP by adding their own cell designs for specialized networking hardware functions (identified at the end of this article in the Addendum).

Bob noted that the ARM ecosystem is conducive to the disruption now being experienced in the IT industry, with software control of so many types of equipment. The evolving network infrastructure - SDN, NFV and other open networking - is all about reducing total cost of ownership and enabling new services with smart and adaptable building blocks. That’s depicted in the following illustration:

Evolving infrastructure is reducing costs and enabling new services.
Image courtesy of ARM.

Bob stated that one SoC size does not fit all. For example, one type of SoC can contain: a high-performance CPU, power management, premises networking, and storage & I/O building blocks. One for SDN/NFV might instead include: a high-performance CPU, power management, I/O including wide area networking interfaces, and specialized hardware networking functions.

Monkman articulated very well what most already know: networking and server equipment are often being combined in a single box (they’re “colliding,” he said). [In many cases, compute servers are running network virtualization (i.e., VMware), acceleration, packet pre-processing, and/or control plane software (SDN model).] Flexible intelligence is required on an end-to-end basis for this to work out well. The ARM business model was said to enable innovation and differentiation, especially since the ARM CPU core has reached the 64-bit “inflection point.”

ARM is working closely with the Linaro Networking and Enterprise Groups. Linaro is a non-profit industry group creating open source software that runs on ARM CPU cores.  Member companies fund Linaro and provide half of its engineering resources as assignees who work full time on Linaro projects. These assignees combined with over 100 of Linaro’s own engineers create a team of over 200 software developers.

Bob said that Linaro is creating optimized, open-source platform software for scalable infrastructure (server, network & storage). It coordinates and multiplies members’ efforts, while accelerating product time to market (TTM). Linaro open source software enables ARM partners (licensees of ARM cores) to focus on innovation and differentiated value-add functionality in their SoC offerings.

Author’s Note:  The Linaro Networking Group (LNG) is an autonomous segment focused group that is responsible for engineering development in the networking space. The current mix of LNG engineering activities includes:

  • Virtualization support with considerations for real-time performance, I/O optimization, robustness and heterogeneous operating environments on multi-core SoCs.
  • Real-time operations and the Linux kernel optimizations for the control and data plane
  • Packet processing optimizations that maximize performance and minimize latency in data flows through the network.
  • Dealing with legacy software and mixed-endian issues prevalent in the networking space
  • Power Management
  • Data Plane Programming API

For more information: https://wiki.linaro.org/LNG


OpenDataPlane (ODP) http://www.opendataplane.org/ was described by Bob as a “truly cross-platform, truly open-source and open contribution interface.” From the ODP website:

ODP embraces and extends existing proprietary, optimized vendor-specific hardware blocks and software libraries to provide inter-operability with minimal overhead. Initially defined by members of the Linaro Networking Group (LNG), this project is open to contributions from all individuals and companies who share an interest in promoting a standard set of APIs to be used across the full range of network processor architectures available.

Author’s Note: There’s a similar project from Intel called DPDK, the Data Plane Development Kit, which an audience member referenced during Q&A. We wonder whether those APIs are viable alternatives to, or can be used in conjunction with, the ONF’s OpenFlow API.
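
Neither framework’s API is reproduced here, but both ODP and DPDK promote the same poll-mode pattern: a dedicated core pulls packets from NIC receive queues in a tight loop, in bursts, with no interrupts or system calls in the hot path. A conceptual sketch of that pattern (the queue class and all names are stand-ins, not actual ODP or DPDK calls):

```python
from collections import deque

BURST_SIZE = 32  # packets pulled per poll; real frameworks use similar bursts

class StubQueue:
    """In-memory stand-in for a NIC hardware queue (illustration only)."""
    def __init__(self, packets=()):
        self._q = deque(packets)
    def poll(self, max_packets):
        return [self._q.popleft() for _ in range(min(max_packets, len(self._q)))]
    def enqueue(self, pkt):
        self._q.append(pkt)
    def __len__(self):
        return len(self._q)

def run_worker(rx, tx, classify, polls=1000):
    """Busy-poll RX in bursts; real workers loop forever on a dedicated core."""
    for _ in range(polls):
        for pkt in rx.poll(BURST_SIZE):   # non-blocking; may return nothing
            if classify(pkt) != "drop":
                tx.enqueue(pkt)           # no interrupts or syscalls in the hot path

rx = StubQueue({"dst": d} for d in ("10.0.0.1", "bad", "10.0.0.2"))
tx = StubQueue()
run_worker(rx, tx, classify=lambda p: "drop" if p["dst"] == "bad" else "fwd")
print(len(tx), "packets forwarded")  # -> 2
```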


Next Generation Virtual Network Software Platforms, along with network operator benefits, are illustrated in the following graphic:

An image depicting the Next-Gen virtualized network software platforms.
Image courtesy of ARM.

Bob Monkman’s Summary:

  • Driven by total cost of ownership, the data center workload shift is leading to  more optimized and diverse silicon solutions
  • Network infrastructure is also well suited for the same highly integrated, optimized and scalable solutions ARM’s SoC partners understand and are positioned to deliver
  • Collaborative business model supports “one size does not fit all approach,” rapid pace of innovation, choice and diversity
  • Software ecosystem (e.g. Linaro open source) is developing quickly to support target markets
  • ARM ecosystem is leveraging standards and open source software to accelerate deployment readiness

Addendum:

In a post-conference email exchange, I suggested several specific networking hardware functions that might be implemented in a SoC (with one or more ARM CPU cores). Those include: encryption, packet classification, deep packet inspection, security functions, intra-chip or inter-card interface/fabric, fault & performance monitoring, and error counters.

Bob replied: “Yes, security acceleration such as SSL operations; counters of various sorts -yes; less common on the fault notification and performance monitoring. A recent example is found in the Mingoa acquisition, see: http://www.microsemi.com/company/acquisitions ”

…………………………………………………………………….



End NOTE:  Stay tuned for Part II which will cover Infonetics’ Michael Howard’s presentation on Hardware and market trends for SDN/NFV.

2014 Hot Interconnects Highlight: Achieving Scale & Programmability in Google's Software Defined Data Center WAN

Introduction:

Amin Vahdat, PhD, Distinguished Engineer and Lead Network Architect at Google, delivered the opening keynote at 2014 Hot Interconnects, held August 26-27 in Mountain View, CA. His talk presented an overview of the design and architectural requirements to bring Google’s shared infrastructure services to external customers with the Google Cloud Platform.

The wide area network underpins storage, distributed computing, and security in the Cloud, which is appealing for a variety of reasons:

  • On demand access to compute servers and storage
  • Easier operational model than premises based networks
  • Much greater up-time, i.e., “five 9s” reliability; fast failure recovery without human intervention, etc.
  • State-of-the-art infrastructure services, e.g., DDoS prevention, load balancing, storage, complex event & stream processing, specialized data aggregation, etc.
  • Different programming models unavailable elsewhere, e.g. low latency, massive IOPS, etc
  • New capabilities; not just delivering old/legacy applications cheaper

Andromeda - more than a galaxy in space:

Andromeda, Google’s code name for its managed virtual network infrastructure, is the enabler of Google’s cloud platform, which provides many services to simultaneous end users. Andromeda provides Google’s customers/end users with robust performance, low latency and security services that are as good as or better than private, premises-based networks. Google has long focused on shared infrastructure among multiple internal customers and services, and on delivering scalable, highly efficient services to a global population.

An image of Google's Andromeda Controller diagram.
Image courtesy of Google

“Google’s (network) infrastructure services run on a shared network,” Vahdat said. “They provide the illusion of individual customers/end users running their own network, with high-speed interconnections, their own IP address space and Virtual Machines (VMs),” he added. [Google has been running shared infrastructure since at least 2002, and it has been the basis for many commonly used scalable open-source technologies.]

From Google’s blog:

“Andromeda’s goal is to expose the raw performance of the underlying network while simultaneously exposing network function virtualization (NFV). We expose the same in-network processing that enables our internal services to scale while remaining extensible and isolated to end users. This functionality includes distributed denial of service (DDoS) protection, transparent service load balancing, access control lists, and firewalls. We do this all while improving performance, with more enhancements coming. Hence, Andromeda itself is not a Cloud Platform networking product; rather, it is the basis for delivering Cloud Platform networking services with high performance, availability, isolation, and security.”

Google uses its own versions of SDN and NFV to orchestrate provisioning, high availability, and to meet or exceed application performance requirements for Andromeda. The technology must be distributed throughout the network, which is only as strong as its weakest link, according to Amin.  “SDN” (Software Defined Networking) is the underlying mechanism for Andromeda. “It controls the entire hardware/software stack, QoS, latency, fault tolerance, etc.”

“SDN’s” fundamental premise is the separation of the control plane from the data plane; Google and everyone else agree on that, but not much else! Amin said the role of “SDN” is overall coordination and orchestration of network functions. It permits independent evolution of the control and data planes. Functions identified under SDN supervision were the following:

  • High performance IT and network elements: NICs, packet processors, fabric switches, top of rack switches, software, storage, etc.
  • Audit correctness (of all network and compute functions performed)
  • Provisioning with end to end QoS and SLA’s
  • Ensuring high availability (and reliability)

“SDN” in Andromeda–Observations and Explanations:

“A logically centralized hierarchical control plane beats peer-to-peer (control plane) every time,” Amin said. Packet/frame forwarding in the data plane can run at network link speed, while the control plane can be implemented in commodity hardware (servers or bare metal switches), with scaling as needed. The control plane requires 1% of the overhead of the entire network, he added.

As expected, Vahdat did not reveal any of the APIs/protocols/interface specs that Google uses for its version of “SDN,” in particular the API between the control and data planes (Google has never endorsed the ONF-specified OpenFlow v1.3). Also, he didn’t detail how the logically centralized, but likely geographically distributed, control plane works.

Amin said that Google was making “extensive use of NFV (Network Function Virtualization) to virtualize SDN.” Andromeda NFV functions, illustrated in the above block diagram, include: Load balancing, DoS, ACLs, and VPN. New challenges for NFV include: fault isolation, security, DoS, virtual IP networks, mapping external services into name spaces and balanced virtual systems.

Managing the Andromeda infrastructure requires new tools and skills, Vahdat noted. “It turns out that running a hundred or a thousand servers is a very difficult operation. You can’t hire people out of college who know how to operate a hundred or a thousand servers,” Amin said. Tools are often designed for homogeneous environments and individual systems. Human reaction time is too slow to deliver “five nines” of uptime, maintenance outages are unacceptable, and the network becomes a bottleneck and source of outages.

Power and cooling are the major costs of a global data center and networking infrastructure like Google’s. “That’s true of even your laptop at home if you’re running it 24/7. At Google’s mammoth scale, that’s very apparent,” Vahdat said.

Applications require real-time high performance and low-latency communications to virtual machines. Google delivers those capabilities via its own Content Delivery Network (CDN).  Google uses the term “cluster networking” to describe huge switch/routers which are purpose-built out of cost efficient building blocks.

In addition to high performance and low latency, users may also require service chaining and load-balancing, along with extensibility (the capability to increase or reduce the number of servers available to applications as demand requires). Security is also a huge requirement. “Large companies are constantly under attack. It’s not a question of whether you’re under attack but how big is the attack,” Vahdat said.

[“Security will never be the same again. It’s a losing battle,” said Martin Casado, PhD during his Cloud Innovation Summit keynote on March 27, 2014]

Google has a global infrastructure, with data centers and points of presence worldwide to provide low-latency access to services locally, rather than requiring customers to access a single point of presence. Google’s software defined WAN (backbone private network) was one of the first networks to use “SDN.” In operation for almost three years, it is larger and growing faster than Google’s customer-facing Internet connectivity; the traffic it carries between Google’s cloud resident data centers is comparable to the data traffic within a premises-based data center, according to Vahdat.

Note 1.   Please refer to this article: Google’s largest internal network interconnects its Data Centers using Software Defined Network (SDN) in the WAN

“SDN” opportunities and challenges include:

  • Logically centralized network management- a shift from fully decentralized, box to box communications
  • High performance and reliable distributed control
  • Eliminate one-off protocols (not explained)
  • Definition of an API that will deliver NFV as a service

Cloud Caveats:

While Vahdat believes in the potential and power of cloud computing, he says that moving to the cloud (from premises based data centers) still poses all the challenges of running an IT infrastructure. “Most cloud customers, if you poll them, say the operational overhead of running on the cloud is as hard or harder today than running on your own infrastructure,” Vahdat said.

“In the future, cloud computing will require high bandwidth, low latency pipes.” Amin cited a “law” this author had never heard of: “1M bit/sec of I/O is required for every 1MHz of CPU processing (computations).” In addition, the cloud must provide rapid network provisioning and very high availability, he added.
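
That ratio is the balanced-system guideline attributed to Gene Amdahl (see the Addendum below). Applied naively to a server, the arithmetic looks like this; the server specs are illustrative, not from the talk:

```python
# Back-of-envelope check of the balanced-system rule of thumb cited above:
# ~1 Mbit/s of I/O per 1 MHz of CPU. Server specs below are illustrative only.

MBPS_PER_MHZ = 1.0                 # the cited ratio

cores = 16                         # hypothetical server
clock_mhz = 2500                   # 2.5 GHz per core
total_mhz = cores * clock_mhz

required_gbps = total_mhz * MBPS_PER_MHZ / 1000
print(f"Balanced I/O for {cores} x {clock_mhz/1000:.1f} GHz: {required_gbps:.0f} Gbit/s")
# -> 40 Gbit/s, far above the 1-10 Gbit/s NICs common in 2014, which is
#    Vahdat's point that distributed computations underprovision I/O.
```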

Network switch silicon and NPUs should focus on:

  • Hardware/software support for more efficient read/write of switch state
  • Increasing port density
  • Higher bandwidth per chip
  • NPUs must provide much greater than twice the performance, for the same functionality, as general-purpose microprocessors and switch silicon.

Note: Two case studies were presented which are beyond the scope of this article to review. Please refer to a related article on 2014 Hot Interconnects: Death of the God Box.

Vahdat’s Summary:

Google is leveraging its decade-plus experience in delivering high performance shared IT infrastructure in its Andromeda network. Logically centralized “SDN” is used to control and orchestrate all network and computing elements, including: VMs, virtual (soft) switches, NICs, switch fabrics, packet processors, cluster routers, etc. Elements of NFV are also being used, with more expected in the future.

References:

http://googlecloudplatform.blogspot.com/2014/04/enter-andromeda-zone-google-cloud-platforms-latest-networking-stack.html

https://www.youtube.com/watch?v=wpin6GKpDm8

http://gigaom.com/2014/04/02/google-launches-andromeda-a-software-defined-network-underlying-its-cloud/

http://virtualizationreview.com/articles/2014/04/03/google-andromeda.aspx

http://community.comsoc.org/blogs/alanweissberger/martin-casado-how-hypervisor-can-become-horizontal-security-layer-data-center

http://www.convergedigest.com/2014/03/ons-2014-google-keynote-software.html

https://www.youtube.com/watch?v=n4gOZrUwWmc

http://cseweb.ucsd.edu/~vahdat/

Addendum:  Amdahl’s Law

In a post conference email to this author, Amin wrote:

Here are a couple of references for Amdahl’s “law” on balanced system design:

Both essentially argue that, for modern parallel computation, we need a fair amount of network I/O to keep the CPU busy (rather than stalled waiting for I/O to complete).
Most distributed computations today substantially underprovision I/O, largely because of significant inefficiency in the network software stack (RPC, TCP, IP, etc.) as well as the expense/complexity of building high-performance network interconnects. Cloud infrastructure has the potential to deliver balanced system infrastructure even for large-scale distributed computation.

Thanks, Amin

NTT Com Leads all Network Providers in Deployment of SDN/OpenFlow; NFV Coming Soon

Introduction:

Yukio Ito*, Senior Vice President for Service Infrastructure at NTT Communications

While AT&T has gotten a lot of press for its announced plans to use Software Defined Networking (SDN) to revamp its core network, another large global carrier has been quietly deploying SDN/OpenFlow for almost two years and soon plans to launch Network Function Virtualization (NFV) into its WAN.

NTT Communications (NTT-Com) is using an “SDN overlay” to connect 12 of its cloud data centers (including ones in China and Germany scheduled for launch this year) located on three different continents. This summer, the global network operator plans to deploy NFV in its WAN, based on virtualization technology from its Virtela acquisition last year.

ONS Presentation and Interview:

At a March 4, 2014 Open Networking Summit (ONS) plenary session, Yukio Ito*, Senior Vice President for Service Infrastructure at NTT Communications, described NTT-Com’s use of SDN to reduce management complexity, capex, and opex, while reducing time to market for new customers and services.

The SDN overlay inter-connects the data centers used in NTT-Com’s “Enterprise Cloud.”

Diagram of how NTT Com is helping customer Yamaha Motor reduce ICT costs via cloud migration.

Started in June 2012, it was the first private cloud in the world to adopt virtualized network technology. Enterprise Cloud became available on a global basis in February 2013. In July 2013, NTT-Com launched the world’s first SDN-based cloud migration service, On-premises Connection. The service facilitates smooth, flexible transitions to the cloud by connecting customer on-premises systems with NTT Com’s Enterprise Cloud via an IP-MPLS VPN. Changes in the interconnected cloud data centers create changes in NTT-Com’s IP-MPLS VPN (which connects NTT-Com’s enterprise customers to cloud resident data centers).

NTT-Com’s Enterprise Cloud currently uses SDN/OpenFlow within and between 10 cloud resident data centers in 8 countries, and will launch two additional locations (Germany and China) within 2014. The company’s worldwide infrastructure now reaches 196 countries/regions.

NTT-Com chose SDN for faster network provisioning and configuration than manual/semi-automated proprietary systems provided. “In our enterprise cloud, we eliminated cost structures and human error due to manual processes,” Ito-san said.  The OpenFlow protocol has proved useful in helping customers configure VPNs, according to Mr. Ito. “It might just be a small part of the whole network (5 to 10%), but it is an important step in making our network more efficient,” he added.

SDN technology enables NTT-Com’s customers to make changes promptly and flexibly, such as adjusting bandwidth to transfer large data in off-peak hours.  On-demand use helps to minimize the cost of cloud migration because payment for the service, including gateway equipment, is on a per-day basis.

Automated tools are another benefit made possible by SDN and can be leveraged by both NTT-Com and its customers. One example is the ability to let a customer running a data backup storage service crank up its bandwidth and then throttle back down when the backup is complete and the higher bandwidth is no longer needed. Furthermore, SDN also allows customers to retain their existing IP addresses when migrating from their own data centers to NTT-Com’s clouds.
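
NTT-Com did not show the portal or controller API behind that use case, so purely as an illustration, here is a sketch of a client that raises a VPN’s bandwidth for an off-peak backup window and reverts afterwards. The endpoint, field names and bandwidth figures are all invented:

```python
# Hypothetical sketch of the backup use case described above: temporarily
# raise a VPN's bandwidth via an SDN controller's REST API, then revert.
# NTT-Com's actual portal API was not shown; everything here is invented.
import requests

CONTROLLER = "https://sdn-portal.example.net/api/v1"  # fictitious URL

def set_bandwidth(vpn_id: str, mbps: int, token: str) -> None:
    requests.put(
        f"{CONTROLLER}/vpns/{vpn_id}/bandwidth",
        headers={"Authorization": f"Bearer {token}"},
        json={"mbps": mbps},
        timeout=30,
    ).raise_for_status()

def run_backup_window(vpn_id: str, token: str, do_backup) -> None:
    """Burst to 1 Gbit/s for the off-peak transfer, then drop back down."""
    set_bandwidth(vpn_id, 1000, token)      # provision in minutes, not weeks
    try:
        do_backup()                         # customer's own transfer job
    finally:
        set_bandwidth(vpn_id, 100, token)   # stop paying for unused capacity
```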

In addition to faster provisioning/reconfiguration and CAPEX and OPEX savings, NTT-Com’s SDN deployment allows the carrier to partner with multiple vendors for networking, avoid redundant deployment, simplify system cooperation, and shorten time-to-market, Ito-san said. NTT-Com is currently using SDN Controllers (with OpenFlow and BGP protocols) and Data Forwarding (AKA Packet Forwarding) equipment made by NEC Corp.

The global carrier plans to use SDN throughout its WAN. A new SDN Controller platform is under study with an open API. “The SDN Controller will look over the entire network, including packet transport and optical networks. It will orchestrate end-to-end connectivity,” Ito-san said. The SDN-WAN migration will involve several steps, including interconnection with various other networks and equipment that are purpose-built to deliver specific services (e.g., CDN, VNO/MVNO, VoIP, VPN, public Internet, etc.).

NTT-Com plans to extend SDN to control its entire WAN, including Cloud, as depicted in the illustration.

NFV Deployment Planned:

NTT Com is further enhancing its network and cloud services with SDN-related technology, such as NFV and overlay networks. In the very near future, the company is looking to deploy NFV to improve network efficiency and utilization. This will be through technology from Virtela, which was acquired in October 2013.

The acquisition of cloud-based network services provider Virtela has enhanced NTT’s portfolio of cloud services and expanded coverage to 196 countries. The carrier plans to add Virtela’s NFV technology to its cloud-based network services this summer to enhance its virtualization capabilities.

“Many of our customers and partners request total ICT solutions. Leveraging NTT Com’s broad service portfolio together with Virtela’s asset-light networking, we will now be able to offer more choices and a single source for all their cloud computing, data networking, security and voice service requirements,” said Virtela President Ron Haigh. “Together, our advanced global infrastructure enables rapid innovation and value for more customers around the world while strengthening our leadership in cloud-based networking services.”

High-value-added network functions can be effectively realized with NFV, according to Ito-san, especially for network appliances. Ito-san wrote in an email to this author:

“In the case of NFV, telecom companies such as BT, France Telecom/Orange, Telefonica, etc. are thinking about deploying SDN on their networks combined with NFV. They have an interesting evolution of computer network technologies. In their cloud data centers, they have common x86-based hardware. And meanwhile, they have dedicated hardware special-function networking devices using similar technologies that cost more to maintain and are not uniform. I agree with the purpose of an NFV initiative that helps transform those special-function systems to run on common x86-based hardware.  In the carrier markets, the giants need some kind of differentiation. I feel that they can create their own advantage by adding virtualized network functions. Combined with their existing transport, core router infrastructure and multiple data center locations, they can use NFV to create an advantage against competitors.”

NTT’s ONS Demos - Booth #403:

NTT-Com demonstrated three SDN-like technologies at its ONS booth, which I visited:

  1. A Multiple southbound interface control Platform and Portal system or AMPP, a configurable system architecture that accommodates both OpenFlow switches and command line interface (CLI)-based network devices;
  2. Lagopus Switch, a scalable, high-performance and elastic software-based OpenFlow switch that leverages multi-core CPUs and network I/O to achieve 10 Gbps-level flow processing; and
  3. The Versatile OpenFlow ValiDator or VOLT, a first-of-a-kind system that can validate flow entries and analyze network failures in OpenFlow environments. I found such a simulation tool to be very worthwhile for network operators deploying SDN/OpenFlow. An AT&T representative involved in that company’s SDN migration strategy also spoke highly of this tool.

NEC, NTT, NTT Com, Fujitsu, Hitachi develop SDN technologies under the ‘Open Innovation Over Network Platforms’ (O3 Project):

During his ONS keynote, Mr. Ito described the mission of the O3 Project as “integrated design, operations and management.” The O3 Project is the world’s first R&D project that seeks to make a variety of wide area network (WAN) elements compatible with SDN, including platforms for comprehensively integrating and managing multiple varieties of WAN infrastructure and applications. The project aims to achieve wide area SDN that will enable telecommunications carriers to reduce the time to design, construct and change networks by approximately 90% compared to conventional methods. This will enable service providers to dramatically reduce the time needed to establish and withdraw services. In the future, enterprises will be able to use services such as big data applications, 8K HD video broadcasting and global enterprise intranets simply by installing the specialized application for each service, while an optimal network for those services is provisioned promptly.

The O3 Project was launched in June 2013, based on research commissioned by the Japan Ministry of Internal Affairs and Communications’ “Research and Development of Network Virtualization Technology,” and has been promoted jointly by the five companies. The five partners said the project defined unified expressions of network information and built a database for handling them, allowing network resources in lower layers, such as optical networks, to be handled at upper layers, such as packet transport networks. This enables the provision of software that allows operation management and control of different types of networks based on common items. These technologies aim to enable telecom operators to provide virtual networks that combine optical, packet, wireless and other features.

NTT-Com, NEC Corporation and IIGA Co. have jointly established the Okinawa Open Laboratory to develop SDN and cloud computing technologies.  The laboratory, which opened in May 2013, has invited engineers from private companies and academic organizations in Japan and other countries to work at the facility on the development of SDN and cloud-computing technologies and verification for commercial use.  Study results will be distributed widely to the public. Meanwhile, Ito-san invited all ONS attendees to visit that lab if they travel to Japan. That was a very gracious gesture, indeed!

Read more about this research partnership here:

Summary and Conclusion:

“NTT-Com is already providing SDN/Openflow-based services, but that is not where our efforts will end. We will continue to work on our development of an ideal SDN architecture and OpenFlow/SDN controller to offer unique and differentiated services with quick delivery. Examples of these services include: cloud migration, cloud-network automatic interconnection, virtualized network overlay function, NFV, and SDN applying to WAN,” said Mr. Ito. “Moreover, leveraging our position as a leader in SDN, NTT Com aims to spread the benefits of the technology through many communities,” he added.

Addendum:  Arcstar Universal One

NTT-Com this month is planning to launch its Arcstar Universal One Virtual Option service, which uses SDN virtual technology to create and control overlay networks via existing corporate networks or the Internet. Arcstar Universal One initially will be available in 21 countries including the U.S., Japan, Singapore, the U.K., Hong Kong, Germany, and Australia. The number of countries served will eventually expand to 30. NTT-Com says it is the first company to offer such a service.

Arcstar Universal One Virtual Option clients can create flexible, secure, low-cost, on-demand networks simply by installing an app on a PC, smartphone or similar device, or by using an adapter. Integrated management and operation of newly created virtual networks will be possible using the NTT-Com Business Portal, which greatly reduces the time to add or change network configurations. Studies from NTT-Com show clients can expect to reduce costs by up to 60% and shorten the configuration period by up to 80% compared to conventional methods.


*Yukio Ito is a board member of the Open Networking Foundation and Senior Vice President of Service Infrastructure at NTT Communications Corporation (NTT-Com) in Tokyo, a subsidiary of NTT, one of the largest telecommunications companies in the world.

Virtually Networked: The State of SDN

We have all heard about the hectic activity around several network virtualization initiatives. The potpourri of terms in this space (SDN/OpenFlow/OpenDaylight etc.) is enough to make one’s head spin. This article will try to lay out the landscape as of the time of writing and explain how some of these technologies are relevant to independent broadband service providers.

In the author’s view, Software Defined Networking (SDN) evolved with the aim of freeing the network operator from dependence on networking equipment vendors for developing new and innovative services, and was intended to make networking services simpler to implement and manage.

Software Defined Networking decouples the control and data planes, thereby abstracting the physical architecture from the applications running over it. Network intelligence is centralized and separated from the forwarding of packets.

SDN is the term used for a set of technologies that enable the management of services over computer networks without worrying about the lower level functionality – which is now abstracted away. This theoretically should allow the network operator to develop new services at the control plane without touching the data plane since they are now decoupled.

Network operators can control and manage network traffic via a software controller – mostly without having to physically touch switches and routers. While the physical IP network still exists – the software controller is the “brains” of SDN that drives the IP based forwarding plane. Centralizing this controller functionality allows the operator to programmatically configure and manage this abstracted network topology rather than having to hand configure every node in their network.

SDN provides a set of APIs to configure the common network services (such as routing, traffic management and security).

OpenFlow is one standard protocol that defines the communication between such an abstracted control and data plane. OpenFlow was defined by the Open Networking Foundation, and allows direct manipulation of physical and virtual devices. OpenFlow needs to be implemented on both sides: in the SDN controller software and in the SDN-capable network infrastructure devices.
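
To make the controller/switch split concrete, here is a minimal application for Ryu, one of several open-source OpenFlow controller frameworks (chosen purely as an example; this is not an endorsement of any particular controller). When a switch connects, the app installs a lowest-priority “table-miss” flow entry so that any packet the switch cannot match is punted to the controller:

```python
# Minimal OpenFlow 1.3 app for the open-source Ryu controller framework.
# On switch connect, install a priority-0 "table-miss" rule so unmatched
# packets are sent to the controller.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath                 # the switch that just connected
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch()            # empty match = match everything
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        # Priority 0 is consulted last, i.e., only when no other entry matches
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

Run it with ryu-manager against an OpenFlow 1.3 switch (e.g., Open vSwitch) and the rule appears in the switch’s flow table; this is the decoupling described above in miniature.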

How would SDN impact independent broadband service providers? If SDN lives up to its promise, it could provide the flexibility in networking that telcos have needed for a long time. From a network operations perspective, it has the potential to revolutionize how networks are controlled and managed today, making it a very simple task to manage physical and virtual devices without ever having to change anything in the physical network.

However, these are still early days in the SDN space. Several vendors have implemented software controllers, and the OpenFlow specification appears to be stabilizing. OpenDaylight is an open platform for network programmability to enable SDN. OpenDaylight has just shipped its first software release, Hydrogen, which can be downloaded as open source software today. But this is not the only approach to SDN; there are vendor-specific approaches that this author will not cover in this article.

For independent broadband service providers wishing to learn more about SDN, it would be a great idea to download the Hydrogen release of OpenDaylight and play with it, but don’t expect it to provide any production-ready functionality. Like the first release of any piece of software, there are wrinkles to be ironed out and important features still to be written. It would be a great time to get involved if one wants to contribute to the open source community.

For independent broadband service providers wanting to deploy SDN: it’s not prime-time ready yet, but it’s an exciting and enticing idea that is fast becoming real. Keep a close ear to the ground; SDN might make our lives easier fairly soon.

[Editor’s Note; For more great insight from Kshitij about “SDN” and other topics , please go to his website at http://www.kshitijkumar.com/]