Highlights of 2015 TiECon Part III – IoT Track

Introduction & Backgrounder:

This is the third and final article on this year’s TiECon conference. It covers the Internet of Things (IoT) track with emphasis on Cisco’s closing Keynote presentation. The first two articles on TiECon 2015, as well as others by this author can be read here.

Gartner Research defines the Internet of Things as “the network of physical objects that contain embedded technology to communicate and sense or interact with their internal states or the external environment.”

For years, the biggest issues for IoT have been worries about security and privacy. Despite lots of hype, there is no definitive set of standards and connectivity options for various IoT industry verticals. That includes the identification of MAC/PHY, protocol stacks and message formats.

In an excellent blog post, Chris Kocher, founder and managing director of Grey Heron, identifies five key IoT challenge areas: security; trust and privacy; complexity, confusion and integration issues; evolving architectures, protocol wars and competing standards; and concrete use cases and compelling value propositions.

Compliance will continue to be a major issue in medical and assisted-living applications, which could have life-and-death ramifications. New compliance frameworks to address the IoT’s unique issues will evolve. Social and political concerns in this area may also hinder IoT adoption. Of particular relevance to entrepreneurs:

“Slower adoption and unanticipated development resource requirements will likely slip schedules and slow time to revenues, which will require additional funding for IoT projects and longer ‘runways’ for startups.”

Battle of IoT Platforms & Protocols:

This TiECon session promised to reveal the strengths and weaknesses of different IoT platforms and protocols that are available to build successful products. Three panelists from semiconductor companies (Intel, Marvell and MediaTek) described their own hardware/software portfolio without relating them to ongoing work in the various IoT consortiums and alliances. Representatives from Amazon Web Services (AWS) and WSO2 provided perspectives from a cloud service provider and middleware provider point of view, respectively.

Panelists:

  • Dr. Manas Saksena, Sr. Director of Technology & Marketing, Platform Solutions Group at Marvell Semiconductor
  • Geetha Dabir, Vice President, Internet of Things at Intel
  • Marc Naddell, Vice President MediaTek Labs at MediaTek
  • Jinesh Varia, Technology Evangelist at Amazon Web Services
  • John Mathon, Vice President of Enterprise Evangelism at WSO2

Mr. Mathon said that IoT value will be created by the integration of new devices, with information from them analyzed in the cloud. Mr. Varia opined that this would not always be the case, as there might very well be on-premises servers that provide control and data analytics for devices. There’s also the issue of IoT device-to-device communications, as envisioned by the AllSeen Alliance.

Ms. Dabir noted that Intel’s IoT platform was not just silicon, but also included “intelligence analytics” being developed by Intel Labs. The goal is to understand the hardware environment of things (e.g. motors, sensors, etc.), record status and do predictive maintenance¹ (the ability to accurately diagnose and prevent failures in real time is a major advantage for companies and might be vital for critical infrastructure applications). Intel uses its Wind River subsidiary’s operating system as part of the company’s IoT platform.

Note 1. An example of predictive maintenance at a wind farm for renewable energy production: on-site sensor-equipped systems could collect data from multiple turbines, not just a single turbine, enabling failure analysis to predict when a system or component is likely to malfunction due to stress or overheating, thereby enabling better operator or autonomous decision making for maintenance.

For example, if there is a high likelihood of the gearbox breaking down within a turbine, then switching to a lower performance mode and a reduced mechanical load, while still delivering 80 percent efficiency, could mean continued operation and further electricity generation for several weeks. This would allow scheduled maintenance that combines the repair and maintenance of more than just one turbine.
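The derating decision described above can be sketched in a few lines of Python. This is a hypothetical illustration only — the sensor field names, thresholds, and the 80 percent output figure are assumptions made for the example, not any vendor’s actual logic:

```python
# Hypothetical sketch of a turbine predictive-maintenance rule.
# Field names and thresholds are invented for illustration.

def plan_turbine_action(readings, vibration_limit=7.0, temp_limit_c=85.0):
    """Decide an operating mode for one turbine from recent sensor readings."""
    avg_vibration = sum(r["vibration_mm_s"] for r in readings) / len(readings)
    max_temp = max(r["gearbox_temp_c"] for r in readings)
    if avg_vibration > vibration_limit or max_temp > temp_limit_c:
        # Likely gearbox trouble: derate to reduce mechanical load and
        # keep generating until a combined scheduled-maintenance window.
        return {"mode": "derated", "output_pct": 80, "schedule_maintenance": True}
    return {"mode": "normal", "output_pct": 100, "schedule_maintenance": False}

recent = [
    {"vibration_mm_s": 8.2, "gearbox_temp_c": 78.0},
    {"vibration_mm_s": 7.9, "gearbox_temp_c": 81.5},
]
print(plan_turbine_action(recent))  # high vibration -> derated mode
```

A production system would replace the fixed thresholds with a model trained on fleet-wide historical data — which is exactly why collecting data from multiple turbines, rather than one, matters.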

IoT Track Closing Keynote by Anand Oswal (VP Engineering, Cisco):

Here are the key points made by Mr. Oswal about IoT/IoE:

  • The pace of change is accelerating through digital content, the Internet and the mobile economy.
  • Disrupting tech trends are coming from social, mobile, big data/analytics, cloud and now IoT.
  • The Internet of Everything (IoE) includes: people, processes, things and data.
  • IoE is characterized by cheap and reliable sensors/devices and ubiquitous wireless connectivity (which standard: 2G/3G/4G, WiFi, ZigBee, Bluetooth, Near Field Communications, other?).
  • Industries to be impacted by IoE include: aviation, rail, oil & gas, heavy machinery, power generation, health care, and smart cities.
  • The IoE future will bring: self-refilling bottles, crop harvest alerts, smart carts, and driverless cars.
  • IoE can be equated to the impact that the Industrial Revolution had on the world (post TiECon comment – a very bold statement in this author’s opinion).

Cisco is working with partner companies on several real world IoE examples, which include:

  1. Mining precious metals: WiFi network combined with sensors on both workers and equipment; video surveillance of miners and engineers working underground.
  2. Asset monitoring: Tire company increases its efficiency in real-time.
  3. IoE ready retailer: Dynamic optimization of store staffing based on check-out line monitoring throughout the store. Big Data/ Analytics at the retail store combines the power of mobile, social, data and cloud.

IoE poses new challenges and opportunities for industries such as manufacturing, transportation, and smart cities. What’s needed for IoE includes:

  • A converged, managed network that might include both a closed proprietary network (vertical industry dependent) and an open IP-based network.
  • Operations and resilience at scale, including self management of devices/processes & automated self-healing/failure recovery.
  • Security for all industries and applications.
  • Distributed intelligence, especially at the edge of the network (oil & gas).
  • Application enablement (Cisco IOx was provided as an example).
  • Big data – geographically distributed with real time actions that affect business processes.

Cisco is a charter member of the IoT World Forum which is “an annual event that brings together the best and brightest thinkers, practitioners, and innovators from business, government, and academia to accelerate the market adoption of the Internet of Things.”

Cisco is working with entrepreneurs globally through a variety of new funds and initiatives – Entrepreneurship Residence Program, Startup Accelerators and IoE Innovation Centers. The company has funded² six IoT/IoE related start-ups in 2014 and has allocated $150M for early stage start-up investments.

The IoT start-ups funded by Cisco include:

  • Ayla Networks: “Agile IoT platform end-to-end solutions that allow manufacturers to turn home controls, HVAC, appliances, lighting and other everyday products into intelligent devices.”
  • Pawaa: “SecureCARE software for data leak prevention.”
  • ParStream: “Analytics platform built for large-scale IoT solutions utilizing massively parallel processing technologies.”
  • DGLogik: “Innovative software solutions that enable, drive and visualize the IoT; connecting and visualizing all things IoT”

Author’s Note: The quoted descriptions of the above start-ups were taken from their websites.


Note 2. Gartner Group has estimated that IoT companies will generate $309 billion in revenue per year by 2020, half of which will come from startups. A lot of that money will find its way back to companies like Cisco (Qualcomm, and Intel are also investing in this space) as IoT drives up demand for hardware components and network equipment. It certainly makes business sense for established tech companies to help the IoT/IoE market lift off and gain critical mass.


Cisco has started an Entrepreneurs in Residence (EIR) incubation program, which supports early-stage business-to-business companies. This new entity will collaborate with Cisco and its global partner ecosystem to build IoE, Big Data/Analytics and Smart City solutions. Anand said that start-ups Cisco has invested in are “paired with a business unit/group,” presumably to work together on a combined solution for their IoE deliverables.

End Note:  We hope you enjoyed this comprehensive and detailed three part series on TiECon 2015. Please leave a comment in the box below this article and email me any questions you might have on the material covered:  alan@viodi.com.

References:

Video interview at TiE TV lounge with Anand Oswal:

Highlights of 2015 TiECon Part II – Cloud Track

Introduction:

Photo of TiE event.

This is the second article on this year’s TiECon conference.  It is focused on selected presentations and panel sessions from the Cloud track on May 15th. That track covered planning, operational challenges of cloud infrastructure, business and technical challenges of migrating services to the cloud, and the still problematic state of cloud security (which is badly lagging the advances in compute, storage and even networking).

The first article on 2015 TiECon summarized the two opening Grand Keynotes. It can be read here.

Keynote on Enterprise Cloud Trends: Mark Interrante, VP of HP’s Cloud Business Unit Operations

Interrante is driving HP’s OpenStack-based cloud computing effort. The HP Helion Platform¹ is a combined Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) offering for cloud-native workloads. Helion is based on the very popular open source projects OpenStack® and Cloud Foundry™. Mr. Interrante described HP’s Helion offering as a hybrid cloud, which combines the flexibility and convenience of public cloud with the security and control of private cloud.


Note 1. HP states that Helion is:

“A private cloud that enables IT to protect sensitive information, control and broker services across multiple clouds, and deliver exceptional cost advantages. A private cloud that is proven today and delivering on the vision for tomorrow. A vision for a Hybrid World. That cloud is HP Helion.”


“The path to hybrid begins with a private cloud, built on open standards, using open source software and designed for compatibility and interoperability from the start,” Interrante said. He enumerated several advantages of open source code, including: software transparency, increased security, being viewed by “many eyes,” code re-use, and open cryptography.

For years, security has been the biggest issue for cloud users – much more so for public than for private cloud. “Security is a prominent concern for all businesses and organizations of every size,” Mark said. The concern is certainly valid, as 2014 was “the year of the breach,” continuing a trend of breaches that has been accelerating since 2011.
“Cloud security is NOT one size fits all. It’s critically important to understand how to isolate a fleet of (cloud) services and applications you use,” he added. Other points Mark made related to cloud security:

  • Security must be provided in, under, across and to/from the cloud or interconnected clouds used by the enterprise customer(s).
  • The security strategy must go beyond merely following compliance procedures.
  • Threats include: data breaches, data loss, account or service hacking, insecure interfaces and/or APIs, Denial of Service (DoS) attacks, malicious insider attacks, abuse of cloud services, insufficient due diligence, shared technology vulnerabilities.
  • HP has active Threat Intelligence & Research teams that are working to improve security for their products and services.

In response to the moderator’s question on Docker² and containers, Mark replied: “Docker-type containers have had the fastest uptake and generated more interest than any new software technology.”


Note 2. Docker is an open-source project that automates the deployment of applications inside software containers by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux.


In summary, Mark said:

“Cloud is driving innovation, changing the IT landscape, and transforming the way companies do business (e.g. everything “as a service”). Every organization is becoming a software company built on cloud computing and storage. The proliferation of mobile devices, connected consumers and machines has spawned new business models based on cloud. IoT will accelerate that trend.”

Cloud Market Trends and Needs:

This panel of IT managers & a CIO addressed issues related to large-scale cloud deployments and problems that they are facing, especially cyber security. Alan Boehme, CIO (Global IT) & Chief Enterprise Architect at Coca-Cola Co. provided by far the most valuable information. To wit:

  • It’s very hard to move legacy applications to the cloud.
  • Public cloud is a quick and easy way to develop new apps, especially for start-ups.
  • A hybrid cloud model is probably best for mid-size companies that are able to segregate their computing and storage needs between private/mission-critical and secondary/tertiary apps.
  • The level of security available on public clouds is limited.
  • Public cloud issues include: providing the equivalent of an indemnification clause; reliability, robustness, and performance of Open Source software used; skill set needed for cloud security.

Suneet Nandwani, Sr. Director of Cloud at eBay, noted that eBay/PayPal uses an internal private cloud. That’s largely because they can guarantee a higher level of security (vs. a public or hybrid cloud). Suneet mentioned that hardware-level security (e.g. built into various SoCs) is desirable and available from ARM, Intel, Freescale, and others.

Nandini Ramani, VP, Engineering at Twitter, said “Twitter has a Private Cloud, but is finding it hard to absorb start-ups. We have a tendency to shift to Public Cloud, but will first move to a Hybrid Cloud.” Nandini noted what most public cloud users are well aware of: “the tools on Amazon AWS³  are not available anyplace else.”


Note 3: In the 2015 Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, Gartner Group placed Amazon Web Services in the “Leaders” quadrant and rated AWS as having both the furthest completeness of vision and the highest ability to execute. AWS groups its data centers into “regions,” each of which contains at least two availability zones. It has regions on the East and West Coasts of the U.S., and in Germany, Ireland, Japan, Singapore, Australia, Brazil, and (in preview) China. It also has one region dedicated to the U.S. federal government. It has a global sales presence.

From the Gartner Group report:

“AWS has a diverse customer base and the broadest range of use cases, including enterprise and mission-critical applications. It is the overwhelming market share leader, with over 10 times more cloud IaaS compute capacity in use than the aggregate total of the other 14 providers in this Magic Quadrant. This has enabled it to attract a very large technology partner ecosystem that includes software vendors that have licensed and packaged their software to run on AWS, as well as many vendors that have integrated their software with AWS capabilities. It also has an extensive network of partners that provide application development expertise, managed services, and professional services such as data center migration.

AWS is a thought leader; it is extraordinarily innovative, exceptionally agile, and very responsive to the market. It has the richest array of IaaS features and PaaS-like capabilities. It continues to rapidly expand its service offerings and offer higher-level solutions. Although it is beginning to face more competition from Microsoft and Google, it retains a multiyear competitive advantage. Although it will not be the ideal fit for every need, it has become the “safe choice” in this market, appealing to customers who desire the broadest range of capabilities and long-term market leadership. It is the provider most commonly chosen for strategic adoption.”


Hybrid cloud leaves the user in an “awkward state,” where you’re neither managing your own destiny (on the public portion) nor fully taking advantage of popular services and applications for public cloud.

Mr. Boehme said that orchestration is missing from many Cloud offerings, especially those that span multiple clouds.  [Orchestration involves the automated arrangement, coordination, and management of applications, services, processes, and workloads. A cloud orchestrator is “software that manages the interconnections and interactions among cloud-based and on-premises compute/storage. Cloud orchestrator products use workflows to connect various automated processes and associated resources.”]
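The workflow idea in the bracketed definition above can be sketched as a toy dependency-ordered runner. This is purely illustrative — the step names and structure are invented, and it models no particular vendor’s orchestrator:

```python
# Toy sketch of workflow orchestration: each step names the steps it
# depends on, and the runner executes them in dependency order.
# Assumes the dependency graph is acyclic (no cycle detection here).

def run_workflow(steps):
    """steps: {name: (list_of_dependency_names, callable)}. Returns run order."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        deps, action = steps[name]
        for dep in deps:          # run prerequisites first
            run(dep)
        action()                  # then the step's own automated process
        done.add(name)
        order.append(name)

    for name in steps:
        run(name)
    return order

steps = {
    "provision_vm":   ([], lambda: print("provisioning VM")),
    "attach_storage": (["provision_vm"], lambda: print("attaching storage")),
    "deploy_app":     (["provision_vm", "attach_storage"],
                       lambda: print("deploying application")),
}
print(run_workflow(steps))
```

Real cloud orchestrators layer onto this skeleton the hard parts Mr. Boehme alluded to: retries, rollback, and coordinating resources that live in different clouds or on premises.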

“We have had the same set of network technologies and tools for the last 15 years and need new ones,” Alan said. He doesn’t believe SDN is the answer. “SDN will take a long time to be adopted by large enterprise customers,” he added.

Mr. Nandwani says the cloud has had a huge impact on eBay/PayPal. Approximately 90% of PayPal’s front-end, customer-facing interface is based on cloud. A key requirement for PayPal’s cloud infrastructure was the ability to scale quickly without compromising availability or agility. OpenStack is playing a major role in PayPal’s vision by enabling a private cloud that helps the company’s developers quickly respond to its customers’ increasing demands and constantly changing needs, while providing a stable platform for customers to pay for their purchases.

Cloud Architecture and Technology Trends:

The panelists in this session covered cloud architectural issues from the vendor (HP, Cisco), networked data center operator (Equinix) and cloud start-up (The Fabric) perspectives. The participants were:

  • Atul Garg, Vice President & GM at Hewlett-Packard
  • Ken Owens, Chief Technology Officer, Cloud Infrastructure Services at Cisco Systems
  • Sindhu Payankulath, VP, Global Network Engineering & Operations at Equinix
  • Prem Talreja, Marketing & Business Development Advisor at The Fabric

Here are the key points made:

HP: Use cloud to automate routine tasks to improve data center operations. The real challenge is how to create a platform to automate delivery of web services that are customized to individual company demands.

Equinix: We manage a multi-vendor network that connects the data centers we rent. Our customers get: compute power, storage, space, power, interconnection of compute/storage resources. Sindhu is responsible for three Equinix regional operations areas (AMER, EMEA and APAC) as well as Global Service Delivery.

While not mentioned by Sindhu, Equinix offers “Cloud Exchange,” which provides “secure, direct, flexible connections to a wide range of cloud service providers.” It’s described by Equinix as “an advanced interconnection solution that enables seamless, on-demand, direct access to multiple clouds from multiple networks in more than a dozen locations around the world.” Please see the Addendum below.

Cisco: The biggest problem cloud solves is “to help businesses become more agile to enable them to quickly change and pivot.” Cisco is trying to provide a “cloud interconnect” capability to meet that need. The goal is to let customers create, run, maintain, and change cloud resident applications.

HP: Large companies running IBM mainframe applications are NOT going to move to cloud computing. However, midsize companies can shorten the time to provision a server by moving to Private Cloud (which of course HP provides). Atul didn’t even mention Public Cloud which might be a better choice for SMBs.

Cisco: Public cloud is outside of a company’s security and governance policy and compliance domains. As a result, “Private cloud is much more popular than most people realize.” Cisco believes there’s a 60/40 split between Private and Public clouds, which might grow to 50/50 in the next few years. Interestingly, there was no mention of Hybrid cloud or where that might fit for medium size companies.

Mr. Owens identified two huge “gaps” in Cloud:

  1. Too many tools and options to quickly develop new applications that run in the cloud (resident data centers).
  2. Orchestration of legacy systems with new ones.

Cisco is using OpenStack, while VMware and Equinix were said to be using open APIs (?).

HP: Customers want to build a Private cloud to operate their compute/storage requirements and then optimize them. HP also sees two huge cloud gaps, but they are different from those identified by Cisco above. From HP’s perspective the cloud gaps are:

  1. Ability to dynamically move workloads from Private to Public Cloud (with the computational results often returned to the Private cloud). “We’re not there yet,” Atul said. There was no mention of the technique called “cloud bursting” which was supposed to accommodate such dynamic, back and forth movement of workloads and results between Private and Public clouds. Evidently, that isn’t happening – at least not on a large scale.
  2. Governance: how to abstract out policies and then develop security to meet them. “The industry needs to figure out how to automatically lock down servers that have been compromised,” he added.

HP recommends migrating workloads from Amazon or VMware clouds to OpenStack-based cloud platforms (like theirs, of course). They suggest the foundation of such a cloud platform be a combination of open source + Cloud Foundry⁴ + OpenStack.


Note 4. Cloud Foundry is the industry’s Open PaaS (Platform as a Service) and provides a choice of clouds, frameworks and application services. As an open source project, there is a broad community both contributing and supporting Cloud Foundry.


Addendum:

In a whitepaper titled What to Know Before You Migrate to Cloud, Lauren Gibbons Paul proposes a list of questions for cloud service providers related to security and compliance. Questions should be tailored to an organization, its industry and its compliance requirements, but Lauren suggests asking these basic ones first:

  • How much experience do you have in data center services? And in what industries?
  • Do you have experience in our industry with customers that have similar compliance needs?
  • Where will my cloud data reside? Do you own your data centers, or do you lease from a third party?
  • Do you have industry-leading physical and logical security? Describe technologies used and best practices for both types of security.
  • Do you use industry standard methodologies like ITIL (Information Technology Infrastructure Library)? What is your security and data reliability track record?
  • How fast could you recover in the event of a successful attack or disaster?
  • How transparent are you with customers?

  • Do you have a third party certify your security measures and compliance with industry regulations like the Sarbanes-Oxley Act of 2002?


Up Next:

The third and final article in this 2015 TiECon series will be on highlights of the IoT track and Cisco’s closing IoT Keynote speech, which clearly defined IoE (Internet of Everything) and gave a glimpse of where Cisco is investing in this space. That and all other Viodi View articles by this author can be read here.


Addendum: Email received May 31, 2015 from Equinix on their Cloud offering:

“The cloud paradigm is not a passing fad. Most enterprises are in the process of figuring out how to adopt the cloud model for agility and elasticity reasons.  In many cases, their move to the cloud is also multi-cloud in nature. That is, the applications span across multiple private and public clouds because all the data and processing needs cannot be fully satisfied by the services hosted within a single cloud. For many of these workloads, the CIOs mention that they cannot use the public Internet because their high performance, availability and security requirements cannot be adequately satisfied. 
Equinix Cloud Exchange, an SDN driven platform, provides a high performance, secure, and highly available alternative to the public Internet that is available globally across multiple markets. Furthermore, Equinix Cloud Exchange allows enterprises to get access to all the major Network Service Providers and Cloud Service Providers in a timely (a couple of days instead of weeks) and cost effective (using a single port versus separate dedicated lines) manner. Equinix Cloud Exchange currently is integrated with most of the major Cloud Service Providers with respect to provisioning and service assurance, and it can be accessed both via a portal and also APIs.”


Highlights of 2015 TiECon Grand Keynotes

Introduction:

Photo: CenturyLink’s Aamir Hussain

Over 4,300 delegates attended 2015 TiECon¹ –the largest global conference on entrepreneurship. The conference was held May 15th and 16th in Santa Clara, CA.

In this first TiECon article, we summarize the two Grand Keynote conversations from the first day (Friday, May 15th) of the conference. Future articles will cover keynotes and panel sessions from various tracks, such as Cloud, Security, IoT, and Breakthrough Thinkers.

Note 1. The Indus Entrepreneurs (TiE), which creates the event, has its headquarters in Silicon Valley and has chapters in 61 cities in 20 different countries. It is the world’s largest non-profit organization for entrepreneurs.

Highlights of Grand Keynote 1. – Jack Welch (ex CEO-GE) and Suzy Welch (co-author of “Real Life MBA”):

Jack: Since the 2008-2009 recession ended, companies are trying to do more with less and the pace of change has accelerated. An employee shouldn’t wait over one month in a non-creative company environment if he or she is an innovator.

Suzy: Corporate America has thousands of different ways to say NO, while entrepreneurs are YES people who must get out and start their own companies.

Jack (about his experiences in India): I couldn’t believe the intellectual capacity of India. The people are smart, aggressive, courteous, and always searching. I’m basically an Indian salesman.

Suzy: There seem to be herds of unicorns (startups valued in excess of $1B) galloping between San Francisco and Santa Clara. At the SF Four Seasons bar, we overheard tech startup talk that made our heads spin.

Jack: Startups today are different from the DOTCOM era (1998-2001) in that they have real cash flow, cause disruption (of industries and products/services), and are entering large markets. They are not “follies” or “just apps companies.”

Jack: A PhD in tech is a ticket to the moon (this author STRONGLY DISAGREES), but it’s also nice to have an MBA.

Avoiding “career purgatory”: The status quo is dangerous. Set a timetable for how long you (the employee) are going to stay with a company if stuck with a bad boss or an indifferent organization/bureaucracy. Don’t be negative during your stay at the company you may soon leave.

Suzy: Over the past few years, only about 10% of employees generally know where they stand within their company and have a sense of a career trajectory. At Google, it’s 60%. Most employees feel disillusioned and disengaged. Many come to work each day hating their job.

Leaders need to be turned on by the success of their people. The key is to build great product teams. Get smart people, energize and excite them, then let them go (and progress their agendas/initiatives).

Jack: There’s much quicker speed in the workplace today, because “everyone knows everything.” [Presumably that’s because of lightning quick information flow due to the Internet, social networking, mobile apps, instant messaging, texting, etc]. Companies need to be more transparent than ever before due to global competition. It’s imperative to get bureaucracy out of the company. Flatter (organizations), faster (decision-making) is needed to compete today in all types of companies.

Lessons learned: Act faster, fail fast, if it doesn’t work  – fix it. There’s no room for caution in any business today.

When asked about his life and noteworthy accomplishments, Jack said he can’t address his legacy, because “legacy is a bore.”

Suzy said Jack has an incredible curiosity about what’s happening and why. She gave an example of Jack interrogating a taxi driver in a third-world country about everything to do with the place. When they arrived at their destination, the taxi driver was completely overwhelmed by Jack’s close questioning.

Jack’s closing remark: “India is all about brain power. We went there for (lower) cost, but found intellect.”


Grand Keynote 2. – Aamir Hussain (EVP & CTO, CenturyLink), Tom Reilly (CEO Cloudera) Gary Gauba (Founder & CEO CenturyLink Cognilytics) –Transformational Journey Towards New Data Economy:

CenturyLink is the third largest telco in the U.S. and operates on five continents, despite having only a wireline footprint. In recent years it has acquired Qwest/US West, Embarq (formerly Sprint Local), Savvis, and CenturyLink Cognilytics. CenturyLink serves 98% of Fortune 500 companies, and 20% of the world’s internet traffic flows through its network.

Cloudera is revolutionizing enterprise data management by offering the first unified platform for Big Data. It uses (Apache Open Source) Hadoop, which enables distributed parallel processing of huge amounts of data across inexpensive, industry-standard servers that both store and process the data, and can scale without limits.
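The map/shuffle/reduce model that Hadoop distributes across many servers can be illustrated with a single-process word count. Everything below runs locally and is only a sketch of the programming model, not of Hadoop’s actual APIs:

```python
# In-process sketch of the map -> shuffle -> reduce pattern that Hadoop
# runs in parallel across a cluster of commodity servers.

from collections import defaultdict

def map_phase(records):
    # Like a word-count mapper: emit (key, 1) for every word seen.
    for line in records:
        for word in line.split():
            yield word.lower(), 1

def shuffle(pairs):
    # Like Hadoop's shuffle/sort stage: group all values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Like a reducer: combine each key's values into a final result.
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big insights", "data at scale"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # per-word counts across both input lines
```

The point of the model is that the mapper and reducer see only their own slice of data, so the same code scales from two lines of text to petabytes simply by adding servers.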

Cognilytics (now part of CenturyLink) is a Big Data/Analytics as a Service company.

Photo: The lobby of CenturyLink’s Technology Center of Excellence in Monroe, LA

Century Link (CTL) recently opened a huge “Technology Center of Excellence” in Monroe, Louisiana. It includes a technology research and development lab, a network operations center and collaborative office and meeting space. In the Center, employees with network, cloud, information technology and other skills will work together to create innovative products and services for CenturyLink’s customers.

Aamir, who holds 11 telecom-related patents, said CTL has transformed itself from a traditional telco (providing only network connectivity) into an IT services company (with a full range of managed services). There are thousands of applications running on the CTL network (we suspect most of these came from the Savvis acquisition in 2011).

“More data is being created today than companies can process,” Mr. Hussain said. And that trend will only accelerate with IoT devices sending massive amounts of collected/monitored data to the cloud. While old data was said to have “gravity,” new data (from sensors and mobile/wearable/IoT devices) will be processed by cloud-resident compute servers.

Hussain believes there’s a huge market for hybrid (private + on-premises) cloud. His very credible thesis is that older IBM mainframe applications will continue to run in customers’ on-premises data centers, while new applications will be developed and invoked from a hosted private cloud. That makes for a “static” hybrid cloud solution, which doesn’t have to deal with the thorny (and unresolved) problem of bursting from private to public cloud with data results being stored back in the private cloud for security, safety, and governance/compliance.

Cyber security is seen as a huge opportunity for CTL. “It’s on top of every customer’s mind; they ask: How do I protect my business?” As 20% of global data traffic passes through the CTL network, the company strongly believes it has a responsibility to protect it, Hussain said.

[Tom Reilly said that Cloudera was using on chip encryption from Intel and cyber security intelligence in Hadoop to protect their customers’ data.]

Summing up, Hussain offered this advice to service provider companies: “Be agile and nimble, and listen to customers. Big data has changed, and will continue to change (disrupt?), many business models.”

Gary Gauba gave this advice for entrepreneurs: “Dream big and go make it happen. Take the ups and downs of your entrepreneurial journey in stride. Believe in yourself.” Gary suggested that CenturyLink and Cloudera were good companies for entrepreneurs to partner with.

In a post-conference email to this author, Gary expressed his thoughts on the TiECon session and its relevance to the “new data economy”:

The transformational journey to the new data economy is a common theme and has sparked a lot of interest. The thesis behind this topic is big data, the evolution of technology, and serving the omni-channel customer. At TiECon, Aamir Hussain, Tom Reilly and I presented a grand keynote discussing the implications of the cloud, big data and the Internet of Things (IoT).

The question on everyone’s mind is: How does my organization embark on the journey of the new data economy?  Organizations are hoarding terabytes of data — only a small fraction is actually being monetized, and the rest gets lost.

As technology leaders, Cloudera and CenturyLink Cognilytics are looking at ways to transform processes and interactions with customers to ultimately reduce costs and improve efficiency. CenturyLink Cognilytics and Cloudera are working together on a mission to help businesses of all sizes monetize this data as a strategic asset, transforming raw data into actionable and valuable insights that help them leap-frog their competition.

CenturyLink showcased itself as an 80+ year old, entrepreneur-like company that has built grand-scale technology centers of excellence and is leading the charge on enterprise-grade technology solutions.

On TiECon 2015:

It was a great turnout at TiECon. Thousands of budding entrepreneurs, venture capitalists, executives and inquisitive minds listened to keynotes, participated in breakout sessions and engaged with start-ups.   


References:

Video of the 2nd Grand Keynote: https://www.youtube.com/watch?v=f6hdyCxFTVE

Interview with Aamir Hussain of CenturyLink: https://www.youtube.com/watch?v=4noR3WuswP4

CenturyLink’s gigabit fiber expansion in 17 states targets SMBs:
http://community.comsoc.org/blogs/alanweissberger/centurylinks-gigabit-fiber-expansion-17-states-targets-smbs


Postscript:

On May 19th, CTL announced it had been identified by industry analyst firm Gartner, Inc. as a Visionary in the 2015 Magic Quadrant for Cloud Infrastructure as a Service, Worldwide report.

“In the fast-moving cloud market, CenturyLink continues to differentiate in hybrid IT innovation with our advanced cloud services and complementary agile infrastructure, network and managed services,” said Jared Wray, senior vice president, platforms, at CenturyLink. “The velocity of our cloud innovation continues to intensify, with our agile DevOps approach delivering new features and functionality that delight our customers.”

With the recent acquisitions of Orchestrate, Cognilytics and DataGardens, as well as global expansions of its cloud node locations and data center footprint, CenturyLink continues to advance its managed services, cloud and colocation offerings for enterprises.

Gartner analysts Lydia Leong, Douglas Toombs and Bob Gill authored the Magic Quadrant for Cloud Infrastructure as a Service, Worldwide, report, published on May 18, 2015. Evaluation for the report was based on vendors’ completeness of vision and ability to execute.

 

IoT Sessions at 2015 GSA Silicon Summit – Part II

Introduction:

In this second of a two-part article series, we review the IoT afternoon session at the April 15, 2015 GSA Silicon Summit. Part I summarized the morning session and is available here.

MEMS (Micro Electro Mechanical Systems) and Sensors, Shaping the Future of the IoT:

  • Todd Miller, Microsystems Lab Manager, GE Global Research
  • Behrooz Abdi, President and CEO, InvenSense
  • Steve Pancoast, VP, Software and Applications, Atmel
  • David Allan, President and COO, Virtuix Inc.

Todd Miller told the audience what GE cares about for the IoT. One concern is that ~40% of skilled U.S. manufacturing workers will retire in the next five years. That’s a huge challenge for the Industrial Internet, because there’ll be an acute shortage of workers to make the devices/controllers. More outsourcing of high-tech manufacturing to Asia?

Other important challenges include: performance, mitigating cybersecurity threats, scale, and interoperability via open standards. Costs that don’t scale well will limit the value created, Todd said.

GE’s Industrial Performance and Reliability Centers maintain critical asset operations, with 6,000+ assets at 770 worldwide sites monitored 24/7. A wind power site was given as an example.

GE is a founding member of the Industrial Internet Consortium (IIC), an open membership, not-for-profit group of public and private institutions that focuses on:

  • Developing use cases and test beds
  • Sharing best practices, reference architectures, case studies
  • Influencing global standards development to ensure interoperability
  • Building confidence around new and innovative approaches to security

Other founding members include AT&T, Cisco, IBM, and Intel.

Miller said that the value of IoT to customers will be huge: connected machines could eliminate up to $150 billion in waste across industries. Five such industries were cited (Aviation, Power, Healthcare, Rail, and Oil and Gas), with IoT/connected-machine benefits given for each.

GE Global Research provides innovation via “breakthrough device concepts, which become real working devices… from prototypes to low volumes.” A GE MEMS Relay Product Line was established in 2014 with external shipments scheduled for Q4 2015.

Of course, the biggest threat for industrial control is security. In this author’s opinion, the vulnerability of critical infrastructure such as energy and utilities is vastly underestimated. A report by the Ponemon Institute and Unisys titled “Critical Infrastructure: Security Preparedness and Maturity” highlights the striking disparity between awareness of cybersecurity risks and the implementation of security protocols in critical infrastructure sectors. More information on this important topic is here.

The Industrial Internet Security Working Committee will establish a security framework to be applied to every technology adopted by the IIC. The framework shall ensure sufficient cyber security and privacy for the various users of the industrial internet. The Security Working Group will also point to best practices and identify gaps. Good luck!


Behrooz Abdi characterized IoT as a new form of “Ambient Computing”: Always-On, with Intuitively Interactive Apps and Services. Another descriptor given was “The Internet of Sensors,” with functions [f(x)] for location determination, activity, time, and environment.

InvenSense was said to be a company that integrates sensors on a SoC, develops algorithms & software, as well as doing systems integration. “The fabless model for MEMS is based on process technology and sensor integration,” he said. The SoC functions from InvenSense often include on-chip building blocks like FIFOs, a digital motion processor, activity classifier, inertial sensors, a tilt sensor, device context gestures, and wake-up sensors.

Abdi said that MEMS technology for a “motion tracking solution” has become more of a software business with 2/3rds of InvenSense’s hires involved in algorithm development for deep learning and software integration.
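To give a flavor of that algorithm work, here is a generic complementary-filter sketch (a textbook sensor-fusion technique, not InvenSense’s actual code; all names and constants are illustrative) that fuses gyroscope and accelerometer readings into a tilt estimate:

```python
import math

def complementary_filter(angle_deg, gyro_rate_dps, accel_x_g, accel_z_g, dt, alpha=0.98):
    """Fuse a gyro rate with an accelerometer tilt reading.

    The gyro integrates smoothly but drifts; the accelerometer is noisy
    but drift-free. The blend factor alpha trusts the gyro short-term
    and the accelerometer long-term.
    """
    gyro_angle = angle_deg + gyro_rate_dps * dt                    # integrate gyro rate
    accel_angle = math.degrees(math.atan2(accel_x_g, accel_z_g))   # tilt from gravity vector
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# With the device level and the gyro idle, the accelerometer term
# slowly pulls a wrong initial estimate back toward zero.
angle = 5.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate_dps=0.0,
                                 accel_x_g=0.0, accel_z_g=1.0, dt=0.01)
```

The point of the sketch is that the value in a “motion tracking solution” is less the MEMS element itself than the filtering and calibration software wrapped around it.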

“Sensors are transformative and are fueling the Internet of Things,” according to Behrooz. That’s illustrated in the chart below.

Sensors are transformative according to InvenSense.

Wearables are a very promising market for the company. Behrooz cited wearable computing, sports equipment, fitness/activity trackers, virtual reality, head-mounted displays, extreme sports cams (GoPro camera?), fitness watches and smart pods as wearable IoT products.


Steve Pancoast talked about MEMS and Sensors, Shaping the Future of the IoT. There is a lot of non-digital information processing in the IoT that doesn’t follow Moore’s law, he said. That includes sensors, RF, and passive/discrete components.

In particular, edge sensing nodes are and will be a large part of the IoT, Steve said. Some examples are provided in the graphic below:

Edge sensing nodes are a major part of IoT.

Those IoT edge nodes have a broad range of applications and that diversity mandates the following:

  • A very broad portfolio of low power MCUs and MPUs
  • A diverse portfolio of easy to use, secure, relevant wireless products
  • A complete solution where the system software becomes a key differentiator

A complete IoT solution from Atmel will usually include: wireless connectivity, security/privacy functionality, low-power embedded MCUs and MPUs, sensors, and software/tools.

The IoT communications topology will be very dependent on the industry vertical, on whether the end device is in a building/home or in the field (different network connectivity), and on what type of gateway (if any) is needed to connect things/endpoint devices to the Internet. This is depicted in the chart below:

The IoT communications topology is depicted in this diagram.

Atmel SmartConnect was said to bridge the gap between embedded hardware/firmware/software developers and backend services/software developers, as shown in the illustration below:

What does it take to make IoT a reality?

Sensors’ big technology ally was said to be “contextual computing,” which will determine “Where, When, Who, How, and What.” Atmel says that “Contextual computing will be the driving force behind the next wave of new technologies.” We’ll see…

A very comprehensive IoT layered security diagram is shown below. It illustrates each protocol stack layer and the corresponding security function/protocol. The key point is to provide critical security for each and every IoT edge node.

SmartConnect IoT layered security solutions are depicted below.

IoT endpoints were said to be “a natural fit for Atmel MCUs.” Steve stated that Atmel has:

  • Complete Range of Processing Cores: ARM Cortex M0+, M4 & A5/A7 MPUs
  • Industry-leading low-power SmartConnect solutions: Wi-Fi, BT/BLE, and 802.15.4, coupled with cloud solutions
  • Sensor Hub Solutions & SW with wide industry support
  • Large selection of Robust IoT Crypto Solutions & Security software

In closing, Steve told the audience to “dream big about IoT” as he showed a photo of a fish wearing what looked like an IoT harness with embedded sensors.


David Allan was very poised as he delivered his closing conference presentation by welcoming attendees to the “Second Machine Age.” Hello: smart house, connected car, connected person, and even a connected cow!

After quoting Broadcom founder & CTO Henry Samueli, PhD, that “Moore’s law is coming to an end,” David boldly claimed that “Moore’s Law doesn’t matter!” He believes that the rise of distributed computing makes transistor densities and processor clock speeds less relevant than before.

[Coincidentally, an article in the Economist magazine made the same point: “With the rise of cloud computing, the emphasis on the speed of the processor in desktop and laptop computers is no longer so relevant. The main unit of analysis is no longer the processor, but the rack of servers or even the data centre. The question is not how many transistors can be squeezed onto a chip, but how many can be fitted economically into a warehouse. Moore’s law will come to an end; but it may first make itself irrelevant.”]

David cited Google’s work on “MapReduce: Simplified Data Processing on Large Clusters” as being relevant for the IoT.

The key characteristics of MapReduce are to:

  • Scale “out,” not “up”: prefer a large number of commodity nodes over a few high-end ones
  • Assume failures are common
  • Move processing to the data (data locality)
  • Avoid random access
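As an illustrative sketch of the programming model itself (not code from the talk), the classic word-count example separates the map, shuffle, and reduce steps that a framework like Hadoop distributes across nodes:

```python
from itertools import groupby
from operator import itemgetter

# Map phase: emit (key, value) pairs independently for each input record.
def map_phase(records):
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

# Shuffle: group pairs by key (a real framework does this across the cluster,
# moving the processing to where the data lives).
def shuffle(pairs):
    grouped = groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))
    for key, group in grouped:
        yield key, [value for _, value in group]

# Reduce phase: combine all values for each key.
def reduce_phase(grouped):
    return {key: sum(values) for key, values in grouped}

counts = reduce_phase(shuffle(map_phase(["the sensor sent data", "the data arrived"])))
```

Every step reads its input sequentially, which is exactly the “avoid random access” point above.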

Mr. Allan defines IoT as wireless sensor networks connected to the cloud, thereby harnessing the power of distributed computing.

[But not all wireless sensors/IoT endpoints will connect directly to the cloud. Many will communicate with a local controller/gateway or with each other. For example, AllJoyn (a framework for intelligent proximal connectivity) is a collaborative open source project of the AllSeen Alliance that aims to enable apps to connect, control and share resources with other nearby apps and connected smart things.]
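The gateway pattern can be sketched in a few lines. This is a hypothetical illustration (the class, method names, and batch size are invented, not from any vendor’s API): local sensors report to a gateway, which batches readings before making a single upload to the cloud.

```python
# Hypothetical sketch of an IoT gateway that aggregates local sensor
# readings and uploads them to the cloud in batches.
class Gateway:
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.buffer = []
        self.uploads = []          # stands in for cloud POST requests

    def on_reading(self, sensor_id, value):
        """Called by a nearby sensor; buffers the reading locally."""
        self.buffer.append((sensor_id, value))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """One cloud round-trip for the whole batch."""
        if self.buffer:
            self.uploads.append(list(self.buffer))
            self.buffer.clear()

gw = Gateway(batch_size=3)
for i in range(7):
    gw.on_reading(sensor_id=f"s{i % 2}", value=20.0 + i)
gw.flush()                          # push any remainder
```

Batching like this is why a gateway topology can save power and bandwidth at the endpoints compared with every sensor holding its own cloud connection.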

Deep Reactive-Ion Etching was said to be important for MEMS, but also for advanced 3D wafer level packaging technology, which might be used for IoT sensors and endpoints.

MEMS are used a great deal in mobile devices, as illustrated by the schematic diagram below:

MEMS in mobile devices from Virtuix.

Mr. Allan is quite concerned about IoT standards and non-conformance of sensors to performance specifications. David wrote in an email:

“Yes, we see a great need for standards, in particular harmonized performance standards for sensors. Sensor devices which—according to datasheets—have equivalent performance, often differ in reality.

For example, after the iPhone 5S switched from a three-axis STMicro LIS331DLH accelerometer to a seemingly equivalent Bosch BMA220 part, many applications (mostly video games) suffered a loss of accuracy of as much as five degrees! Some magnetometers didn’t perform according to specs.

In the future, we’ll decide which part to populate after extensively testing our production boards. Clear performance standards would make this decision possible up front.”

Somewhat whimsically, David asked the audience: “What will the second machine age look like?” His futuristic answer:

“Our new machines will augment human desires…”

  • Immortality
  • Omniscience
  • Telepathy
  • Teleportation

Personally, I’ve been waiting for teleportation since I watched the original Star Trek in college. “Beam me up Scotty.” Over and out….

Till next time……………..

Summary of IoT Sessions at 2015 GSA Silicon Summit – Part I

Introduction:

This two-part article series summarizes the highlights, key points, and take-aways from the IoT tracks at the excellent GSA Silicon Summit, held April 15, 2015 at the Computer History Museum in Mountain View, CA.

The Internet of Things (known as “IoT” or for Cisco, Qualcomm and others “IoE”) was the driving theme throughout this superb symposium. GSA says: “the IoT is driving the expectancy for ubiquitous connectivity and universal access to data, immersive technology is changing our expectations on how we interact with the physical and virtual worlds.”

The excellent GSA summit offered two intriguing IoT sessions this year. We review the morning IoT session in this article. Part II will summarize the afternoon IoT session.

The IoT and the Hyper-connected World:

  • Gregg Bartlett, SVP, Product Management Group, GLOBALFOUNDRIES
  • James Stansberry, SVP and GM, IoT Products, Silicon Labs
  • Rahul Patel, SVP and GM, Wireless Connectivity, Broadcom
  • Dr. Martin Scott, SVP and GM, Cryptography Research Division, Rambus

In the leadoff presentation, Gregg Bartlett opined that silicon technology will be an enabler of IoT innovation at the edge node. Areas to be improved include: reduced power consumption, cost, complexity, integration with other components, and security.

Gregg noted that the IoT already exists in many diverse market segments, such as energy, home automation, healthcare, and factories. He said “the IoT demands continuation of Moore’s law” and offered a process technology called fully depleted silicon-on-insulator (FD-SOI). Bartlett believes that FD-SOI could lead to breakthroughs in power, cost, and integration. “It’s ideal for IoT,” he added.


James Stansberry identified three critical issues for IoT in his talk, titled Engineering the IoT:

  • Energy efficiency- an IoT device uses only 10% of the power of a cell phone, yet must operate for 5 to 10 years
  • Connectivity- including WiFi (perhaps a low-power version), ZigBee, Bluetooth, Thread (IPv6-addressable end nodes), 3G/LTE cellular, and proprietary wireless. 2.4 GHz, 5 GHz, and sub-GHz frequencies will all be used. (There’s also PoE and low-cost Ethernet in connected cars.)
  • Level of integration- an IoT SoC might include: multi-protocol radios, MCU, sensor interface, energy management, non-volatile memory (NVM) and mixed signal control.
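Stansberry’s energy-efficiency point implies remarkably tight power budgets. A back-of-envelope sketch (the coin-cell capacity and lifetime figures below are typical datasheet assumptions, not numbers from the talk) shows that a node running ten years on a CR2032 has only single-digit microwatts of average power to work with:

```python
# Back-of-envelope battery-life budget for an IoT edge node.
# Assumed values: a CR2032 coin cell (~225 mAh at 3.0 V nominal)
# and a 10-year target lifetime; neither figure is from the talk.
CAPACITY_MAH = 225.0
VOLTAGE_V = 3.0
YEARS = 10.0

energy_joules = CAPACITY_MAH / 1000.0 * 3600.0 * VOLTAGE_V   # total stored energy, 2430 J
seconds = YEARS * 365.25 * 24 * 3600                         # lifetime in seconds
avg_power_uw = energy_joules / seconds * 1e6                 # sustainable average power

print(f"Average power budget: {avg_power_uw:.1f} uW")
```

A budget this small is why such devices spend almost all their time in deep sleep and why radio duty cycling dominates IoT SoC design.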

Stansberry said we should expect many IoT technology advancements in 2015, including:

  • Dramatic reductions in energy consumption
  • Low power connectivity as standards gain traction, and
  • Introduction of IoT SoCs (this author believes that there will be many types of IoT SoCs, perhaps optimized for industry vertical markets)

Rahul Patel talked about Connecting Everything in Health Care, a vertical market where this author sees tremendous potential and power. Rahul defined three primary IoT markets:

  • Consumer
  • Industrial
  • Health care/medical

Medical includes: Clinical Health, Telemedicine, Biometric, Medical Devices, etc. Patel said that VC funding for connected health increased over 400% in the last four years. Please refer to the chart below.
Growth of medical devices in recent years.

Notice that the leading segment for Connected Health VC funding has been big data/analytics, and Rahul said that’s likely to continue in the future. He said that the intersection of IoT and big data/analytics would create new opportunities, citing body-borne sensors and computing coupled with big data/analytics in the cloud (via software running on a compute server in a cloud-resident data center).

IoT device requirements identified were:

  • Data/network security, encryption, authentication
  • Reliable, consistent across operations
  • Interoperability across open standards based devices (this includes minimal protocol stacks as well as the PHY/MAC for connectivity)
  • Compliance with regulatory bodies such as the FDA, NIC, FCC, FTC, etc.

Broadcom aims to be a major IoT player- not just at the end node, but also with analytics and “app ready” software (presumably for the cloud). Its emphasis, of course, will be on connectivity, which Rahul said “will drive innovation like never before.” He cited security, reliability, standards-driven design, and regulatory compliance as key areas for innovation.

Summing up, Rahul said that:

  • The IoT value proposition (presumably for Broadcom) lies in data and wireless connectivity
  • Opportunities will inspire new technologies and business models
  • Creates a new paradigm, “Healthcare in a connected world”

The elements in the IoT end-to-end system.

During the Question and Answer portion of the panel, Rahul said that the key silicon issue for IoT is the integration of CMOS non-volatile memory (NOT Flash!) with RF functionality. When asked why not Flash, he said, “It doesn’t scale to the small geometries needed for IoT.”


Dr. Martin Scott’s talk was titled: Secure Root-of-Trust- Feature Management Provides Foundational Security for the IoT.

Dr. Scott said we don’t even have to wait for the 50B+ connected devices forecast for 2020 to be aware of the huge IoT security problem, which is evident today in unprecedented security breaches at all levels: data center, network and “edge,” as well as device “end points.”

Martin noted that all endpoints are not created equal. “Obviously, a refrigerator isn’t analogous to critical national infrastructure such as a power grid or pumping station. Nevertheless, the security of any complex system is defined by its weakest link,” he explained. “Imagine if someone gained unauthorized access to a home WiFi network via a smart refrigerator or washing machine. Once on the network, an attacker could theoretically assume control of a wide range of sensitive devices and systems, including pacemakers, insulin pumps and even connected cars.”

Security at the endpoint should be of paramount concern to IoT device makers, systems integrators and users. If a system relies on software, says Scott, it is inherently hackable. In contrast, a hardware-based approach, such as one offered by Rambus’ CryptoManager, is one of the most secure ways to protect sensitive keys, data and infrastructure.

Of the three levels of security depicted in the graphic below, the highest level is silicon-based security integrated into the IoT endpoint device (as Intel and Freescale claim they’re also doing).

Different approaches to secure IoT endpoints.

Dr. Scott made some very important statements regarding the importance of good IoT security:

“It’s important for us to address the inevitable security vulnerabilities that go along with the rapid deployment of smart edge nodes and sensors. According to IDC, 90% of all IT networks will have an IoT-based security breach within two years. To make matters worse, there is fresh motivation for those seeking IoT-related vulnerabilities…”

“Money, greed and the desire for power are some of the usual suspects, although there are also people who are interested in exploiting security vulnerabilities and causing national harm as a way to express an ideology. The good news? Silicon, in the form of a hardware-based root-of-trust, can go a long way in helping to secure the IoT.”

Dr Scott concluded with a very informative slide depicting security in silicon:

The foundation of trusted services is Silicon.

About GSA:

The Global Semiconductor Alliance (GSA) is the voice of the global semiconductor industry, with nearly 400 member companies across 32 countries representing over 75% of the industry’s revenues. GSA provides a neutral environment for semiconductor executives to meet and collaborate on ways to improve efficiencies and address industry-wide topics and concerns.

Stay tuned for Part II – MEMS and Sensors, Shaping the Future of the IoT.

IDC Directions 2015: Major Network Transformations Needed to Adapt to the 3rd Platform

Introduction:
Network realignment was a very hot topic at IDC Directions 2015 last Wednesday in San Jose, CA. We review selected presentations covering the new mobility and cloud network transformations needed for workloads that will reside on the 3rd platform (cloud, mobile, social business, big data/analytics).

The major wide area network (WAN) transformation needed is one that moves from remote/central site private line/virtual private line connectivity to all sites having a reliable, available, and high performance connection to one or more Cloud Service Providers (CSPs). New strategies and partnerships are forming to address these challenges for wireless and wire-line carriers/MSOs as well as for newer players providing cloud connect solutions such as cloud exchanges.

Presentation Summaries & Take-Aways:
(1) During an early-bird session on the big SMB Technology Reset, IDC’s Ray Boggs noted that, on average, SMB outperformers (those citing net revenue gains in the past year) were 61% more likely than the average SMB to prefer cloud delivery over on-premises deployment for new IT solutions. Laggards (those citing net decreases in revenue over the past year) responded at almost the same rate as the average. With a cloud-first access/delivery model, SMBs need to revamp their WANs from the typical point-to-point private line/virtual private line model to one where all sites have high-speed, high-availability access to cloud compute and storage resources.

Ray added that those same SMB outperformers are much more likely than the average SMB to prioritize mobile support (BYOD, 3G/4G, WiFi) as a key 2015 spending priority. In particular, small business outperformers are 58% more likely, and mid-market outperformers 60% more likely, than the average SMB to have a solid mobile workforce strategy in place in 2015.

(2) In his morning keynote presentation on Tech Disruption and Data Center Transformation, IDC’s Rick Villars said that only 11% of WAN managers said they don’t need to change their networks to accommodate cloud services (likely because they weren’t planning to use them anytime soon). The remaining 89% of WAN managers are pursuing multiple options to realign their networks from the typical branch office-central site connectivity to more of a star topology where the majority of compute and storage services are delivered from one or more clouds. Some of the questions those managers were said to be concerned with were:

  • Where’s Your Data? Is it stored locally, cached, or in the cloud?
  • What’s In Your Service Catalog? For access by both internal lines of business and external customers/partners.
  • Is Your Network Congested? If so, how to alleviate it without too much over-provisioning?
  • 50% of new IT hardware will be bought as a “converged bundle” in 2018. New software defined models (OpenStack, Hyper Convergence, Software Containers, etc) will influence IT hardware purchases.
  • 58% of IT budget in 2016 will be for managed services.

(3) In a very intriguing presentation on Industry Clouds for line of business (LOB) to line of business communications, IDC’s Scott Lundstrom made these key points:

  • Numerous examples exist in life sciences, biotech, financial services, retail, manufacturing, government, healthcare, and energy
  • The number of Cloud Industry Platforms will expand to 500+ by 2016, generating over a billion dollars in IT spending
  • Industrial Data Lakes – Big Data on industry-specific platforms (e.g. GE, Merck, UHG)
  • Industry platforms will disrupt 1/3 of the Top 20 Market Leaders in most industries by 2018
  • Industry Cloud Participants include: existing enterprise suppliers, emerging cloud platform operators and networks, industry process and community specialists, services, software, and hardware vendors. Effectively, one Line of Business (LOB) to another LOB.
  • New joint ventures will emerge
  • Industry developer communities will gather and grow
  • End users becoming suppliers – Global 2000
  • LOB-2-LOB is the next B2B

Digital networks (LANs and WANs) are having a huge impact and disrupting business models:

  • Innovation Accelerators drive change in every industry
  • Connected products create new service opportunities
  • Improving the process with sensors and automation
  • Distribute intelligence and determine the next best action

(4) The spot-on highlight of this year’s IDC Directions for me was Courtney Munroe’s presentation, “The Future of Telecommunications Networking: Resurgence or Obsolescence?” With digital traffic and content continuing their exponential growth trajectory, and ARPUs flat or declining, both wireless and wireline telcos have an unsustainable business model. What steps they take to ensure their survivability depends on the market they’re addressing: wireless, residential broadband, enterprise wire-line, or cloud connect.

Consumer (wireless and residential broadband) market requirements for telcos:

  • Manage the mobile data storm (Courtney didn’t say how – data caps?)
  • Recognize that pure play connectivity/Internet access is dead. Instead, implement a multi-play strategy (Verizon, AT&T, and Comcast have certainly done that with their double and triple play bundles)
  • Create an Over The Top (OTT) strategy – either alone or with partner companies. An example is Vodafone partnering with Dropbox to deliver cloud based storage for smart phones.

Enterprise market requirements for telcos: develop a Cloud Hub Enterprise WAN. This is best illustrated in the chart below titled: Enterprise WAN Requirements vs Internet-based Cloud Connectivity

IDC - Slide 6 - Task at Hand
The Task at Hand: Developing the CSP Enterprise – slide courtesy of IDC

Instead of the plethora of connectivity choices business customers now use to interconnect their geographically dispersed locations (private lines, Ethernet virtual private lines/LANs, IP-MPLS VPNs, IPsec VPNs, etc.), Courtney suggested that all physical sites be cloud connected. The three choices today, depicted in the illustration below, are: public Internet, private line to a CSP POP, and something equivalent to Verizon’s Private IP (Verizon is one of several network operators with a cloud network solution).

IDC - Slide 8 - Cloud Connected Devices.
Cloud Connect Choices – slide courtesy of IDC

Among cloud networking solutions similar to Verizon’s Private IP: AT&T NetBond, Orange’s Business VPN Galerie, NTT Com’s Enterprise Cloud (for NTT’s private cloud service only), CenturyLink/Savvis IP-MPLS VPN, and specifications from the Metro Ethernet Forum on Carrier Ethernet for Cloud Service Delivery (although we don’t know of any network or cloud providers that have implemented it yet).

NFV was said to be “the new holy grail” for network operators, as they’d then be able to virtualize and automate service creation and delivery. NFV examples include: vCPE, vFirewall, vVPN (???), and vSet-Top Boxes. Courtney said that telcos might be able to save 25% on operational costs and provide cloud-based services. He identified AT&T, Telefonica, NTT/Virtela, and China Telecom as telcos that have announced NFV initiatives (Orange is also a leader in testing and deploying NFV at its San Francisco research center). AT&T was quoted as saying that by 2020, 70% of its network would be virtualized.

When it comes to global revenues and profits, the telco space is very concentrated with five major players: AT&T, NTT, Verizon, DT, and China Mobile as per the graphic below:

IDC - Slide 11 - Even more Connections
Global Earnings: Even more Consolidation – slide courtesy of IDC

 

Consolidation is expected to continue in 2015. IDC says that there were ~100 telecom M&A deals in 2014, worth $262B.

Mr. Munroe then presented six telco/MSO business models: mobile-first operator, integrated multi-national super carrier, broadband/content-first super carrier (mostly MSOs/cablecos), data center exchange/fiber-cloud interconnection, cloud communications provider, and cloud VPN. The Data Center Exchange/Cloud Connect/Cloud Exchange model is shown below:

IDC - Slide 16 - The Evolving Business Model
The Evolving Business Model Datacenter Exchange/Fiber Centric, slide courtesy of IDC

For Data Center Exchanges/ Fiber Centric players, Courtney named several companies: Level 3, Tata Communications and Allied Fiber (see ViodiTV interview with Allied Fiber’s Hunter Newby). For Cloud Exchange, he cited: Equinix, Interxion, and Zayo.


Sidebar: Cloud Exchanges and Cloud Connect Solutions

For several years we’ve heard about cloud exchanges for interconnecting multiple cloud providers, but haven’t seen much deployment yet. Hosting and co-location providers realize that space and power are becoming a commodity service, so they are beginning to offer higher value cloud exchange or cloud connect services to provide direct connectivity for their customers to global carriers, ISPs, Internet exchanges, content and CDN players, storage vendors and enterprise and ecosystem partners.
“Cloud Connect” solutions allow co-location providers to offer enterprise customers high bandwidth, low-latency cloud connections that bypass the public Internet for superior throughput, reliability, security, and economics. Cloud Connect services combine the economics and service velocity of public clouds with the performance, reliability, and security of private connections.

Months ago, when I asked a Comcast Business speaker how his company would provide cloud access to business customers, he said “Cloud Exchanges” without any hesitation. We think this area deserves close watching in the months ahead.


Summing up with essential guidance for the telco/MSO space:

  • Large Scale Super Carriers will strive for additional scale
  • Cloud Exchanges will expand to Emerging Markets
  • Cloud Platform Providers will become major Players
  • SDN/NFV will create long-term investment opportunities
  • CSPs need help developing Channels (vertical solutions, IoT developers, VARs/OEMs/Systems Integrators, and OTT players)
  • Developers will become important CSP Partners (we think that will be especially true for OTT and IoT solutions)

It will be very interesting to see how all this plays out as the move to the 3rd platform accelerates in the years ahead.


FCC AWS-3 Auction Ends; Raises Record $44.9 Billion!

The Federal Communications Commission (FCC) has just closed its auction of AWS-3 wireless spectrum licenses, raising a record $44.9 billion in the process. The Wall Street Journal reports (online subscription required) that this is the largest amount of money the FCC has ever collected from a spectrum auction, more than double what was raised in 2008 during the much publicized 700MHz auction.

[Reference: Has the 700 MHz Auction Been a Failure?]

The auction comprised more than 1,600 different licenses. The FCC said it will announce the auction results within the next few business days.

The AWS-3 spectrum covers frequencies in the 1700MHz and 2100MHz blocks, but it does not overlap with the AWS-1 spectrum that a number of carriers (most notably T-Mobile and Verizon) already use. AWS-3 spectrum is good at carrying large amounts of data and is well suited for cities, where wireless data use is soaring. Winners of the auction will likely use the new spectrum to bolster their existing wireless networks with greater capacity (except for Dish Network, which has been accumulating spectrum but hasn’t yet deployed a wireless network).

Seventy companies participated in this AWS auction, including Verizon, AT&T, T-Mobile, and Dish, but the FCC hasn’t yet released how much each company bid. The auction’s aggressive bidding surprised analysts who thought it would be a quiet affair dominated by AT&T and Verizon. Anonymous results show multiple bidders fought hard for coveted licenses in markets like New York and Los Angeles, which commanded the largest sums. As of the auction close, the four main licenses for the New York region alone totaled about $6.2 billion.

Analysts estimate the bulk of the auction proceeds came from AT&T and Verizon, each of which might have spent $15 billion to $20 billion on bids. It’s possible that both carriers bid around each other since the AWS-3 band plan made it possible for two carriers to land 20 MHz of spectrum. Other major bidders likely were T-Mobile and Dish Network.

The paired blocks have earned the largest bids in many markets. The J Block license for New York City alone has pulled in nearly $3 billion.

The aggressive bidding highlights the enormous scale needed to compete in the U.S. wireless market, a reality that makes it hard for rivals to challenge the market’s leaders. AT&T and Verizon control most of the industry’s most lucrative customers and the bulk of its revenue and profits, which gives them enormous financial firepower in such auctions.

While big markets like New York, Los Angeles and Chicago drew the highest bids, smaller markets including Portland, Maine, and Louisville, Ky., received bids over $20 million. One license in American Samoa commanded the lowest bid, at $2,800.


On Wednesday, January 28, 2015, the FCC said Blocks G, H, I and J garnered no bids or withdrawals during Round 337. Since no proactive activity waivers had been placed and the reserve price had already been met, the Commission closed bidding on those blocks: G Block (1755-1760/2155-2160 MHz), H Block (1760-1765/2160-2165 MHz), I Block (1765-1770/2165-2170 MHz), and J Block (1770-1780/2170-2180 MHz). H, I and J are broken up into 176 licenses each based on economic areas (EAs), while G is broken up into 734 licenses based on cellular market areas (CMAs).
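As a rough illustration of the block plan above, the paired bandwidth and license counts can be tabulated directly from the figures quoted (a hypothetical sketch; the block ranges and license counts come from the FCC description, everything else is illustrative):

```python
# AWS-3 block plan as described above: (uplink MHz range, downlink MHz range,
# number of licenses in the block).
blocks = {
    "G": ((1755, 1760), (2155, 2160), 734),  # 734 CMA-based licenses
    "H": ((1760, 1765), (2160, 2165), 176),  # 176 EA-based licenses
    "I": ((1765, 1770), (2165, 2170), 176),
    "J": ((1770, 1780), (2170, 2180), 176),
}

for name, (up, down, licenses) in blocks.items():
    # Paired bandwidth = uplink width + downlink width (J is 10 + 10 = 20 MHz).
    paired = (up[1] - up[0]) + (down[1] - down[0])
    print(f"Block {name}: {paired} MHz paired, {licenses} licenses")
```

The arithmetic makes plain why two carriers could each land 20 MHz: the J block alone pairs 10 MHz up with 10 MHz down.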

Security is Biggest Issue for U.S. Infrastructure, Cloud Computing, Open Networking, and the Internet of Things

The Security Threat is Real and Increasing!

“At around 8:15am the Monday before Thanksgiving, that black screen of death came on (all the office PCs). They shut down the entire network. We couldn’t really work the rest of the week, which seemed OK because it was a holiday week. But as Tuesday and Wednesday progressed, it became clear that this wasn’t a simple hack….It wasn’t until Monday or Tuesday of the following week when we realized the extent of it. That’s when we got word that it might take weeks to get (our PCs and Data Centers) back up.”

Those are the words of an employee of Sony Pictures Entertainment who spoke to Fortune magazine.

As is now common knowledge, Sony Pictures Entertainment revealed that it had been hacked by a group calling itself the Guardians of Peace, which the FBI claims was an agent of North Korea. Apparently, that repressive Communist country was using cyber-terrorism in an attempt to suppress free speech in the United States.

Few remember that between April and May 2011, Sony Computer Entertainment’s online gaming service, PlayStation Network, and its streaming media service (Qriocity), along with Sony Online Entertainment (the company’s in-house game developer and publisher), were hacked by LulzSec – a splinter group of the hacker collective known as Anonymous.

The latest Sony cyberattack comes after many years where China’s government has been accused of hacking into U.S. State Department, Postal Service, military contractors and government agency computer networks.

Iran has tried to disrupt American banks with denial-of-service attacks, and conducted a destructive attack on a Saudi oil company’s computers in 2012. For years, organized crime groups in Russia have used cyberespionage to commit financial fraud, while the Russian government does nothing to stop it.

Expect to hear of more of our government networks infiltrated by rogue foreign states. A Georgia Institute of Technology report on Emerging Cyber Threats in 2015 states, “Low-intensity online nation-state conflicts become the rule, not the exception.”

It’s not only Sony and the U.S. government being targeted. Let’s not forget the cyber attacks and data breaches at Target, JP Morgan Chase, Home Depot, Apple, eBay, P.F. Chang’s (restaurants), Domino’s Pizza, the Montana Health Department, Google, etc.

Reports, Maps, and Expert Opinions:

In its most recent State of the Internet Security report, Akamai states that there was a record-setting number of DDoS (Distributed Denial of Service) attacks on websites in Q3 2014. Total DDoS attacks increased 22% over Q2 2014, while average peak bandwidth rose 80% quarter-over-quarter and 389% from the same period a year earlier (Q3 2013). That means the largest companies with the highest bandwidth websites are being targeted by hackers.

Kaspersky’s real-time cyberthreat map.

This terrific interactive cyber map from anti-virus software maker Kaspersky depicts the cyber attacks occurring around the world in real time. It clearly shows the growing intensity of hack attacks.

“Security will never be the same again. It’s a losing battle,” said Martin Casado, PhD, during his Cloud Innovation Summit keynote speech on March 27, 2014. “Currently, cyber security spend is outpacing IT spend, and the only thing outpacing security spend is security losses,” he added.

A recent survey by the Ponemon Institute indicated the average cost of cyber crime for U.S. retail stores more than doubled from 2013 to an annual average of $8.6 million per company in 2014. The annual average cost per company of successful cyber attacks increased to $20.8 million in financial services, $14.5 million in the technology sector, and $12.7 million in communications industries.

Clearly, this isn’t an issue of investment, innovation, or priorities as most large industries are built around security. Mr. Casado believes there is a fundamental architectural issue: that we must trade off between context and isolation when implementing security controls.

Security Top Concern for Cloud Computing and Open Networking:

With today’s huge “cloud” resident data centers (Google, Amazon, Facebook, Microsoft, Yahoo, etc), there is a very large potential “attack surface” or “threat footprint” for malware and other cyber threats. That’s still the number one concern of users who are considering cloud computing.

In a Dec 17, 2014 article, KPMG says “Data Security Still Top Cloud Concern.” However, theft of intellectual property (IP) is the most significant challenge IT executives face in doing business in the cloud. Isn’t theft of IP a security issue too?

The mega trend of replacing hardware functions with software (known as open networking, software defined networking, network virtualization, and network function virtualization) greatly compounds the security problem by exponentially expanding the threat attack surface.

For example, if the (centralized) SDN Controller goes down because of a DDoS attack, the entire network goes down. If multiple NFV “virtual appliances” are implemented on a compute server that has been compromised, all those functions stop working. Similarly, if a server running network virtualization (or tunneling in the overlay SDN model) is attacked, that network goes down too.
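The expanded failure domain can be illustrated with a toy model (entirely hypothetical host and function names, not from any vendor design): with dedicated appliances, compromising one box takes down one function, while on a shared NFV server every co-resident virtual appliance is lost at once.

```python
# One physical appliance per function: a compromised box affects one function.
dedicated = {"firewall": "box1", "load_balancer": "box2", "dpi": "box3"}

# NFV consolidation: several virtual appliances share one compute server.
nfv = {"firewall": "server1", "load_balancer": "server1", "dpi": "server1"}

def affected(deployment, compromised_host):
    """Return the network functions lost when a given host is compromised."""
    return [fn for fn, host in deployment.items() if host == compromised_host]

print(affected(dedicated, "box1"))  # only the firewall is lost
print(affected(nfv, "server1"))     # every co-resident function is lost at once
```

The same reasoning applies to the centralized SDN controller: it is a single host whose compromise affects every switch it controls.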

U.S. Infrastructure May Be Targeted Next:

Information security experts say the greatest danger is that foreign governments and cyber terrorists will go after the nation’s critical infrastructure — airports, water treatment plants, power companies, oil refineries and chemical plants.

Cyber terrorists could turn off the lights for millions of Americans by attacking power grids, shut down the nation’s airports by seizing control of air-traffic control systems or blow up an oil pipeline from thousands of miles away, experts say.

“This is a much bigger threat over time than losing some credit cards to cyber-criminals,” said Derek Harp, lead instructor at the recent training conference run by SANS Institute, which provides cyber security education and certification for people who run industrial control systems.

Maryland Rep. Dutch Ruppersberger, the senior Democrat on the House Intelligence Committee, said cyber attacks will be “the warfare of the future.”

“Just think what could happen down the future if North Korea wanted to knock out a grid system, an energy system, knock out air- traffic control,” he said in a December 22nd interview on CNN.

What Will U.S. Government Do in Response?

At a news conference last week, President Obama urged Congress to try again next year to pass

“strong cybersecurity laws that allow for information-sharing. … Because if we don’t put in place the kind of architecture that can prevent these attacks from taking place, this is not just going to be affecting movies, this is going to be affecting our entire economy.”

A front page article in the December 26th Wall Street Journal reported “that (U.S. government) officials have held a series of briefings on the issue in 13 cities across the country advising companies not to connect industrial control systems to the Internet.” The article does not state or imply what those systems should be connected to instead.

Finally, we infer that the highly touted Internet of Things will be subject to the same cloud security issues as industrial control systems. I shudder at the thought.


Highlights of Open Server Summit: Nov 11-13, 2014 in Santa Clara, CA

Executive Summary:

The big box buzz today is “Software Defined Everything” – from compute servers to networks, storage, and data centers. That’s according to the speakers at the 2014 Open Server Summit held last month in Santa Clara, CA. The implication is that a lot of existing hardware based functionality will be implemented as software which runs on compute servers and on network switch/routers built using commodity hardware.

If indeed that’s the case, the big losers will be the server vendors: HP, Dell, Lenovo (which now owns IBM’s x86 server business), IBM (which still sells Power 8 based servers), Oracle (Sun Micro/SPARC based servers), and Cisco (UCS C-Series Rack Servers). Traditional switch/router vendors like Cisco and Juniper will sell a lot less of their high margin networking gear.

The winners will be the Chinese/Taiwanese ODMs that make servers and “bare metal switches.” Semiconductor companies making processors and SoCs that will be used for commodity servers and bare metal switches will also fare well. That’s largely Intel (high-end processors for servers), but other semiconductor companies are entering the market – mostly with SoCs based on ARM cores (see section below).  Broadcom seems to be very well positioned with its switch/router silicon for all sorts of network equipment.

There have been notable advancements in materials technology: optical connectors, multicore processors, and denser modules. Today, those are built into equipment used in (premises and cloud based) data centers which handle high-performance and mission-critical workloads.

Highlights and Takeaways:

  • Challenges in scaling the Data Center include: dynamic provisioning, migration to Virtual Machines (VMs), dynamic assignment of workloads to VMs, network management, software defined storage & networking (via overlays/virtualization or SDN centralized controller/data forwarding engines).
  • In Microsoft’s Azure public cloud offering, cloud services support hyper-scale workloads alongside enterprise workloads (e.g., Microsoft SQL Server, Microsoft Exchange, etc.). Many of the key concepts driving this infrastructure and the technology within it were developed by Microsoft’s Research group. Variable workloads are inevitable in this kind of environment, which is why scalable infrastructure is so important to providing elastic computing capabilities.
  • Cloud workloads exhibit increasing diversity and scale, according to Microsoft. Software redundancy, with multiple copies of data stored in different machines, is a requirement for such cloud workloads. If a server fails, it’s essential to move the workload to a different server via load balancing. That’s all about software scheduling and data replication in the cloud.
  • Cloud workloads are different from those running on premises based servers/data centers, says Microsoft.  Data is mostly read and processed with only the results stored in main memory or disk. Data retention is only for a few days to one month. A distributed file system is needed for cloud storage.
  • Some of the promising new technologies mentioned were said to meet the demands of cloud resident, software defined data centers:
    • Dis-aggregated building-blocks, tied together by high-speed fabrics and high-speed switches. A great example of that is the work being done by the Open Compute project.
    • Optical (light-driven) connectors linking high-speed processing with high-speed storage. It remains to be seen whether silicon photonics will be used to implement such optical interconnects.
    • Hyper-converged systems, combining servers, storage and networking for faster performance. [This is a trend, but hasn’t occurred on a large-scale yet.]
    • “Flat networks” linking sections of the compute and storage fabrics within the data center. [That has yet to happen. There are still two networks in the data center: Ethernet for compute servers/routers and Fibre Channel for storage equipment.]
    • More comprehensive and better, software management – for policy-enforcement, orchestration and automation.
  • Raejeanne Skillern, General Manager of Intel’s Cloud Service Provider business, spoke about Software Defined Infrastructure and the way it is working to provide virtualized pools of compute, storage and networking resources. This is being done to ensure that data services will scale, as needed, based on user demand for those services. Dynamic orchestration of processing and a high degree of automation are both key enablers of this kind of software-defined infrastructure in next-generation data centers.
  • Intel makes ~$14 billion in annual revenue from high-end processors used in servers and is predicting 15% compounded annual growth for several years to come, according to Barron’s magazine.
  • While Intel holds a dominant share of processors used in servers, ARM Ltd is starting to gain market share. ARM “partners” which have licensed its cores for server and networking applications include AMD, Broadcom, Qualcomm, Cavium and Applied Micro.
  • In addition to Intel and ARM based silicon, there is also an effort around IBM’s POWER CPU via the OpenPOWER Foundation. IBM has opened up technology surrounding its Power Architecture such as processor specifications, firmware and software. It is offering this technology on a liberal license and they will be using a collaborative development model with their partners in the Foundation. The goal is to enable the server vendor ecosystem to build their own customized server, networking and storage hardware for future data centers and cloud computing. Processors based on IBM’s IP can now be fabricated on any foundry and mixed with other hardware products of the integrator’s choice.
  • Jian Li, Data Center Architect at Huawei, talked about “Developing the High Throughput Data Center.” He described a High Throughput Computing Data Center (HTC-DC) that supports higher throughput, better resource utilization, greater manageability and more efficient use of power. “Big Data” workloads demand this kind of infrastructure, which is why software-defined networks (SDNs) are so important in leveraging the resources already inside the data center to achieve workload scalability. The problem is that there are “N” versions of SDN to choose from.
  • Steve Garrison, VP of Marketing for PICA8, talked about “White Box Switches and Integrating Open Flow (the protocol/API used between the Control plane and Data plane in classical SDN).” Steve said that SDN delivers a policy driven framework which drives down operational costs and enables “business logic” to be included in the network. The concept is to tailor the packet header and select what the business needs for each networking application.
  • “SDN should be about driving business logic into the network, so that it doesn’t constrain the business,” Garrison said. “Business logic requires rethinking the (protocol) stack,” he added. Steve believes that SDN use cases that deliver real business benefits will drive revenue starting in 2015.
  • The Data Center Battleground, according to PICA8, is depicted in the figure below. The company provides a network operating system (PicOS) that is loaded onto bare metal switches (often referred to as “white boxes,” though established server vendors like Dell are now also making them). Three types of ports are supported by PicOS: conventional L2/L3, Open Flow (policy based traffic flows), and CrossFlow (Open Flow for policy rules combined with L2/L3 for frame/packet transport).
Why the data center is ripe for SDN and NFV, slide from Steve Garrison’s Open Server Summit presentation. Image courtesy of Pica8

Note: PICA8 has headquarters in Palo Alto, CA, but its R&D effort is in Beijing, China. They appear to be in competition with Cumulus Networks which also makes a network OS for bare metal switches.

  • Ron DiGiuseppe, Sr. Marketing Manager at Synopsys, suggested that network overlays were the best approach to SDN (rather than classical SDN with Open Flow). VxLAN was said to be the most promising L2 tunneling mechanism to move VM traffic to any server within the Data Center, cloud or large campus network. Such an “overlay network” carries data (MAC frames) from individual VMs in an encapsulated format over a logical tunnel. It effectively extends L2 subnetworks across L3 networks while overcoming the limits of conventional (IEEE 802.1d spanning tree) MAC bridging. The target applications are intra data center communications and cloud networking where there are as many as 4M (or more) VMs.

Note: Synopsys has developed a 10 Gigabit Ethernet (GE) Controller that incorporates VxLAN tunneling. That and related technologies (40GE MAC, 40GE Physical Coding Sublayer (PCS), and 12GE PHY) are generally licensed as off-the-shelf products. However, Synopsys is willing to work with large volume customers to make custom modifications to those IP cores. Ron said that 10GE implementations with VxLAN are becoming quite popular. Synopsys IP was said to respond to the needs of Data Center SoCs by providing low latency, low power consumption, and advanced protocols/features along with Reliability-Availability-Serviceability (RAS).
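To make the encapsulation concrete, here is a minimal sketch of building the 8-byte VxLAN header defined in RFC 7348 (the I flag set in the first byte, a 24-bit VNI, the remaining bits reserved). The function names and the placeholder frame are ours; in a real deployment the encapsulated frame then rides inside UDP/IP between tunnel endpoints.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VxLAN header (RFC 7348): flags byte with the I bit
    set (0x08), 24 reserved bits, a 24-bit VNI, then 8 more reserved bits."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit field (~16M virtual networks)"
    return struct.pack("!I", 0x08 << 24) + struct.pack("!I", vni << 8)

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the VxLAN header to an inner L2 (MAC) frame."""
    return vxlan_header(vni) + inner_frame

frame = b"\x00" * 14           # placeholder 14-byte Ethernet header
pkt = encapsulate(5000, frame)
print(len(pkt))                # 8-byte VxLAN header + 14-byte frame = 22
```

The 24-bit VNI is what lifts the overlay past the 4,094-VLAN limit of plain IEEE 802.1Q tagging, allowing millions of isolated L2 segments.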

Main Messages:

Another important take-away is that processors with licensed ARM cores are making inroads in the server market. AMD has until now been strictly an “x86” processor shop. Yet in the Server Roadmaps session at this conference, the AMD representative said:

“ARM 64-bit processors will disrupt the server market, primarily through cost and power. The only way to differentiate ARM based SoCs for servers is through accelerators. ARM’s open SoC ecosystem makes it easy to integrate accelerators onto the chip.”

Business models are changing and ARM based SoCs may take a larger market share of the server market which would diminish Intel’s dominance of that business. That together with the relentless push for Software Defined Everything, running on commodity hardware, were the key messages of the 2014 Open Server Summit.

SDN and NFV Takeaways from Light Reading's Network Components conference in Santa Clara

Introduction:

For years, we’ve been reading and hearing about the never-ending boom in data traffic, the need for fast provisioning, agile networks, service velocity (quicker time to market for new services) for telcos, etc.  It’s been like a non-stop siren call to battle for network operators.  Yet little has been done to date to remedy the situation.

At the Nov 6, 2014 Light Reading Next Gen Network Components conference in Santa Clara, CA, Heavy Reading analyst Simon Stanley echoed a familiar solution. He said that SDN and NFV will permit network equipment vendors and telecom carriers/ISPs to keep up with rising traffic demand, reduce OPEX, and create more flexible networks.

Standard platforms form the foundation for network hardware, Simon maintains. “Above that you’ve got a virtualization layer that essentially applies virtual resources,” Stanley said. “Instead of accessing real compute, storage and networking, these resources have become virtualized+. That gives you significant flexibility,” he added.

+ Note: The above remark assumes that the version of SDN chosen is the “overlay model,” which virtualizes or overlays the physical network.  The “classical version of SDN,” adopted by the Open Networking Foundation (ONF), doesn’t do that at all. Instead, it proposes a centralized SDN Controller which implements the Control Plane (i.e. calculates end to end paths) for many Data planes which are often called “data/packet/frame forwarding engines” that run on commodity hardware called “bare metal switches.” There is no overlay or network virtualization in classical SDN.
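A toy model of the classical (ONF) split described in the note above, with made-up class and port names: the centralized controller computes end-to-end paths and installs match/action entries into forwarding engines, which do nothing but look up their flow tables.

```python
class ForwardingEngine:
    """A bare metal switch: no control logic, just a flow table."""
    def __init__(self):
        self.flow_table = {}                 # match (dst) -> action (out port)

    def install_flow(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # In real OpenFlow, an unmatched packet is punted to the controller.
        return self.flow_table.get(dst, "punt-to-controller")

class Controller:
    """Centralized control plane serving many data-plane engines."""
    def __init__(self, engines):
        self.engines = engines

    def provision_path(self, dst, hops):
        # Install one flow entry per hop along the computed end-to-end path.
        for engine, port in hops:
            engine.install_flow(dst, port)

sw1, sw2 = ForwardingEngine(), ForwardingEngine()
ctrl = Controller([sw1, sw2])
ctrl.provision_path("10.0.0.5", [(sw1, "port2"), (sw2, "port7")])
print(sw1.forward("10.0.0.5"))   # port2
print(sw2.forward("10.0.0.9"))   # punt-to-controller
```

The sketch also makes the earlier security point visible: every switch depends on one controller, so losing it freezes path computation network-wide.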


A representative from Advantech said that on 100 Gbps ports, virtual network functions do not scale well on commodity hardware, which results in cost inefficiencies with waste of data center or central office space and excessive energy consumption.  As no single server can handle the millions of flows embedded in a single 100GbE pipe efficiently, distributing network traffic will become an issue in a virtual infrastructure.

At the conference, Advantech announced a new 100GigE hub blade, which switches traffic between two external 100GigE CFP2 ports, up to eighteen external 10GigE SFP+ ports and twelve 40G node slots on an ATCA backplane.

An Expert’s View of SDN and NFV:

Here’s how Orange’s Christos Kolias, PhD defines SDN and NFV:

  • SDN: Abstraction of the control plane from the data plane. Key benefits are: separates control from data forwarding functionality, Network Programmability, Network Virtualization, and Intelligent Flow Management.
  • NFV: Abstraction of network functions from (dedicated) hardware. Key benefits are: elasticity, agility, scalability, versatility, and savings on CAPEX/OPEX. The NFV Concept, according to Christos, is illustrated in the figure below.

 

The NFV Concept, from Christos Kolias’ presentation. Click to enlarge.

NFV Management and Orchestration:

“NFV is all about how you can manage and orchestrate all these new virtualized appliances,” Kolias said at the conference. Yet the ETSI NFV Industry Specifications Group (ISG), which Kolias co-founded and participates in, hasn’t yet specified what that management and orchestration should be or the APIs that interface to such software entities. Christos says, “It is important for the NFV community to agree on some Management & Orchestration (MANO) specification with emphasis on the interfaces and the APIs.” Christos thinks “OpenStack is a good open-source alternative for the MANO.”


SIDEBAR:  In a presentation at an IETF meeting, Mehmet Ersue, ETSI NFV MANO WG Co-chair, provided the following examples of Virtual Network Functions (VNFs) that might require MANO:

  • Switching: Broadband Network Gateway (BNG), Carrier Grade-Network Address Translation (NAT), IP routers.
  • Mobile network nodes: HLR/HSS, MME, SGSN, GGSN/PDN-Gateway, RNC.
  • Home routers and set top boxes.
  • Tunneling gateway elements (e.g. VxLAN).
  • Traffic analysis: Deep Packet Inspection (DPI).
  • Signalling: Session Border Controllers (SBCs), IP Multimedia Subsystem (IMS).
  • Network-wide functions: AAA servers, Policy control.
  • Application-level optimization: CDNs, Load Balancers.
  • Security functions: Firewalls, intrusion detection systems

Importance of NFV Service Chaining:

Many industry analysts think that service chaining* (the scheduling of multiple virtual appliance based services) is the key to automating an NFV based network. How will that be achieved? So far, there’s no standard or specification for that functionality. Christos doesn’t think OpenStack is a solution for that. What is?

* Note: Christos prefers the term “service composition and insertion” to service chaining, which seems to be a more accurate description.
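At its simplest, a service chain can be sketched as an ordered composition of functions applied to each packet (a deliberately naive illustration with invented function names; real MANO must also place, scale, and monitor each VNF):

```python
# Each toy VNF tags the packet (a dict) to record that it processed it.
def firewall(pkt):
    pkt.setdefault("tags", []).append("firewall")
    return pkt

def dpi(pkt):
    pkt["tags"].append("dpi")
    return pkt

def load_balancer(pkt):
    pkt["tags"].append("lb")
    return pkt

def apply_chain(pkt, chain):
    """Apply each VNF in order; an orchestrator would schedule these across
    servers, but logically a chain is just function composition."""
    for vnf in chain:
        pkt = vnf(pkt)
    return pkt

result = apply_chain({"dst": "10.0.0.5"}, [firewall, dpi, load_balancer])
print(result["tags"])   # ['firewall', 'dpi', 'lb']
```

Christos’ preferred term, “service composition and insertion,” fits this picture well: the hard problems are choosing the composition per flow and inserting the right functions at the right points.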

What Type of Special Hardware is Needed for NFV?

Another key question is what new or different hardware is needed for NFV “virtual appliances,” which will run on generic compute servers (likely built by ODMs) with commoditized hardware. At the conference, Christos told me he thinks some type of hardware assist will be necessary, but he didn’t specify what functions would be implemented in hardware or where they might be located. That was later clarified in an email discussion, which is summarized below.

Several industry participants (e.g. Microsoft) think the compute server’s NIC(s) should be augmented to include hardware assist functions like protocol encapsulation/de-encapsulation, encoding/decoding, deep packet inspection, protocol conversion (if necessary), and some security related functions.  Christos says these should not be too “compute intensive.”

“Off-loading processing to the NIC cards could include things related to packet processing (encapsulation, encoding/decoding and may be some security-related functions) – in general not compute intensive,” he added.

Kolias believes that “Data (packet forwarding) plane acceleration could be handled by hardware acceleration,” although exactly what that hardware actually does remains to be seen. It’s important to note that the Data plane is NOT implemented in a compute server for SDN, but rather in a “bare metal switch” built from commodity hardware/SoCs and other off-the-shelf silicon.

The drivers, challenges, and potential applications for NFV are illustrated below (from Kolias’ presentation):

NFV drivers, challenges, and potential applications, from Kolias’ presentation. Click to enlarge.

In addition to the challenges listed in the above figure, another huge concern for NFV implementations will be security. When individual physical box appliances are replaced by virtual appliances running on a compute server, the attack surface for threats and malware increases exponentially. How will that be dealt with, and how will security functions be partitioned between software and hardware? No one seems to be worried about this now, despite increased cyber attacks in recent months and years.

The Myth of NFV Compliance:

This author and Christos wonder how ANY vendor can claim to be “NFV compliant” when there are no NFV standards/specifications in place and no testing/interoperability facility to provide certification of compliance. Yet those false claims have been the norm for over two years!

This author believes that without solid NFV standards/specifications AND multiple vendors passing some certification/compliance test there will be no interoperability, which defeats the purpose of all the work the ETSI NFV ISG has done to date in producing architecture reference models, functional requirements, proof of concepts, etc.

An Open Platform for NFV:

Perhaps a step in the right direction is the formation of the Open Platform for NFV (OPNFV), a Linux Foundation collaborative project. Here are the stated OPNFV Project Goals:

  • Develop an integrated and tested open source platform that can be used to build NFV functionality, accelerating the introduction of new products and services.
  • Include participation of leading end users to validate that OPNFV meets the needs of the user community.
  • Contribute to and participate in relevant open source projects that will be leveraged in the OPNFV platform; ensure consistency, performance and interoperability among open source components.
  • Establish an ecosystem for NFV solutions based on open standards and software.
  • Promote OPNFV as the preferred open reference platform.

We’ll be watching this industry initiative closely and reveal what we learn in subsequent articles covering NFV.

Yet we wonder where innovation will come from if the new network paradigm is to use open source software running on commoditized/open hardware. Where’s the value add or competitive differentiation between vendors and their “open” SDN/NFV products?

Intersection of SDN and NFV: 

The figure below depicts how Christos believes SDN and NFV might work together to achieve maximum benefit for the network operator and its customers.

How SDN and NFV might work together, from Kolias’ presentation. Click to enlarge.

In summary, Christos says that both NFV and SDN enable the “softwarization” of the network. Like so many others, he says that software is king and eats everything else. Acknowledging security threats, he cautions: “beware of bugs and hackers!”