Making Sense of the Telco Cloud

In recent years the cloudification of communication networks, or "telco cloud", has become a byword for telecom modernisation. This Telecoms.com Intelligence Monthly Briefing aims to analyse what telcos' transition to cloud means for the stakeholders in the telecom and cloud ecosystems. Before exploring the nooks and crannies of telco cloud, however, it is worthwhile first taking an elevated view of cloud native in general. On the one hand, telco cloud is a subset of the overall cloud native landscape; on the other, telco cloud almost sounds like an oxymoron. Telecom operators' monolithic networks and cloud architecture are often seen as two different species, but such impressions are wrong.

(Here we are sharing the opening section of this Telecoms.com Intelligence special briefing, looking into how telco cloud is changing both the industry landscape and operator strategies.

The full version of the report is available for free to download here.)

What cloud native is, and why we need it

"Cloud native" has been a buzzword for a couple of years, though often, as with many other buzzwords, different people mean different things when they use the same term. As the authors of a recently published Microsoft ebook quipped, ask ten colleagues to define cloud native, and there's a good chance you'll get eight different answers. (Rob Vettor, Steve "ardalis" Smith: Architecting Cloud Native .NET Applications for Azure, preview edition, April 2020)

Here are a couple of "cloud native" definitions that more or less agree with each other, though with different emphases.

The Cloud Native Computing Foundation (CNCF), an industry organisation with over 500 member organisations from different sectors of the industry, defines cloud native as “computing (that) uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization.”

Gabriel Brown, an analyst from Heavy Reading, has a largely similar definition for cloud native, though he puts it more succinctly. For him, cloud native means “containerized micro-services deployed on bare metal and managed by Kubernetes”, the de facto standard of container management.
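To make "containerised microservices" concrete, the sketch below stands up a toy microservice: a single HTTP health endpoint of the kind a Kubernetes liveness probe would poll once the service is packaged into a container. It is purely illustrative; the endpoint path and response shape are invented for this example, not taken from any particular product.

```python
# Toy "microservice": one HTTP health endpoint, the minimal unit that
# would be containerised and managed by an orchestrator like Kubernetes.
# Endpoint name and payload are hypothetical, for illustration only.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the example's output quiet

# Port 0 asks the OS for any free port; a real deployment would fix this
# in the container spec and point the orchestrator's probe at it.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/healthz"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)
server.shutdown()
print(payload)  # {'status': 'ok'}
```

In a real cloud-native stack this process would be built into a container image, and Kubernetes would restart or reschedule it whenever the probe stops answering.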

Although cloud native has a strong inclination towards containers, or containerised services, it is not just about containers. An important element of cloud native computing is its deployment mode using DevOps. This is duly stressed by Omdia, a research firm, which describes cloud native thus: "the first foundation is to use agile methodologies in development, building on this with DevOps adoption across IT and, ideally, in the organization as well, and using microservices software architecture, with deployment on the cloud (wherever it is, on-premises or public)."

Some would argue the continuous nature of DevOps is as important to cloud native as the infrastructure and containerised services. Red Hat, an IBM subsidiary and one of the leading cloud native vendors and champions of DevOps practices, sees cloud native in a number of common themes, including "heavily virtualized, software-defined, highly resilient infrastructure, allowing telcos to add services more quickly and centrally manage their resources."

These themes are aligned with the understanding of cloud native by Telecoms.com Intelligence, and this report will discuss cloud native and telco cloud along these lines. (A full Q&A with Azhar Sayeed, Chief Architect, Service Provider at Red Hat can be found at the end of this report).

The main benefits of cloud native computing are speed, agility, and scalability. As CNCF spells it out, “cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”
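The "declarative APIs" the CNCF mentions are worth unpacking, since they underpin the resilience claim: the operator states desired state, and a control loop continually converges actual state towards it. The sketch below shows that reconciliation pattern in miniature; the application names and action tuples are invented for illustration and do not correspond to any real platform's API.

```python
# Sketch of the declarative, reconciliation-style model cloud-native
# platforms use: compare desired state with observed state and emit the
# actions needed to converge. All names here are illustrative only.

def reconcile(desired: dict, actual: dict) -> list:
    """Return actions that move `actual` (app -> replica count)
    towards `desired`."""
    actions = []
    for app, want in desired.items():
        have = actual.get(app, 0)
        if have < want:
            actions.append(("scale-up", app, want - have))
        elif have > want:
            actions.append(("scale-down", app, have - want))
    for app, have in actual.items():
        if app not in desired:
            actions.append(("delete", app, have))
    return actions

# Hypothetical telco workloads: one under-replicated app, one missing
# app, and one stray app that is no longer declared anywhere.
desired = {"bss-frontend": 3, "oss-mediation": 2}
actual = {"bss-frontend": 1, "legacy-probe": 1}
print(reconcile(desired, actual))
```

Because the loop runs continually, the same mechanism that deploys an application also heals it after a node failure, which is what makes "high-impact changes frequently and predictably" feasible.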

Adapting such thinking to the telecom industry, the gains from migrating to cloud native are primarily a reflection of, and driven by, the increasing convergence between network and IT domains. The first candidate domain that cloud technology can vastly improve on, and to a certain degree replace the heavy infrastructure of, is the support for the telcos' own IT systems, including the network-facing Operational Support Systems and customer-facing Business Support Systems (OSS and BSS).

But the IT cloud alone is far from the only benefit telcos can draw from the migration to cloud native. The rest of this report will discuss how telcos can and do embark on the journey to cloud native, as a means to deliver true business benefits through improved speed, agility, and scalability to their own networks and their customers.

The rest of the report includes these sections:

  • The many stratifications of telco cloud
  • Clouds gathering on telcos
  • What we can expect to see on the telco cloud skyline
  • Telco cloud openness leads to agility and savings — Q&A with Azhar Sayeed, Chief Architect, Service Provider, Red Hat
  • Additional Resources

The full version of the report is available for free to download here.

Microsoft doubles down on the telco cloud with Metaswitch acquisition

Don’t say you weren’t warned, telecoms industry. The tech big guns are trained on your home turf and they’re not afraid to splash the cash.

Less than two months ago Microsoft bought into NFV by acquiring Affirmed Networks. Now it has doubled down on that investment with the acquisition of Metaswitch Networks, which is also all about the virtual network, for an undisclosed sum.

“This announcement builds on our recent acquisition of Affirmed Networks, which closed on April 23, 2020,” explained the Microsoft blog on the matter. “Metaswitch’s complementary portfolio of ultra-high-performance, cloud-native communications software will expand our range of offerings available for the telecommunications industry. Microsoft intends to leverage the talent and technology of these two organizations, extending the Azure platform to both deploy and grow these capabilities at scale in a way that is secure, efficient and creates a sustainable ecosystem.

“As the industry moves to 5G, operators will have opportunities to advance the virtualization of their core networks and move forward on a path to an increasingly cloud-native future. Microsoft will continue to meet customers where they are, working together with the industry as operators and network equipment providers evolve their own operations.”

So it seems clear that Microsoft is pretty serious about the telco cloud. It already has some of the best cloud infrastructure in the world and it’s rapidly adding the software required to make it telecoms-friendly. Metaswitch is small, so this seems to be as much about talent as products. Either way Microsoft is rapidly building a telco cloud capability that specialist vendors can only dream about.

Ericsson under pressure to sell Iconectiv operations – report

Ericsson is reportedly under pressure from activist investors to sell OSS/BSS business unit Iconectiv, a deal which could be worth more than $1.5 billion.

According to Bloomberg, activist investor and Ericsson’s largest shareholder Cevian Capital is kicking up a fuss. Several new ideas have been presented to the management team, as well as demands to sell Iconectiv, a business unit which provides solutions for network and operations management, numbering, registry and fraud prevention.

Ericsson has said it would not be able to provide confirmation or comment on market rumours.

With 5G deployment plans being slowed in recent months thanks to the on-going COVID-19 pandemic, vendors are starting to feel the pinch. Although Asian radio equipment vendors seem to be surviving the slowdown, European rivals are seemingly under pressure.

Nokia recently said COVID-19 had a €200 million negative impact on the business, with revenues for the Networks unit down 6% year-on-year, while Ericsson reported a group revenue decline of 2%.

Ericsson CEO Börje Ekholm has put a brave face on the situation, and it did appear investors were rallying around the Swedish telecom infrastructure vendor. The divestment rumours would suggest otherwise, however.

While financial markets reacted positively to Ericsson's most recent earnings call, the share price dropped 4% over the weekend, albeit with a minor recovery today (May 4).

Under Ekholm, Ericsson has been stripping back investments in areas which would be considered outside core competencies. Mobile telecoms infrastructure is front and centre of the business, which might please some of the more traditional investors who wear the scars of attempted diversification, but there is such a thing as going too far.

Such a move is certainly in line with the slash and crash Ekholm strategy of doubling down on network infrastructure, but it remains to be seen whether such a restrictive and finite approach to business is sustainable in the long term.

SD-WAN grows up with MEF 70

Telecoms.com periodically invites expert third parties to share their views on the industry’s most pressing issues. In this piece Charuhas Ghatge, senior product and solutions marketing manager at Nuage Networks, looks at recent developments in the SD-WAN sector.

Enterprise IT continues to embrace private and public clouds and increasingly needs to connect remote users and branch offices across public networks to cloud services. SD-WAN has become a favorite tool, not only for providing Wide Area Network (WAN) connectivity between branches and cloud services, but for managing and securing it too. The beauty of SD-WAN is that it is a service overlay abstracting diverse WAN connectivity. It provides a standard and consistent view of the enterprise’s network and users, wherever and however they are connected across the world. And yet, until recently, the standards around SD-WAN itself have not been so transparent.

Fortunately, this has changed with the publishing of the MEF 3.0 SD-WAN services standard, MEF 70, in November (2019). It sets forth what counts as SD-WAN, the basic terms and component parts of an SD-WAN solution, and the key attributes of an SD-WAN service that is provided by service providers to enterprises with an SLA. This will make it easier for enterprises to understand, evaluate and choose SD-WAN services in the future. Ultimately, it will ensure interoperability between different SD-WAN services, with MEF providing certificates of conformance to SD-WAN vendors that meet the standard.

The MEF is also hoping through this initiative to address skills shortages in the industry by providing professional certification for SD-WAN professionals. It will streamline automation of SD-WAN through lifecycle service orchestration and the introduction of intent-based networking principles. Finally, it aims to better define SD-WAN security, which given the increasing number of attacks, will be essential to ensuring that security is maintained across multi-vendor SD-WAN implementations.

The need for the MEF standards arises from what is still a bit of a wild west in the SD-WAN vendor market. There are over 70 vendors, with widely different backgrounds and approaches. There are four major groups of vendors:

  • Legacy enterprise vendors who pivot from their core competency (e.g., WAN optimization, security, routing, etc.) by bolting on some SD-WAN capabilities
  • Large enterprise-focused conglomerates who upsell SD-WAN to their extensive customer base — often locking the customer into proprietary hardware
  • Smaller pure play SD-WAN vendors that have depended upon venture capital and have been acquired by a larger conglomerate already, are looking for a suitor, or are exploring different selling channels by partnering with other players in the space
  • Cloud-based vendors offering a set of prescribed SD-WAN capabilities all hosted in the cloud.

With this degree of choice and range of SD-WAN service options, it is important that the market adopts a global standard with an associated certification program. Having a standard delivered from a respected and proven body like MEF will provide an agreed upon framework that defines SD-WAN service performance and behavior now and into the future. This standard also equips CSPs and enterprises with an objective metric that can be used to cut through the noise and identify SD-WAN vendors whose technology has been certified to deliver a set of prescribed and expected results.

MEF 3.0 and the MEF 70 standard

MEF has been working with many of the world’s leading service and technology providers, open source projects, standards associations, and enterprises to realize a shared vision of dynamic services orchestrated across automated networks.

“MEF has defined the language and how a subscriber would specify and measure behavior for a service provider delivering managed SD-WAN,” MEF CTO Pascal Menezes tells Light Reading. “It allows everybody to implement in their own ways, but yet it all has to behave the same way, which means we can measure that and certify that.”

The MEF 3.0 SD-WAN certification tests the service attributes and requirements defined in the MEF 70 standard. Customers who purchase MEF 3.0 certified solutions will now have greater confidence that they have deployed an SD-WAN service that meets the highest levels of performance established by MEF – the world’s defining authority for standardized network services.

The MEF 70 standard defines the externally-visible behavior of SD-WAN services, as well as a common set of terminology and attributes for an SD-WAN service. The MEF 70 standard places a focus on how application traffic is handled while establishing definitions for the SD-WAN service and its components in the context of both the provider and subscriber of the service.

The MEF 70 standard describes SD-WAN as a service instead of detailing the underlying protocol level implementations and seeks to establish a common framework and language between providers and subscribers of the service. The MEF 70 standards “service behavior” approach allows for flexibility of implementation within the vendor community.
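This "service behaviour" approach can be illustrated with a small sketch: an SD-WAN service maps application flows to policies, and policies constrain which underlay a flow may use, without saying anything about how a vendor implements the forwarding. The field and policy names below are invented for illustration and are not the actual MEF 70 attribute schema.

```python
# Illustrative sketch of externally visible SD-WAN behaviour: policies
# describe what each application's traffic requires, and the service
# picks a compliant underlay. Names are hypothetical, not MEF 70 terms.
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    internet_ok: bool  # may this flow ride the public-internet underlay?

POLICIES = {
    "voice": Policy("voice", internet_ok=False),  # keep on private links
    "saas": Policy("saas", internet_ok=True),
    "bulk": Policy("bulk", internet_ok=True),
}

def select_underlay(app: str, underlays: list) -> str:
    """Return the first underlay (name, is_private) that satisfies
    the application's policy."""
    policy = POLICIES[app]
    for name, is_private in underlays:
        if is_private or policy.internet_ok:
            return name
    raise ValueError(f"no underlay satisfies policy {policy.name!r}")

underlays = [("broadband-internet", False), ("mpls-vpn", True)]
print(select_underlay("voice", underlays))  # forced onto the private link
print(select_underlay("bulk", underlays))   # free to use broadband
```

Two vendors could implement this selection with entirely different control planes; what MEF 70 standardises is that, given the same policy, the observable behaviour must be the same, which is what makes certification and measurement possible.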

The benefits of standardization in a fluid vendor community are that it

  • Enables a wide range of ecosystem stakeholders to use the same terminology when buying, selling, assessing, deploying, and delivering SD-WAN services
  • Makes it easier to interface policy with intelligent underlay connectivity services to provide a better end-to-end application experience with guaranteed service resiliency
  • Facilitates inclusion of SD-WAN services in standardized LSO architectures, thereby advancing efforts to orchestrate MEF 3.0 SD-WAN services across automated networks
  • Paves the way for creation and implementation of certified MEF 3.0 SD-WAN services, which will give users confidence that a service meets a fundamental set of requirements
  • Provides the foundation for the development of SD-WAN APIs to support multiple interfaces.

Final thoughts

It is essential that, in this fluid and crowded SD-WAN market, standardization is adopted to drive consistency of behavior and operation as the SD-WAN market matures and evolves. MEF 3.0 certification is a necessary step that enables the vendor and service provider communities to collaborate on standards and ratify this technology. For service providers, it should be a cornerstone supporting the development of APIs, thus enabling them to automate how their customers request and enable SD-WAN service in minutes, not days. For the end users, MEF will help them connect to essential applications in the cloud while controlling service levels and costs.

 

Charuhas Ghatge is a senior product and solutions marketing manager at Nuage Networks and is responsible for promoting SDN and SD-WAN products and solutions for service providers and enterprises. Charuhas has held a number of engineering, product management and marketing roles during his 27 years in the networking industry. He was educated at the University of Oklahoma with a master’s degree in computer science.

Microsoft buys into NFV with Affirmed Networks acquisition

US software giant Microsoft has made one of its most aggressive moves into the telecoms sector with the acquisition of virtualization specialist Affirmed Networks.

Affirmed is all about virtualized mobile network solutions and Microsoft seems to have decided it's time it got more involved in that sort of thing too. It's already a datacentre and cloud giant, of course, so as telecoms increasingly moves in that direction it makes perfect strategic sense for Microsoft to do so too.

"At Microsoft, we intend to empower the telecommunications industry as it continues its move to 5G and support both network equipment manufacturers and operators in their efforts to find solutions that are faster, easier and cost effective," blogged Yousef Khalidi, Corporate Vice President of Azure Networking at Microsoft.

“Today, I am pleased to announce that we have signed a definitive agreement to acquire Affirmed Networks. Affirmed Networks’ fully virtualized, cloud-native mobile network solutions enable operators to simplify network operations, reduce costs and rapidly create and launch new revenue-generating services.

“This acquisition will allow us to evolve our work with the telecommunications industry, building on our secure and trusted cloud platform for operators. With Affirmed Networks, we will be able to offer new and innovative solutions tailored to the unique needs of operators, including managing their network workloads in the cloud.”

We don’t know what Microsoft paid because Affirmed is private, but it will be in the hundreds of millions. If traditional telecoms vendors aren’t alarmed by this acquisition then they should be. It seems like a classic example of the IT sector taking advantage of the new opportunities presented by NFV and virtualization in general and if Microsoft starts sniffing around things like OpenRAN then outright panic would seem appropriate.

Why NFV is no longer a buzzword

Telecoms.com periodically invites expert third parties to share their views on the industry’s most pressing issues. In this piece Hannes Gredler, CTO at RtBrick, has a look at why people aren’t talking about NFV as much as they used to.

Network Functions Virtualisation (NFV) was poised to bring virtualisation into the realm of network gateways and other functions, breaking the hard linkage between the hardware and software provided in integrated monolithic systems. It was positioned as the alternative to running networks on traditional equipment, delivering scalability, elasticity and adaptability and allowing operators to select software from any vendor as they wished.

But the discussion around NFV seems to have died down, and many in the industry are wondering: where did all the hype go? Has NFV proved more difficult to implement than anyone thought? Were the benefits less than we hoped? Or has NFV just been quietly getting on with it?

The virtualisation challenge

Many operators want to move applications to the cloud, so it's no surprise that they're seeking nimble, cost-effective and agile infrastructures which can be used across multiple applications. Yet they often find themselves still bogged down by legacy architectures and traditional telecom systems, which are hard to migrate to open systems.

And it's not just the specialist functions within the telco networks which have proved hard to virtualise. What about the network itself – remember Software Defined Networks (SDN)? In theory, NFV and SDN should have complemented each other, with SDN bringing flexibility to the network and NFV bringing speed and agility for new functions. But, as we now know, it hasn't quite worked out that way.

Like NFV, SDN was supposed to bring about innovation, but the ‘classical SDN’ model lacked the scalability required by the large carriers. Several disadvantages emerged. A highly centralised control system made it vulnerable to catastrophic failure, and it was hard to contain any ‘blast radius’. It was restricted by the I/O limits of a single controller. And it was hard to migrate – ensuring centrally controlled network elements could work side-by-side with legacy routers.

Ensuring success

However, virtualisation can be deployed effectively in a carrier network! And a good example of this is the Broadband Network Gateway (BNG) that terminates residential Internet subscriber traffic in the access network.

Traditional BNGs were based on monolithic routing systems. They often left carriers in a perpetual hardware replacement cycle, as each element of the chassis-based system needed upgrading in turn. Carriers couldn't mix and match the best hardware with the best software – so equipment selection was always a compromise. Multiservice systems have to provide all the service features that might be required by any service, whether they are being used or not. This is fundamentally bad economics as well as being a test nightmare!

But operators are now able to undergo successful virtualisation by applying a web-scale approach to their carrier networks. For example, Deutsche Telekom's Access 4.0 project makes use of merchant silicon-based bare-metal switches and container-based routing software that has more in common with cloud computing than traditional telecoms systems.

With the increasing growth in Internet traffic, it’s clear that things aren’t going to be slowing down any time soon. Providers are going to have to find ways to upgrade their infrastructure and remain competitive. Although telco operators need to make sure they’re picking the best functions for virtualisation, starting now and learning as they go will be essential to ensuring that they develop an agile network and become more ‘internet-native’.

So, whatever happened to NFV? Well whether we hear the term used as much or not, disaggregation of network software from hardware is happening for real and happening at scale. Expect to see a lot more of it.

 

Hannes leads the vision and direction at RtBrick Inc., a startup which builds a novel bare-metal OS that blends routing and cloud technologies. He has built 20+ years of expertise in engineering and support roles working with Alcatel (now Nokia Networks) and Juniper Networks. Hannes believes that the networking industry is undergoing a tectonic shift: the resulting disaggregation between hardware and software will fundamentally transform the business and operational model of edge and aggregation networks. Hannes is a co-author of and contributor to multiple IETF drafts, a regular speaker at industry events and conferences, and holds 20+ patents in the space of IP/MPLS.

Nokia and Microsoft bundle their cloud offerings

A strategic collaboration between Nokia and Microsoft is banking on companies wanting to buy their cloud hardware and software together.

Here’s the official pitch: “By bringing together Microsoft cloud solutions and Nokia’s expertise in mission-critical networking, the companies are uniquely positioned to help enterprises and communications service providers transform their businesses.” This seems to be mainly about things like private networks, SD-WAN and private cloud, but specific commercial use-cases are thin on the ground at this stage.

“We are thrilled to unite Nokia’s mission-critical networks with Microsoft’s cloud solutions,” said Kathrin Buvac, President of Nokia Enterprise and Chief Strategy Officer. “Together, we will accelerate the digital transformation journey towards Industry 4.0, driving economic growth and productivity for both enterprises and service providers.”

“Bringing together Microsoft’s expertise in intelligent cloud solutions and Nokia’s strength in building business and mission-critical networks will unlock new connectivity and automation scenarios,” said Jason Zander, EVP of Microsoft Azure. “We’re excited about the opportunities this will create for our joint customers across industries.”

This initiative is more than just good PowerPoint and canned quotes, however, with BT announced as its first paying punter. Apparently it's already offering a managed service that integrates Microsoft Azure cloud and Nokia SD-WAN stuff. Specifically this means Azure vWAN and Nuage SD-WAN 2.0.

Apart from that the joint announcement mainly just bangs on about how great both companies are at this sort of thing – in other words a thinly-veiled sales pitch. The market will decide if it needs this kind of complete virtual WAN package and whether or not Nokia and Microsoft are the best companies to provide it. But there’s no denying BT is a strong first customer win.

Nvidia takes 5G to the edge with help from Ericsson and Red Hat

Graphics chip maker Nvidia has unveiled its EGX Edge Supercomputing Platform, which is designed to boost 5G, IoT and AI processing at the edge of the network.

Nvidia has long been the market leader in GPUs (graphics processing units), which has enabled it to get a strong position in supercomputing, where the parallel processing qualities of GPUs come in especially handy. This EGX initiative seems to be Nvidia's attempt to translate that position from datacentres to edge computing.

“We’ve entered a new era, where billions of always-on IoT sensors will be connected by 5G and processed by AI,” said Jensen Huang, Nvidia CEO. “Its foundation requires a new class of highly secure, networked computers operated with ease from far away. We’ve created the Nvidia EGX Edge Supercomputing Platform for this world, where computing moves beyond personal and beyond the cloud to operate at planetary scale.”

There seems to be a fair bit of support for this new platform, with a bunch of companies and even a couple of US cities saying they’re already involved. “Samsung has been an early adopter of both GPU computing and AI from the beginning,” said Charlie Bae, EVP of foundry sales and marketing at Samsung Electronics. “NVIDIA’s EGX platform helps us to extend these manufacturing and design applications smoothly onto our factory floors.”

“At Walmart, we’re using AI to define the future of retail and re-think how technology can further enhance how we operate our stores,” said Mike Hanrahan, CEO of Walmart Intelligent Retail Lab. “With NVIDIA’s EGX edge computing platform, Walmart’s Intelligent Retail Lab is able to bring real-time AI compute to our store, automate processes and free up our associates to create a better and more convenient shopping experience for our customers.”

On the mobile side Ericsson is getting involved to build virtualized 5G RANs on EGX. As you would expect the reason is all about being able to introduce new functions and services more easily and flexibly. More specifically Ericsson hopes the platform will make virtualizing the complete RAN solution cheaper and easier.

“5G is set to turbocharge the intelligent edge revolution,” said Huang. “Fusing 5G, supercomputing, and AI has enabled us to create a revolutionary communications platform supporting, someday, trillions of always-on, AI-enabled smart devices. Combining our world-leading capabilities, Nvidia and Ericsson are helping to invent this exciting future.”

On the software side a key partner for all this virtualized 5G fun will be Red Hat, which is getting its OpenShift Kubernetes container platform involved. It will combine with Nvidia’s own Aerial software developer kit to help operators to make the kind of software-defined RAN tech that can run on EGX.

“The industry is ramping 5G and the ‘smart everything’ revolution is beginning,” said Huang. “Billions of sensors and devices will be sprinkled all over the world enabling new applications and services. We’re working with Red Hat to build a cloud-native, massively scalable, high-performance GPU computing infrastructure for this new 5G world. Powered by the Nvidia EGX Edge Supercomputing Platform, a new wave of applications will emerge, just as with the smartphone revolution.”

Things seem to have gone a bit quiet on the virtualization front, with NFV, SDN, etc. having apparently entered the trough of disillusionment. Nvidia is a substantial cloud player these days, however, and judging by the level of support this new initiative has, EGX could be a key factor in moving the telecoms cloud onto the slope of enlightenment.