Why NFV is no longer a buzzword

Telecoms.com periodically invites expert third parties to share their views on the industry’s most pressing issues. In this piece Hannes Gredler, CTO at RtBrick, has a look at why people aren’t talking about NFV as much as they used to.

Network Functions Virtualisation (NFV) was poised to bring virtualisation into the realm of network gateways and other functions, breaking the hard linkage between the hardware and software provided in integrated monolithic systems. It was positioned as the alternative to running networks on traditional equipment, delivering scalability, elasticity and adaptability and allowing operators to select software from any vendor as they wished.

But the discussion around NFV seems to have died down, and many in the industry are wondering: where did all the hype go? Has NFV proved more difficult to implement than anyone thought? Were the benefits less than we hoped? Or has NFV just been quietly getting on with it?

The virtualisation challenge

Many operators want to move applications to the cloud, so it’s no surprise that they’re seeking nimble, cost-effective and agile infrastructures which can be used across multiple applications. Yet they often find themselves still bogged down by legacy architectures and traditional telecom systems, which are hard to migrate to open systems.

And it’s not just the specialist functions within the telco networks which have proved hard to virtualise. What about the network itself – remember Software Defined Networks (SDN)? In theory, NFV and SDN should have complemented each other, with SDN bringing flexibility to the network and NFV bringing speed and agility for new functions. But, as we now know, it hasn’t quite worked out that way.

Like NFV, SDN was supposed to bring about innovation, but the ‘classical SDN’ model lacked the scalability required by the large carriers. Several disadvantages emerged. A highly centralised control system made it vulnerable to catastrophic failure, and it was hard to contain any ‘blast radius’. It was restricted by the I/O limits of a single controller. And migration was hard, because centrally controlled network elements had to work side-by-side with legacy routers.

Ensuring success

However, virtualisation can be deployed effectively in a carrier network! And a good example of this is the Broadband Network Gateway (BNG) that terminates residential Internet subscriber traffic in the access network.

Traditional BNGs were based on monolithic routing systems. They often left carriers in a perpetual hardware replacement cycle, as each element of the chassis-based system needed upgrading in turn. Carriers couldn’t mix and match the best hardware with the best software – so equipment selection was always a compromise. Multiservice systems have to provide all the service features that might be required by any service, whether they are being used or not. This is fundamentally bad economics as well as being a test nightmare!

But operators are now able to undergo successful virtualisation by applying a web-scale approach to their carrier networks. For example, Deutsche Telekom’s 4.0 Access project makes use of merchant silicon-based bare-metal switches and container-based routing software that has more in common with cloud computing than with traditional telecoms systems.

With the increasing growth in Internet traffic, it’s clear that things aren’t going to be slowing down any time soon. Providers are going to have to find ways to upgrade their infrastructure and remain competitive. Although telco operators need to make sure they’re picking the best functions for virtualisation, starting now and learning as they go will be essential to ensuring that they develop an agile network and become more ‘internet-native’.

So, whatever happened to NFV? Well whether we hear the term used as much or not, disaggregation of network software from hardware is happening for real and happening at scale. Expect to see a lot more of it.


Hannes leads the vision and direction at RtBrick Inc., a startup which builds a novel bare-metal OS which blends routing and cloud technologies. He has built 20+ years of expertise in engineering and support roles working with Alcatel (now Nokia Networks) and Juniper Networks. Hannes believes that the networking industry is undergoing a tectonic shift, and that the resulting disaggregation between hardware and software will fundamentally transform the business and operational model of edge and aggregation networks. Hannes is a co-author of and contributor to multiple IETF drafts; he is also a regular speaker at industry events and conferences and holds 20+ patents in the space of IP/MPLS.

Nokia and Microsoft bundle their cloud offerings

A strategic collaboration between Nokia and Microsoft is banking on companies wanting to buy their cloud hardware and software together.

Here’s the official pitch: “By bringing together Microsoft cloud solutions and Nokia’s expertise in mission-critical networking, the companies are uniquely positioned to help enterprises and communications service providers transform their businesses.” This seems to be mainly about things like private networks, SD-WAN and private cloud, but specific commercial use-cases are thin on the ground at this stage.

“We are thrilled to unite Nokia’s mission-critical networks with Microsoft’s cloud solutions,” said Kathrin Buvac, President of Nokia Enterprise and Chief Strategy Officer. “Together, we will accelerate the digital transformation journey towards Industry 4.0, driving economic growth and productivity for both enterprises and service providers.”

“Bringing together Microsoft’s expertise in intelligent cloud solutions and Nokia’s strength in building business and mission-critical networks will unlock new connectivity and automation scenarios,” said Jason Zander, EVP of Microsoft Azure. “We’re excited about the opportunities this will create for our joint customers across industries.”

This initiative is more than just good PowerPoint and canned quotes, however, with BT announced as its first paying punter. Apparently BT is already offering a managed service that integrates Microsoft Azure cloud and Nokia SD-WAN stuff. Specifically this means Azure vWAN and Nuage SD-WAN 2.0.

Apart from that the joint announcement mainly just bangs on about how great both companies are at this sort of thing – in other words a thinly-veiled sales pitch. The market will decide if it needs this kind of complete virtual WAN package and whether or not Nokia and Microsoft are the best companies to provide it. But there’s no denying BT is a strong first customer win.

Nvidia takes 5G to the edge with help from Ericsson and Red Hat

Graphics chip maker Nvidia has unveiled its EGX Edge Supercomputing Platform, which is designed to boost 5G, IoT and AI processing at the edge of the network.

Nvidia has long been the market leader in GPUs (graphics processing units), which has enabled it to get a strong position in supercomputing, where the parallel processing qualities of GPUs come in especially handy. This EGX initiative seems to be Nvidia’s attempt to translate that position from datacentres to edge computing.

“We’ve entered a new era, where billions of always-on IoT sensors will be connected by 5G and processed by AI,” said Jensen Huang, Nvidia CEO. “Its foundation requires a new class of highly secure, networked computers operated with ease from far away. We’ve created the Nvidia EGX Edge Supercomputing Platform for this world, where computing moves beyond personal and beyond the cloud to operate at planetary scale.”

There seems to be a fair bit of support for this new platform, with a bunch of companies and even a couple of US cities saying they’re already involved. “Samsung has been an early adopter of both GPU computing and AI from the beginning,” said Charlie Bae, EVP of foundry sales and marketing at Samsung Electronics. “NVIDIA’s EGX platform helps us to extend these manufacturing and design applications smoothly onto our factory floors.”

“At Walmart, we’re using AI to define the future of retail and re-think how technology can further enhance how we operate our stores,” said Mike Hanrahan, CEO of Walmart Intelligent Retail Lab. “With NVIDIA’s EGX edge computing platform, Walmart’s Intelligent Retail Lab is able to bring real-time AI compute to our store, automate processes and free up our associates to create a better and more convenient shopping experience for our customers.”

On the mobile side Ericsson is getting involved to build virtualized 5G RANs on EGX. As you would expect the reason is all about being able to introduce new functions and services more easily and flexibly. More specifically Ericsson hopes the platform will make virtualizing the complete RAN solution cheaper and easier.

“5G is set to turbocharge the intelligent edge revolution,” said Huang. “Fusing 5G, supercomputing, and AI has enabled us to create a revolutionary communications platform supporting, someday, trillions of always-on, AI-enabled smart devices. Combining our world-leading capabilities, Nvidia and Ericsson are helping to invent this exciting future.”

On the software side a key partner for all this virtualized 5G fun will be Red Hat, which is getting its OpenShift Kubernetes container platform involved. It will combine with Nvidia’s own Aerial software developer kit to help operators to make the kind of software-defined RAN tech that can run on EGX.

“The industry is ramping 5G and the ‘smart everything’ revolution is beginning,” said Huang. “Billions of sensors and devices will be sprinkled all over the world enabling new applications and services. We’re working with Red Hat to build a cloud-native, massively scalable, high-performance GPU computing infrastructure for this new 5G world. Powered by the Nvidia EGX Edge Supercomputing Platform, a new wave of applications will emerge, just as with the smartphone revolution.”

Things seem to have gone a bit quiet on the virtualization front, with NFV, SDN, etc. having apparently entered the trough of disillusionment. Nvidia is a substantial cloud player these days, however, and judging by the level of support this new initiative has, EGX could be a key factor in moving the telecoms cloud onto the slope of enlightenment.

The private cloud is the fake cloud

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this piece Danielle Royston, CEO of Optiva, argues in favour of the public cloud, as opposed to the private cloud, for telcos.

Confusion reigns when it comes to cloud and telecoms. CTOs are looking to cloud architectures to increase their processing power, scalability and savings. Yet they are in the dark ages! For starters, there is confusion about private and public clouds and how they differ. Some CTOs think that they can get the same benefits of public cloud with private cloud. Sure, private clouds are ‘ideal’ if you have money to burn – in reality they are a lot of effort for few rewards.

When most telco CTOs refer to the cloud, they mean private cloud deployments, hosting telecom infrastructure on-premises. Most are unaware that private clouds can be complex and costly. When using private cloud, compute and database resources still need to be pre-purchased and provisioned — typically up to 20-30% above peak capacity. Even when the data is virtualized, there is little or no elasticity. Instead, it’s being held in an over-provisioned, private cloud. Basically — a fake cloud.

Moving to the public cloud, on the other hand, can unlock huge cost savings by using external compute resources and it can quickly deliver benefits only possible with a true cloud deployment. With the public cloud, operators can provision for average capacity and scale up or down as needed. That can in turn make a significant difference to the bottom line.

Learning from the enterprise

Many enterprises have deployed Google Cloud, AWS and Azure. For example, Salesforce uses Google Cloud and AWS to host their application services, and Twitter recently moved to AWS. But the telecoms industry hardly hears about the major cloud moves. Why? Most telcos are stuck in the 90s of application management — virtualizing servers, rather than managing IT stacks in the public cloud.

These are some of the key benefits in enterprise that operators can replicate:

  1. Considerable cost savings: When you consider moving an application to the cloud, the business considers the entire stack — from the ground the servers sit on to the people who manage the application. The TCO includes the cost of installation, database licenses, hardware, hardware renewal, power, management time and so on. The public cloud makes all those expenses go away, which is one way it allows CSPs to reduce TCO by up to 80%.
  2. Elasticity: As operators gear up for 5G and the Internet of Things (IoT), processing power and data storage requirements will increase exponentially, although no one knows by how much. Getting ready for the peak data demands of these largely unknown variables will be an expensive guess. Leverage the auto-scaling capabilities of the public cloud for compute and database resources – that way you will pay only for what you need, when you need it.
  3. Innovation: The reality is that hyperscale web companies out-gun CSPs in terms of R&D and spending on data centers. Google spent more than US$40 billion building data centers to support its public cloud. It spends more than US$3 billion annually on cloud security and announced an additional US$13 billion spend for 2019 alone.

How not to go ‘cloud-native’

The public cloud’s architecture cannot be replicated with an on-premises cloud. Taking existing software architecture and dropping it into Google Cloud or AWS (known as a “lift and shift”) will not deliver the benefits operators hope to gain from the cloud. While “lift and shift” is a quick and dirty way to move to the cloud, it’s not cloud-native, and it’s not the same as an application architected to run “cloud-natively.”

Cloud-native means that an application is built from the ground up to work in the cloud. It is the responsibility of CTOs to dig into vendor terminology that might be misleading, such as “cloud-ready.” Doing so helps to ensure that what you are considering is genuinely fit for the carrier’s purpose and allows the operator to realize the real, full benefits of the public cloud.

If an application claims to be “cloud-ready” or “cloud-native” but requires an old-school Oracle relational database, you’ll have serious problems — and vast license fees too. Instead, a genuine cloud-native database will be faster, more powerful and cost-effective. Another good sign that an application is architected and fully suited for the cloud is when it is containerized and managed by an orchestration platform such as Kubernetes.

A turning point

It’s surprising that more CTOs have not connected the dots when it comes to the benefits of public cloud. Given the cost pressures facing the industry, the savings alone of moving to the public cloud should be a no brainer. 2019 has been a turning point for telco engagement with the public cloud — one of the biggest technological innovations of the past decade. Don’t take the easy route and go with a fake cloud.


Danielle Royston is CEO of Optiva Inc. (TSX: OPT) and has close to 20 years of executive experience in the technology industry with an emphasis on turning around enterprise software companies. Before joining Optiva, she served as Portfolio CEO for the ESW Capital family of companies, leading over 15 turnarounds during her tenure. Royston holds a B.S. in computer science from Stanford University.

Q&A: Mukaddim Pathan, Principal End-to-End Architecture & Technology Practices at Telstra

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this article Network Virtualization Asia spoke to Mukaddim Pathan, Principal End-to-End Architecture & Technology Practices at Telstra, about the role of Automation and Continuous Integration/Continuous Deployment (CI/CD) in virtualization roll-outs.

Network Virtualization Asia (NVA): Can you start by telling us a bit about the network virtualization space in Australia at the moment? 

Mukaddim Pathan (MP): Like the rest of the global industry, telecommunications companies in Australia are focusing on developing their capabilities in the area of virtualization. In particular, the focus has been on transforming the network as the SDN/NFV technology paradigm matures to provide the same level of telco-grade performance, resiliency, disaster recovery and failover capabilities.

In order to reduce the unit cost of delivery, whilst maintaining a lean and modern operational model, service providers are consistently relying on virtualization technologies. Furthermore, with an increasing use of virtualization across the technology stack, we often see the rise of multi-cloud, multi-tenancy environments.

NVA: With that in mind, what are Telstra currently doing in the space to advance it? 

MP: Telstra has a dedicated Network Evolution 2020 (NE2020) program, which has a strong focus on Agile ways of working, and is underpinned by virtualization, automation, orchestration, data and standards-based API.

As mentioned, “Network Function Virtualization infrastructure” (NFVi) is an essential part of the NE2020 program, as it helps us to deliver a low cost, open standards network cloud ecosystem, at scale, and to support the transformation of Telstra’s network.

Furthermore, NFV and SDN enable the transition of hardware-appliance-centric functions to software-centric tenants on a centrally engineered platform/ecosystem. Cloud-enabling our infrastructure by adopting SDN/NFV throughout the network involves a number of key initiatives that are currently being undertaken:

  • Virtualizing existing and new network functions, with the introduction of resource and service orchestration.
  • Automating VNF onboarding and the lifecycle management of virtual network services.
  • Making services available through standards-based APIs, provisioned through Infrastructure as Code and containerized solutions.
  • Automating delivery and operational aspects of various network services using a home-grown end-to-end Continuous Integration and Continuous Deployment (CI/CD) platform for build/configuration management, test management, release management and inventory management.

NVA: How are VNFs being introduced into the network today? Why do you think they are so important for progress in the industry?

MP: Telstra has been working with partners to progressively introduce VNFs in our network. Whilst the industry is still maturing in establishing best practices in VNF packaging and automated VNF onboarding solutions, it is important for service providers to set guidelines to the partners on the way VNFs are prepared and provided to them.

As an industry leader, Telstra has developed an automated onboarding process for VNFs and established workflows using an integrated CI/CD platform, which has led to efficient onboarding of VNFs, potentially reducing the lead time from months to days.

Whilst VNF onboarding focuses on providing a common user experience and standards-based framework to on-board and life cycle manage workloads in a multi-cloud / multi-VIM NFVi environment, it leverages end-to-end CI/CD and automation capabilities for workload evaluation, packaging, certification and deployment of various network functions.

NVA: And, how do things like Automation and CICD practices come into play with this? 

MP: Automation and CI/CD practices are key in managing and evolving virtualized infrastructure within the service provider’s environment. As the network becomes software-centric, it is important that these practices are leveraged to ensure seamless maintenance and management (device, configuration, etc.) of network elements.

At Telstra we have deployed a modular, scalable, extensible and centralized CI/CD and automation platform that provides fully integrated and ready to use capabilities and tools as a shared service across Networks by consolidating and standardizing multiple functionalities (including but not limited to pipeline management, configuration management, repositories, test management, artefact management etc) and toolsets.

A flexible and re-usable test automation enablement layer is added for various network/IT services, converging the test automation process into a single channel. It allows seamless integration of test tools with the agile management layer and is exposed to various consumers via RESTful APIs.

NVA: With all these advancements, how do you think operators fit into the virtualization space today? What is their role?

MP: The role of service providers in the virtualization space is key, as NFV and SDN enable the “softwarisation” of the network and assist the transition of network functions onto a centrally engineered platform/ecosystem. NFV/SDN and the evolution to a software-defined network should be a foundational tenet of a service provider’s growth strategy, and they can influence this space by:

  • Adopting a template-driven and modular architecture for highly configurable and extensible solutions (enabling multi-cloud support)
  • Promoting a containerised, microservices-driven approach for distribution and scalability
  • Supporting automated deployment of the virtual network overlay, including Vlinks, subnets etc., using simplified tools
  • Enabling automated deployment of network functions and of virtualized infrastructure as code

NVA: Can you tell us about your upcoming talk at Network Virtualization Asia? What attracted you to the event, and can you reveal a little about what you will be discussing?

MP: In my talk, I’ll discuss how operators are relying on Automation and Continuous Integration/Continuous Deployment (CI/CD) practices as Virtual Network Functions (VNFs) are introduced within the network, while operators still need to manage the existing physical infrastructure. It will be linked to Telstra’s NE2020 architecture, whereby we have deployed an abstraction layer to enable underlying technology domains to conduct orchestration, exposing relevant network services for consumption. One specific example I will cover is how VNFs are onboarded onto the network, configured and lifecycle-managed using software engineering practices, and orchestrated via the network abstraction layer mentioned above.

Virtualization and Automation are key technology themes that are being leveraged in the way we develop solutions to meet Telstra’s long-term objectives of growth and cost reduction. As we focus on our efficiency and productivity targets, industry knowledge and relevant practical applications will be key. It is important that we keep ourselves up-to-date on the industry trends by actively participating in major global events such as Network Virtualization & SDN Asia.

For me, it is the premier event in the APAC region, which focuses on SDN, NFV and automation topics that are highly relevant to us. Crucially, by participating in this event, I’m able to connect with global Telco peers and share knowledge on the emerging topics such as SDN/NFV, Edge Computing, Automation and 5G, as well as learn from peers and apply their findings in the service provider context.


Dr. Mukaddim Pathan heads up the End-to-End Architecture & Technology Practices group for Networks & IT within Telstra, the largest Telecommunications company in Australia. At Telstra, his key responsibility is to deliver architecture and technical solutions towards Telstra 2022 (T22) outcomes, specifically focusing on network abstraction, edge computing, automation, and 5G use cases.

Want to hear more from Mukaddim and a fantastic line-up of other expert speakers at Network Virtualization & SDN Asia 2019?  Get your free visitor pass here.

Three UK claims 5G-ready cloud core first ahead of August launch

Even though it won’t be flicking the 5G switch until next month, Three UK has decided to bang on about its new virtualized core once more.

We first heard about this whizzy new core, which has been built in partnership with Nokia, back in February. At the time we assumed that would be the last we’d hear about it until the formal launch of Three UK’s 5G network, but Three seems to think we need just one more teaser first.

So, once more for those at the back, this is all about actually using this virtualization tech we’ve been hearing about for so long to make a secure, scalable, flexible core that is capable of fully delivering the 5G dream. It will be housed in 20 dedicated data centres scattered around the country to deliver edge computing benefits such as lower latency. This is also a good case study for Nokia to show how good it is at this sort of thing.

“Our new core network is part of a series of connected investments, totalling £2 billion, that will provide a significant step change in our customers’ experience,” said Dave Dyson CEO of Three UK. “UK consumers have an insatiable appetite for data as well as an expectation of high reliability.  We are well positioned to deliver both as we prepare for the launch of the UK’s fastest 5G network.”

“This is an exciting time for both Nokia and Three UK, as together we work towards the future of telecommunications networks,” said Bhaskar Gorti, President of Nokia Software. “This project delivers a joint vision that has been forged from the catalyst of Three’s strategy for complete business transformation. The project will deliver a flexible 5G core network, enabling the next generation of mobile services and cementing Three UK as a true leader of 5G in the UK.”

Three was careful to give shout-outs to some of its other partners in this project, which include Affirmed Networks for traffic management, Mavenir for messaging and Exfo, Mycom and BMC for OSS. Not only will this core network be central to Three UK’s strategy for the next decade, it will also provide a good live test of the kinds of technology everyone will be reliant upon before long. No pressure then, see you in August.

Q&A with Rupesh Chokshi – Assistant Vice President, Edge Solutions Product Marketing Management, AT&T Business

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. Rupesh Chokshi is a leader in technology with a strategic focus for growth in global technology and telecommunications. He currently leads the product marketing team within Edge Solutions for AT&T Business which focuses on product management, strategy and business development, and is transforming services and networks using software-defined networking (SDN), network function virtualization (NFV) and SD-WAN technologies.

To help determine the state of virtualization today, Light Reading spoke with Rupesh Chokshi – Assistant Vice President, Edge Solutions Product Marketing Management at AT&T Business – and one of the industry-leading experts presenting at this year’s Network Virtualization & SDN Americas event in September.

Light Reading (LR): How has network virtualization evolved in the last three years?

Rupesh Chokshi (RC): AT&T has been in the business of delivering software-centric services for several years, and we’ve seen adoption from businesses looking to update their infrastructures, increase their agility and transform their businesses. Networks are almost unrecognizable from what they used to be – data traffic on our mobile network has grown more than 470,000% since 2007, and video traffic increased over 75% in the last year. Given the new network demands, companies need to adapt by changing the way they manage networks.

We took a unique approach with our own infrastructure by using software-defined networking (SDN) and network function virtualization (NFV) in our network, meeting our goal of a 65% virtualized network by 2018 and setting us up to achieve our goal of 75% virtualization by 2020. At the same time we started using SDN and NFV in our own network, we utilized SDN to deliver AT&T’s first SDN service, AT&T Switched Ethernet Service with Network on Demand (ASEoD). This allowed thousands of customers to provision and manage their network in a fraction of the time it took in the past, and now enables them to scale bandwidth on demand to meet their business’ seasonality.

ASEoD was only the first of a series of solutions we are creating to address shifting network needs. Three years ago, we introduced our first global software-based solution, AT&T FlexWareSM, which uses both SDN and NFV to increase business agility by simplifying the process for dynamically adding and managing network functions.

LR: What technology developments are you most excited about for in the future?

RC: The work we did up to this point to deliver SDN within our network and for our customers set us up for the next generation of wireless technology, 5G. As the first SDN-enabled wireless technology, and the first wireless network born in the cloud, 5G will ultimately enable new use cases that take advantage of network slicing, the ability to support a high number of IoT devices and greater low-latency edge compute capabilities.

In addition, we are collaborating with VMware SD-WAN by VeloCloud to implement 5G capabilities into our software-defined wide area networking (SD-WAN). This will give businesses new levels of control over their networks and is key for companies looking to use SD-WAN with a high-speed, low-latency 5G network as their primary or secondary WAN connection type.

LR: How can businesses move forward with virtualization today?

RC: Today, businesses need to make sense of data faster and more efficiently than ever before, which is driving businesses to evaluate how they use their network for all applications, and to find ways to maximize their resources. One way companies can do this and move forward with virtualization is through AT&T’s comprehensive SD-WAN portfolio. AT&T’s SD-WAN technology supports this new way of working by letting companies define policies based on individual business needs using centralized software-based control.
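The idea of centrally defined, application-level policies can be sketched simply: a controller pushes a policy table mapping application classes to link requirements, and each site’s edge device picks the best underlay that satisfies them. The policy fields, link names and latency figures below are illustrative assumptions, not AT&T’s actual schema:

```python
# Hedged sketch of application-aware SD-WAN path selection under a
# centralized policy. All names and numbers here are hypothetical.

POLICY = {
    "voice":  {"max_latency_ms": 50,  "preferred": "mpls"},
    "video":  {"max_latency_ms": 100, "preferred": "broadband"},
    "backup": {"max_latency_ms": 500, "preferred": "broadband"},
}

# Per-site view of the available underlay links and their health.
LINKS = {
    "mpls":      {"latency_ms": 20, "up": True},
    "broadband": {"latency_ms": 45, "up": True},
    "lte":       {"latency_ms": 120, "up": True},
}

def select_path(app: str) -> str:
    """Prefer the policy's chosen link; otherwise fail over to any
    healthy link that still meets the application's latency bound."""
    rule = POLICY[app]
    preferred = LINKS[rule["preferred"]]
    if preferred["up"] and preferred["latency_ms"] <= rule["max_latency_ms"]:
        return rule["preferred"]
    for name, link in LINKS.items():
        if link["up"] and link["latency_ms"] <= rule["max_latency_ms"]:
            return name
    raise RuntimeError(f"no link meets policy for {app}")

print(select_path("voice"))   # mpls meets the 50 ms bound
LINKS["mpls"]["up"] = False   # simulate an MPLS outage
print(select_path("voice"))   # fails over to broadband (45 ms <= 50 ms)
```

Because the `POLICY` table lives in the controller rather than on each box, changing a business rule (say, tightening the voice latency bound) is a single central update rather than a per-site reconfiguration — which is the operational point being made above.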

LR: How can businesses determine the business benefits and ROI of virtualization today?

RC: Businesses can determine the business benefits of virtualization through cost savings, application-level visibility and near real-time analytics.

Potential cost savings is one of the key benefits of SD-WAN that is touted by technology suppliers and service providers alike. In our experience, it is during the process of fleshing out the technical details of the solution and how to best integrate it into their network that enterprises begin to fully appreciate where those cost benefits may come from, as well as understanding other benefits or features that may also be important to them. Keep in mind the importance of considering potential cost savings in the context of total cost of ownership, not just looking at the relative cost of the CPE vs. the cost of the network access.

Additionally, SD-WAN technology can provide more application-level visibility and control on a per site basis, and these capabilities go far to help customers assess and experience the benefits of the performance of their network access and transport.

SD-WAN also enables customers to access analytics in near real time or on a historical basis for bandwidth consumption and application visibility. This is instrumental in setting KPIs and measuring ROI and planning for future network growth.

LR: What virtualization strategies should businesses be focusing on?

RC: Businesses need to adopt efficient, high-performing networks to take advantage of the newest technology and bandwidth needs. Automation is a great example of this. As businesses require more bandwidth, we need to provide more elegant solutions in order for them to take full advantage of more ubiquitous, high-speed broadband.

Additionally, while digital transformation is top of mind for businesses of all sizes and in every industry, dynamic SD-WAN is still in a relatively early stage of growth and adoption. And for others, MPLS and IPsec remain important options. Hybrid WAN designs will continue to be popular as customers utilize multiple technologies (MPLS, IPsec, SD-WAN) for optimal results.

LR: How can businesses build these technologies into their long-term business models?

RC: We live in a digital economy, and AT&T provides fundamental platforms for businesses to grow, differentiate and innovate. We work with businesses of all sizes to help transform their long-term business models through technology solutions delivered in the form of a managed service. Customers come to us because of our expertise, breadth and depth of capabilities, global scale and innovation in areas such as software defined networking, network function virtualization, mobility, IoT and SD-WAN.

As businesses grow, they need to think about their overall networking health, and how they can use their networks to meet all their business objectives. Key considerations in bringing that to life include:

  • Holistic solutions that can combine SD-WAN functionality with network services from AT&T or other providers, virtualized CPE, wired and wireless LAN, security, voice over IP and much more;
  • Reduced operational expense and less need for in-house expertise with a managed solution that handles all aspects of the end-to-end solution design and setup;
  • Global deployment options that remove the headaches of onsite installation and support in countries around the world; and
  • Flexible SD-WAN policy management where the customer can choose to set and update application level policies themselves or rely on AT&T experts to manage this for them.

Want to deep dive into real-world issues and virtualization deployment challenges with Rupesh and other industry leaders?

Join Light Reading at the annual Network Virtualization & SDN Americas event in Dallas, September 17-19. Register now for this exclusive opportunity to learn from and network with industry experts. Communications service providers get in free!

Open source and collaboration stacks up in telco virtualization on the cloud

OpenStack Ecosystem Technical Lead Ildikó Vancsa drives NFV related feature development activities in projects like OpenStack’s Nova and Cinder, as well as onboarding and training. The Network Virtualization event team caught up with her ahead of Network Virtualization & SDN Europe in May.

“I’m an open source advocate both personally and professionally and when the movement towards virtualisation and the cloud started, telcos started looking into OpenStack and we began to explore how they could use us,” she explained.

“It has been a very interesting journey, even if it hasn’t always been easy. Virtualisation really is a transformation, both from a mindset perspective and a technology perspective,” she said.

The concept sounds simple enough – lift functions from a physical hardware stack and put them into virtual machines in the cloud. The reality is quite different. In an ideal world, Vancsa suggested, you would just rewrite everything from scratch, but this approach is not possible.

“It’s a short sentence to summarize it but it’s a really hard thing to do; especially because those functions are often tightly coupled with specialised hardware,” she said.

This hardware traditionally represented the whole functions stack, all the way up to the software. As Vancsa put it: “We needed to support this journey while at the same time looking for where network functions can go. We need to be able to support the legacy network functions as well as providing an environment for new applications, written with this new mindset”.

“We do not have to reinvent the wheel and we [OpenStack] didn’t try that. We worked with companies and vendors in the NFV and networking space to be able to plug the components that they prefer to use into OpenStack and provide the functionality that they need,” she said.

OpenStack has now moved away from being one huge code stack to become more modular, offering standalone components such as load balancing as a service.

One crucial aspect of getting OpenStack right is collaboration across the telco industry as open source becomes more and more widespread. In Vancsa’s words: “Back in the old days you had one vendor and they supplied your hardware and software and it was all tightly integrated and, no questions asked, it was supposed to work.”

“As open source has become more popular in telecom environments, we see more operators picking up commodity hardware and have a couple of vendors supplying software. So they might have one vendor supplying the MANO components and another for the orchestration layer,” she explained. “This means it is critical to keep an eye on interfaces and interoperability.”

For Vancsa, collaboration, open infrastructure and open source software are vital for virtualization to succeed, especially as telcos move more into the cloud, and events such as Network Virtualization & SDN Europe play a key part in making that happen.

“You get the chance to talk to people and basically make the first connection, after which it is just so much easier to collaborate on other forums,” she enthused.


Ildikó Vancsa was also a panellist on a recent webinar from Network Virtualization & SDN Europe on Virtualization and the Cloud. You can listen to the webinar on-demand here. Network Virtualization & SDN Europe 2019 takes place 21-23 May at the Palace Hotel in Berlin. Find out more here.

Red Hat gives thanks for Turkcell virtualization win

Turkish operator Turkcell has launched a virtualization platform called Unified Telco Cloud that’s based on Red Hat’s OpenStack Platform.

As the name implies, this new platform is all about centralising all its services onto a single virtualized infrastructure. This NFVi then allows easy selection and implementation of virtual network functions, or so the story goes. Examples of operators going all-in on this stuff are still sufficiently rare for this to be noteworthy.

As a consequence this deal win is also a big deal for Red Hat, which has invested heavily in attacking the telco virtualization market from an open source direction, as is its wont. Red Hat OpenStack Platform is its carrier-grade distribution of the open source hybrid cloud platform. Turkcell is also using Red Hat Ceph Storage, a software-defined storage technology designed for this sort of thing.

“Our goal is to remake Turkcell as a digital services provider, and our business ambitions are global,” said Gediz Sezgin, CTO of Turkcell. “While planning for upcoming 5G and edge computing evolution in our network, we need to increase vendor independence and horizontal scalability to help maximise the efficiency and effectiveness of our CAPEX investment.

“With the Unified Telco Cloud, we want to lower the barrier to entry of our own network to make it a breeding ground for innovation and competition. In parallel, we want to unify infrastructure and decrease operational costs. Red Hat seemed a natural choice of partner given its leadership in the OpenStack community, its interoperability and collaboration with the vendor ecosystem and its business transformation work with other operators.”

Another key partner for Turkcell in this was Affirmed Networks, which specialises in virtualized mobile networks. “We initially selected Affirmed Networks based on their innovation in the area of network transformation and virtualization and their work with some of the world’s largest operators,” said Sezgin.

It’s good to see some of the endlessly hyped promise of NFV actually being put into effect and it will be interesting to see what kind of ROI Turkcell claims to have got from its Unified Telco Cloud. With organisations such as the O-RAN Alliance apparently gathering momentum, open source could be a major theme of this year’s MWC too.

Why open source is the backbone enabling 5G for telcos

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this piece Alla Goldner looks at ONAP and its contribution to virtualization and preparing the way for 5G for telcos.

5G is a technology revolution – paving the way for new revenue streams, partnerships and innovative business models. More than a single technology, 5G is about the integration of an entire ecosystem of technologies. Indeed, a recent Amdocs survey found that nearly 80% of European communications service providers (CSPs) expect the introduction of 5G to expand revenue opportunities with enterprise customers. It also found that 34% of operators plan to offer 5G services commercially to this sector by the end of 2019, a figure that will more than double to 84% by the end of 2020.

As with every revolution, extracting its full potential value will require a set of enablers or tools to connect the new technology with the telco network. For CSPs in particular, the need for new and enhanced network management systems is an established fact, with more than half of European operators saying they would need to enhance their service orchestration capabilities. But they want to do this in a flexible, agile and open manner, and not be burdened with the constraints and limitations of traditional tooling approaches.

ONAP: the de-facto automation platform

This is where ONAP (Open Network Automation Platform) enters the picture. Developed by a community of open source network evangelists from across the industry, it has become the de-facto automation platform for carrier grade service provider networks. Since its inception in February 2017, the community has expanded beyond the pure technical realm to include collaboration with other open source projects such as OPNFV, CNCF, and PNDA, as well as standards communities such as ETSI, MEF, 3GPP and TM Forum. We also anticipate collaboration with the Acumos Project to feed ONAP analytics with AI/ML data and parameters. Such collaboration is essential when it comes to delivering revolutionary use cases, such as 5G and edge automation, as its implementation requires alignment with evolving industry standards.

ONAP and 5G

CSPs consider 5G to be not just a radio and core network overhaul but a significant architecture and network transformation. And ONAP has a key role to play in this change. As an orchestration platform, ONAP enables the instantiation, lifecycle management and assurance of 5G network services. As part of the roadmap, ONAP will eventually have the ability to implement resource management and orchestration of 5G physical network functions (PNFs) and virtual network functions (VNFs). It will also be able to provide the definition and implementation of closed-loop automation for live deployments.
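The closed-loop idea described above can be sketched as a simple control step: observe a metric, compare it against policy thresholds, and emit a corrective action. Real ONAP closed loops involve DCAE analytics, the Policy framework and controllers; this toy function only illustrates the control pattern, and all names and thresholds are hypothetical.

```python
# A minimal sketch of closed-loop automation: monitor a metric, compare it
# against a policy, and return a corrective action. Not ONAP's actual API.

def closed_loop_step(metric_value, policy):
    """Decide an action based on where the metric sits against policy thresholds."""
    if metric_value > policy["scale_out_above"]:
        return "scale-out"   # load too high: add VNF instances
    if metric_value < policy["scale_in_below"]:
        return "scale-in"    # load too low: release VNF instances
    return "no-op"           # within the normal operating range

# Hypothetical CPU-utilisation policy for one network service.
policy = {"scale_out_above": 80.0, "scale_in_below": 20.0}
```

In a live deployment this step would run continuously against telemetry, closing the loop without human intervention.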

The 5G blueprint is a multi-release effort, with Casablanca, ONAP’s latest release, introducing some key capabilities around PNF integration and network optimization. Given that the operators involved with ONAP represent more than 60% of mobile subscribers and the fact that they are directly able to influence the roadmap, this paves the way for ONAP, over time, to become a compelling management and orchestration platform for 5G use cases, including hybrid VNF/PNF support.

Another capability in high demand is support for 5G network slicing, which is aggregated from access network (RAN), transport and 5G core network slice subnet services. These, in turn, are composed of a combination of other services, virtual network functions (VNFs) and physical network functions (PNFs). To support this, ONAP is working on the ability to model complex network services as part of the upcoming Dublin release.
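The composition described above can be pictured as a small data model: an end-to-end slice aggregates RAN, transport and core slice subnets, each built from VNFs and PNFs. This is an illustrative structure, not ONAP’s actual information model, and the function names are hypothetical.

```python
# Illustrative data model of 5G network slice composition.
# Not ONAP's real information model; names are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class SliceSubnet:
    name: str                                  # e.g. "ran", "transport", "core"
    vnfs: list = field(default_factory=list)   # virtual network functions
    pnfs: list = field(default_factory=list)   # physical network functions

@dataclass
class NetworkSlice:
    name: str
    subnets: list = field(default_factory=list)

    def all_functions(self):
        """Flatten every VNF and PNF the end-to-end slice depends on."""
        return [f for s in self.subnets for f in s.vnfs + s.pnfs]

# A hypothetical enhanced-mobile-broadband slice spanning all three domains.
embb = NetworkSlice("embb-slice", [
    SliceSubnet("ran", vnfs=["vCU"], pnfs=["gNB-radio"]),
    SliceSubnet("transport", pnfs=["optical-switch"]),
    SliceSubnet("core", vnfs=["vUPF", "vSMF"]),
])
```

Orchestrating a slice then means managing the lifecycle of every function in `all_functions()` across all three subnet domains, which is why modelling complex, nested services matters.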

To summarize the above, 5G and ONAP are together two critical pieces of the same puzzle:

  • ONAP is the de-facto standard for end-to-end network management systems, a crucial enabler of 5G
  • ONAP enables support of existing and future networking use cases, and provides a comprehensive solution to enable network slicing as a key embedded capability of 5G
  • By leveraging a distributed and virtualized architecture, ONAP is active in the development of network management enhancements and distributed analytics capabilities, which are required for edge automation – a 5G technology enabler

The importance of vendor involvement: Amdocs case study

Amdocs has been involved in ONAP since its genesis as ECOMP (Enhanced Control, Orchestration, Management and Policy), the orchestration and network management platform developed at AT&T. Today, Amdocs is one of the top vendors participating in ONAP developments, and has supported proven deployments with leading service providers.

Amdocs supports both platform enhancements and use case development activities including:

  • SDC (Service Design and Creation)
  • A&AI (Active and Available Inventory)
  • Logging and OOM (ONAP Operations Manager) projects
  • Modeling and orchestration of complex 5G services, such as network slicing

Amdocs’ and other vendors’ participation in ONAP enables the ecosystem to benefit from a best-in-class NFV orchestration platform, supporting the full lifecycle of 5G services in an open, multi-vendor environment – from service ideation and modeling, through instantiation, commissioning, modification, automatic closed-loop operations and analytics, to decommissioning.

The result is a win-win for CSPs, Amdocs, other vendors, as well as the ONAP community as a whole.

For CSPs, the benefit of this collaboration is comprehensive monetization capabilities that enable them to capture every 5G revenue opportunity. The benefit for vendors such as Amdocs is deepening their knowledge of best practices, which then flows back to the ONAP community.


About the author: Since ONAP’s inception, Alla Goldner has been a member of the ONAP Technical Steering Committee (TSC) and Use Case subcommittee chair. She also leads all ONAP activities at Amdocs.

Alla Goldner is on the advisory board of Network Virtualization & SDN Europe. Find out what’s on the agenda and why you should be in Berlin this May