The private cloud is the fake cloud

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this piece Danielle Royston, CEO of Optiva, argues in favour of the public cloud, as opposed to the private cloud, for telcos.

Confusion reigns when it comes to cloud and telecoms. CTOs are looking to cloud architectures to increase their processing power, scalability and savings. Yet they are in the dark ages! For starters, there is confusion about private and public clouds and how they differ. Some CTOs think they can get the same benefits from a private cloud as from the public cloud. Sure, private clouds are fine if you have money to burn – but they are a lot of effort for little reward.

When telco CTOs refer to the cloud, they most often mean private cloud deployments: telecom infrastructure hosted on-premises. Most are unaware of how complex and costly private clouds can be. With a private cloud, compute and database resources still need to be pre-purchased and provisioned — typically up to 20-30% above peak capacity. Even when the infrastructure is virtualized, there is little or no elasticity; the data simply sits in an over-provisioned, private cloud. Basically — a fake cloud.

Moving to the public cloud, on the other hand, can unlock huge cost savings by using external compute resources, and it can quickly deliver benefits that are only possible with a true cloud deployment. With the public cloud, operators can provision for average capacity and scale up or down as needed. That can in turn make a significant difference to the bottom line, as the back-of-the-envelope sketch below illustrates.
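To make that difference concrete, here is a back-of-the-envelope sketch in Python. The hourly demand curve and the per-vCPU price are illustrative assumptions, not operator data; only the 20-30% headroom figure echoes the rule of thumb quoted above.

```python
# Back-of-the-envelope comparison: pre-provisioned private cloud vs.
# pay-per-use public cloud. All numbers below are illustrative assumptions.

hourly_demand = [40, 35, 30, 30, 45, 70, 120, 180, 220, 240, 250, 245,
                 235, 230, 225, 230, 240, 235, 200, 150, 110, 80, 60, 50]  # vCPUs needed per hour

peak = max(hourly_demand)
headroom = 1.25                     # provision 25% above peak (the 20-30% rule of thumb)
price_per_vcpu_hour = 0.05          # assumed blended $/vCPU-hour

# Private cloud: capacity is fixed at peak plus headroom and paid for every hour.
private_cost = peak * headroom * price_per_vcpu_hour * len(hourly_demand)

# Public cloud: an elastic platform tracks actual demand hour by hour.
public_cost = sum(d * price_per_vcpu_hour for d in hourly_demand)

print(f"pre-provisioned: ${private_cost:.2f}/day")
print(f"elastic:         ${public_cost:.2f}/day")
print(f"saving:          {100 * (1 - public_cost / private_cost):.0f}%")
```

Under these made-up numbers the elastic model costs roughly half as much per day; the real saving depends entirely on how spiky the actual traffic profile is and on the pricing negotiated with the cloud provider.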

Learning from the enterprise

Many enterprises have adopted Google Cloud, AWS and Azure. For example, Salesforce uses Google Cloud and AWS to host its application services, and Twitter recently moved to AWS. Yet you hardly hear about major cloud moves in the telecoms industry. Why? Most telcos are stuck in the application management of the 90s — virtualizing servers rather than managing their IT stacks in the public cloud.

These are some of the key enterprise benefits that operators can replicate:

  1. Considerable cost savings: When a business evaluates moving an application to the cloud, it considers the entire stack — from the ground the servers sit on to the people who manage the application. The TCO includes the cost of installation, database licenses, hardware, hardware renewal, power, management time and so on. The public cloud makes all those expenses go away, which is one way it allows CSPs to reduce TCO by up to 80%.
  2. Elasticity: As operators gear up for 5G and the Internet of Things (IoT), processing power and data storage requirements will increase exponentially, although no one knows by how much. Provisioning for the peak demands of these largely unknown variables will be an expensive guess. Instead, leverage the auto-scaling capabilities of the public cloud for compute and database resources – that way you pay only for what you need, when you need it (a minimal configuration sketch follows this list).
  3. Innovation: The reality is that hyperscale web companies out-gun CSPs in terms of R&D and spending on data centers. Google spent more than US$40 billion building data centers to support its public cloud. It spends more than US$3 billion annually on cloud security and announced an additional US$13 billion spend for 2019 alone.
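On the elasticity point, the mechanics of paying only for what you use usually come down to an autoscaling policy attached to each workload. The sketch below is illustrative only: it assumes a containerized charging workload running on a managed Kubernetes service in a public cloud and uses the official Kubernetes Python client; the workload name, namespace and thresholds are hypothetical.

```python
# Illustrative only: declare a HorizontalPodAutoscaler so a containerized
# workload scales with demand instead of being sized for peak load.
# Names, namespace and thresholds are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="charging-engine"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="charging-engine"),
        min_replicas=3,                        # baseline for average load
        max_replicas=30,                       # ceiling for peak events
        target_cpu_utilization_percentage=60,  # scale out above 60% CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="bss", body=hpa)
```

Managed cloud databases are scaled through their own service settings rather than through Kubernetes, but the principle is the same: capacity follows demand instead of being pinned at peak.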

How not to go ‘cloud-native’

The public cloud’s architecture cannot be replicated with an on-premises cloud. Taking existing software architecture and dropping it into Google Cloud or AWS (known as a “lift and shift”) will not deliver the benefits operators hope to gain from the cloud. While “lift and shift” is a quick and dirty way to move to the cloud, it’s not cloud-native, and it’s not the same as an application architected to run cloud-natively.

Cloud-native means that an application is built from the ground up to work in the cloud. It is the responsibility of CTOs to dig into vendor terminology that might be misleading, such as “cloud-ready.” Doing so helps ensure that what is being considered is genuinely fit for the carrier’s purpose and allows the operator to realize the full benefits of the public cloud.

If an application claims to be “cloud-ready” or “cloud-native” but requires an old-school Oracle relational database, you’ll have serious problems — and vast license fees too. A genuine cloud-native database, by contrast, will be faster, more powerful and more cost-effective. Another good sign that an application is architected for and fully suited to the cloud is that it is containerized and orchestrated with a platform such as Kubernetes.
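For a sense of what “built from the ground up to work in the cloud” looks like at the application level, the minimal sketch below shows a stateless, container-friendly service: all configuration arrives through environment variables and no state is kept on the box, so an orchestrator such as Kubernetes can start or stop identical replicas at will. The service and variable names are invented for illustration.

```python
# Minimal sketch of a stateless, container-friendly service (all names hypothetical).
# Configuration arrives via environment variables, and no state is kept locally,
# so an orchestrator can run as many identical replicas as demand requires.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://db.example.invalid/charging")
PORT = int(os.environ.get("PORT", "8080"))


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":  # liveness/readiness probe for the orchestrator
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps({"status": "ok"}).encode())
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    print(f"serving on :{PORT}, backing store at {DATABASE_URL}")
    HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
```

A service that instead hard-codes a connection to a specific on-premises database instance and writes to local disk cannot be scaled out this way, whatever the data sheet calls it.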

A turning point

It’s surprising that more CTOs have not connected the dots when it comes to the benefits of the public cloud. Given the cost pressures facing the industry, the savings alone should make moving to the public cloud a no-brainer. 2019 has been a turning point for telco engagement with the public cloud — one of the biggest technological innovations of the past decade. Don’t take the easy route and go with a fake cloud.

 

Danielle Royston is CEO of Optiva Inc. (TSX: OPT) and has close to 20 years of executive experience in the technology industry with an emphasis on turning around enterprise software companies. Before joining Optiva, she served as Portfolio CEO for the ESW Capital family of companies, leading over 15 turnarounds during her tenure. Royston holds a B.S. in computer science from Stanford University.

Q&A: Mukaddim Pathan, Principal End-to-End Architecture & Technology Practices at Telstra

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this article Network Virtualization Asia spoke to Mukaddim Pathan, Principal End-to-End Architecture & Technology Practices at Telstra, about the role of automation and Continuous Integration/Continuous Deployment (CI/CD) in virtualization roll-outs.

Network Virtualization Asia (NVA): Can you start by telling us a bit about the network virtualization space in Australia at the moment? 

Mukaddim Pathan (MP): Like the rest of the global industry, telecommunications companies in Australia are focusing on developing their capabilities in the area of virtualization. In particular, the focus has been on transforming the network as the SDN/NFV technology paradigm matures to provide the same level of telco-grade performance, resiliency, disaster recovery and failover capabilities.

In order to reduce the unit cost of delivery, whilst maintaining a lean and modern operational model, service providers are consistently relying on virtualization technologies. Furthermore, with an increasing use of virtualization across the technology stack, we often see the rise of multi-cloud, multi-tenancy environments.

NVA: With that in mind, what are Telstra currently doing in the space to advance it? 

MP: Telstra has a dedicated Network Evolution 2020 (NE2020) program, which has a strong focus on Agile ways of working, and is underpinned by virtualization, automation, orchestration, data and standards-based API.

As mentioned, Network Function Virtualization infrastructure (NFVi) is an essential part of the NE2020 program, as it helps us deliver a low-cost, open-standards network cloud ecosystem at scale and supports the transformation of Telstra’s network.

Furthermore, NFV and SDN enable the transition of hardware-appliance-centric functions to software-centric tenants on a centrally engineered platform/ecosystem. Cloud-enabling our infrastructure by adopting SDN/NFV throughout the network involves a number of key initiatives that are currently being undertaken:

  • Virtualization of existing and new network functions, with the introduction of resource and service orchestration.
  • Automated VNF onboarding and lifecycle management of virtual network services.
  • Availability of services through standards-based APIs, provisioned through Infrastructure as Code and containerized solutions.
  • Automated delivery and operations of various network services using a home-grown, end-to-end Continuous Integration and Continuous Deployment (CI/CD) platform for build/configuration management, test management, release management and inventory management (a simplified sketch follows below).
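As a purely illustrative sketch of the kind of pipeline described in the last bullet (this is not Telstra’s platform, and every function and package name below is hypothetical), an end-to-end onboarding flow chains validation, certification testing, inventory registration and catalogue release so that a VNF package moves through the stages without manual hand-offs:

```python
# Hypothetical sketch of a VNF onboarding pipeline: each stage is a function,
# and a package only progresses if the previous stage succeeds.
# This is not any vendor's or operator's actual tooling.

def validate_package(package_path: str) -> dict:
    """Check descriptors, images and licences in the VNF package."""
    print(f"validating {package_path}")
    return {"name": "example-vnf", "version": "1.0.0"}

def run_certification_tests(descriptor: dict) -> bool:
    """Deploy into a sandbox VIM and run functional/performance tests."""
    print(f"testing {descriptor['name']} {descriptor['version']}")
    return True

def register_in_inventory(descriptor: dict) -> None:
    """Record the certified VNF in the service/resource inventory."""
    print(f"registering {descriptor['name']} in inventory")

def release_to_catalogue(descriptor: dict) -> None:
    """Publish the VNF so the orchestrator can instantiate it on demand."""
    print(f"releasing {descriptor['name']} to the catalogue")

def onboard(package_path: str) -> None:
    descriptor = validate_package(package_path)
    if not run_certification_tests(descriptor):
        raise RuntimeError("certification failed; package rejected")
    register_in_inventory(descriptor)
    release_to_catalogue(descriptor)

if __name__ == "__main__":
    onboard("example-vnf-1.0.0.zip")
```

In a real CI/CD platform each stage would of course call out to the build, test and inventory tools rather than print messages, but the shape of the flow is the same.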

NVA: How are VNFs being introduced into the network today? Why do you think they are so important for progress in the industry?

MP: Telstra has been working with partners to progressively introduce VNFs in our network. Whilst the industry is still maturing in establishing best practices for VNF packaging and automated VNF onboarding solutions, it is important for service providers to set guidelines for partners on how VNFs are prepared and provided to them.

As an industry leader, Telstra has developed an automated onboarding process for VNFs and established workflows using an integrated CI/CD platform, which has led to efficient onboarding of VNFs, potentially reducing the lead time from months to days.

VNF onboarding focuses on providing a common user experience and a standards-based framework to onboard and lifecycle-manage workloads in a multi-cloud / multi-VIM NFVi environment, and it leverages end-to-end CI/CD and automation capabilities for the evaluation, packaging, certification and deployment of various network functions.

NVA: And how do things like automation and CI/CD practices come into play with this?

MP: Automation and CI/CD practices are key to managing and evolving virtualized infrastructure within the service provider’s environment. As the network becomes software-centric, it is important that these practices are leveraged to ensure seamless maintenance and management (device, configuration, etc.) of network elements.

At Telstra we have deployed a modular, scalable, extensible and centralized CI/CD and automation platform that provides fully integrated, ready-to-use capabilities and tools as a shared service across Networks, by consolidating and standardizing multiple functionalities (including, but not limited to, pipeline management, configuration management, repositories, test management, artefact management, etc.) and toolsets.

A flexible and re-usable test automation enablement layer has been added for various network/IT services, converging the test automation process through a single channel. It allows seamless integration of test tools with the agile management layer and is exposed to various consumers via RESTful APIs.
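To give a sense of what exposing such a layer over RESTful APIs can look like for a consumer, a test run might be triggered with a single HTTP call. The endpoint, payload fields and token below are invented for this example, which assumes the widely used requests library:

```python
# Hypothetical example: a consumer of a test automation layer requests a
# regression run over REST. The endpoint, fields and token are invented.
import requests

response = requests.post(
    "https://test-automation.example.com/api/v1/test-runs",
    json={"service": "vnf-firewall", "suite": "regression", "environment": "staging"},
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. a run id and a status URL to poll
```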

NVA: With all these advancements, how do you think operators fit into the virtualization space today? What is their role?

MP: The role of service providers in the virtualization space is key, as NFV and SDN enable the “softwarisation” of the network and assist the transition of network functions onto a centrally engineered platform/ecosystem. NFV/SDN and the evolution to a software-defined network should be a foundational tenet of a service provider’s growth strategy, and they can influence this space by:

  • Adopting a template-driven and modular architecture for highly configurable and extensible solutions (enabling multi-cloud support)
  • Promoting a containerised, microservices-driven approach for distribution and scalability
  • Supporting automated deployment of virtual network overlays, including Vlinks, subnets, etc., using simplified tools
  • Enabling automated deployment of network functions and virtualized infrastructure as code

NVA: Can you tell us about your upcoming talk at Network Virtualization Asia? What attracted you to the event, and can you reveal a little about what you will be discussing?

MP: In my talk, I’ll discuss how operators are relying on Automation and Continuous Integration/Continuous Deployment (CI/CD) practices as Virtual Network Functions (VNFs) are introduced into the network, while operators still need to manage their existing physical infrastructure. It will be linked to Telstra’s NE2020 architecture, whereby we have deployed an abstraction layer that enables underlying technology domains to conduct orchestration, exposing relevant network services for consumption. One specific example I will cover is how VNFs are onboarded onto the network, configured, and lifecycle-managed using these software engineering practices, and orchestrated via the stated network abstraction layer.

Virtualization and Automation are key technology themes that are being leveraged in the way we develop solutions to meet Telstra’s long-term objectives of growth and cost reduction. As we focus on our efficiency and productivity targets, industry knowledge and relevant practical applications will be key. It is important that we keep ourselves up-to-date on the industry trends by actively participating in major global events such as Network Virtualization & SDN Asia.

For me, it is the premier event in the APAC region, which focuses on SDN, NFV and automation topics that are highly relevant to us. Crucially, by participating in this event, I’m able to connect with global Telco peers and share knowledge on the emerging topics such as SDN/NFV, Edge Computing, Automation and 5G, as well as learn from peers and apply their findings in the service provider context.

 

Dr. Mukaddim Pathan heads up the End-to-End Architecture & Technology Practices group for Networks & IT within Telstra, the largest Telecommunications company in Australia. At Telstra, his key responsibility is to deliver architecture and technical solutions towards Telstra 2022 (T22) outcomes, specifically focusing on network abstraction, edge computing, automation, and 5G use cases.

Want to hear more from Mukaddim and a fantastic line-up of other expert speakers at Network Virtualization & SDN Asia 2019?  Get your free visitor pass here.

Three UK claims 5G-ready cloud core first ahead of August launch

Even though it won’t be flicking the 5G switch until next month, Three UK has decided to bang on about its new virtualized core once more.

We first heard about this whizzy new core, built in partnership with Nokia, back in February. At the time we assumed that would be the last we’d hear about it until the formal launch of Three UK’s 5G network, but Three seems to think we need just one more teaser first.

So, once more for those at the back, this is all about actually using this virtualization tech we’ve been hearing about for so long to make a secure, scalable, flexible core that is capable of fully delivering the 5G dream. It will be housed in 20 dedicated data centres scattered around the country to deliver edge computing benefits such as lower latency. This is also a good case study for Nokia to show how good it is at this sort of thing.

“Our new core network is part of a series of connected investments, totalling £2 billion, that will provide a significant step change in our customers’ experience,” said Dave Dyson, CEO of Three UK. “UK consumers have an insatiable appetite for data as well as an expectation of high reliability. We are well positioned to deliver both as we prepare for the launch of the UK’s fastest 5G network.”

“This is an exciting time for both Nokia and Three UK, as together we work towards the future of telecommunications networks,” said Bhaskar Gorti, President of Nokia Software. “This project delivers a joint vision that has been forged from the catalyst of Three’s strategy for complete business transformation. The project will deliver a flexible 5G core network, enabling the next generation of mobile services and cementing Three UK as a true leader of 5G in the UK.”

Three was careful to give shout-outs to some of its other partners in this project, which include Affirmed Networks for traffic management, Mavenir for messaging and Exfo, Mycom and BMC for OSS. Not only will this core network be central to Three UK’s strategy for the next decade, it will also provide a good live test of the kinds of technology everyone will be reliant upon before long. No pressure then, see you in August.

Q&A with Rupesh Chokshi – Assistant Vice President, Edge Solutions Product Marketing Management, AT&T Business

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. Rupesh Chokshi is a leader in technology with a strategic focus for growth in global technology and telecommunications. He currently leads the product marketing team within Edge Solutions for AT&T Business which focuses on product management, strategy and business development, and is transforming services and networks using software-defined networking (SDN), network function virtualization (NFV) and SD-WAN technologies.

To help determine the state of virtualization today, Light Reading spoke with Rupesh Chokshi – Assistant Vice President, Edge Solutions Product Marketing Management at AT&T Business – and one of the industry-leading experts presenting at this year’s Network Virtualization & SDN Americas event in September.

Light Reading (LR): How has network virtualization evolved in the last three years?

Rupesh Chokshi (RC): AT&T has been in the business of delivering software-centric services for several years, and we’ve seen adoption from businesses looking to update their infrastructures, increase their agility and transform their businesses. Networks are almost unrecognizable from what they used to be – data traffic on our mobile network has grown more than 470,000% since 2007, and video traffic increased by over 75% in the last year. Given the new network demands, companies need to adapt by changing the way they manage networks.

We took a unique approach with our infrastructure by using software-defined networking (SDN) and network function virtualization (NFV) in our own network, meeting our goal of a 65% virtualized network by 2018 and setting us up to achieve our goal of 75% virtualization by 2020. At the same time as we started using SDN and NFV in our own network, we utilized SDN to deliver AT&T’s first SDN service, AT&T Switched Ethernet Service with Network on Demand (ASEoD). This allowed thousands of customers to provision and manage their networks in a fraction of the time it took in the past, and now enables them to scale bandwidth on demand to meet their business’ seasonality.

ASEoD was only the first of a series of solutions we are creating to address shifting network needs. Three years ago, we introduced our first global software-based solution, AT&T FlexWare℠, which uses both SDN and NFV to increase business agility by simplifying the process for dynamically adding and managing network functions.

LR: What technology developments are you most excited about for the future?

RC: The work we did up to this point to deliver SDN within our network and for our customers set us up for the next generation of wireless technology, 5G. As the first SDN-enabled wireless technology, and the first wireless network born in the cloud, 5G will ultimately enable new use cases that take advantage of network slicing, the ability to support a high number of IoT devices and greater low-latency edge compute capabilities.

In addition, we are collaborating with VMware SD-WAN by VeloCloud to implement 5G capabilities into our software-defined wide area networking (SD-WAN). This will give businesses new levels of control over their networks and is key for companies looking to use SD-WAN with a high-speed, low-latency 5G network as their primary or secondary WAN connection type.

LR: How can businesses move forward with virtualization today?

RC: Today, businesses need to make sense of data faster and more efficiently than ever before, which is driving businesses to evaluate how they use their network for all applications, and to find ways to maximize their resources. One way companies can do this and move forward with virtualization is through AT&T’s comprehensive SD-WAN portfolio. AT&T’s SD-WAN technology supports this new way of working by letting companies define policies based on individual business needs using centralized software-based control.

LR: How can businesses determine the business benefits and ROI of virtualization today?

RC: Businesses can determine the business benefits of virtualization through cost savings, application-level visibility and near real-time analytics.

Potential cost savings are one of the key benefits of SD-WAN touted by technology suppliers and service providers alike. In our experience, it is during the process of fleshing out the technical details of the solution and how best to integrate it into their network that enterprises begin to fully appreciate where those cost benefits may come from, as well as other benefits or features that may also be important to them. Keep in mind the importance of considering potential cost savings in the context of total cost of ownership, not just the relative cost of the CPE versus the cost of the network access.

Additionally, SD-WAN technology can provide more application-level visibility and control on a per site basis, and these capabilities go far to help customers assess and experience the benefits of the performance of their network access and transport.

SD-WAN also enables customers to access analytics in near real time or on a historical basis for bandwidth consumption and application visibility. This is instrumental in setting KPIs and measuring ROI and planning for future network growth.

LR: What virtualization strategies should businesses be focusing on?

RC: Businesses need to adopt efficient, high-performing networks to take advantage of the newest technology and bandwidth needs. Automation is a great example of this. As businesses require more bandwidth, we need to provide more elegant solutions in order for them to take full advantage of more ubiquitous, high-speed broadband.

Additionally, while digital transformation is top of mind for businesses of all sizes and in every industry, dynamic SD-WAN is still in a relatively early stage of growth and adoption. And for others, MPLS and IPsec remain important options. Hybrid WAN designs will continue to be popular as customers utilize multiple technologies (MPLS, IPSec, SD-WAN) for optimal results.

LR: How can businesses build these technologies into their long-term business models?

RC: We live in a digital economy, and AT&T provides fundamental platforms for businesses to grow, differentiate and innovate. We work with businesses of all sizes to help transform their long-term business models through technology solutions delivered in the form of a managed service. Customers come to us because of our expertise, breadth and depth of capabilities, global scale and innovation in areas such as software defined networking, network function virtualization, mobility, IoT and SD-WAN.

As businesses grow, they need to think about their overall networking health and how they can use their networks to meet all their business objectives. Key considerations in bringing that to life include:

  • Holistic solutions that can combine SD-WAN functionality with network services from AT&T or other providers, virtualized CPE, wired and wireless LAN, security, voice over IP and much more;
  • Reduced operational expense and less need for in-house expertise with a managed solution that handles all aspects of the end-to-end solution design and setup;
  • Global deployment options that remove the headaches of onsite installation and support in countries around the world; and
  • Flexible SD-WAN policy management where the customer can choose to set and update application level policies themselves or rely on AT&T experts to manage this for them.

Want to deep dive into real-world issues and virtualization deployment challenges with Rupesh and other industry leaders?

 

Join Light Reading at the annual Network Virtualization & SDN Americas event in Dallas, September 17-19. Register now for this exclusive opportunity to learn from and network with industry experts. Communications service providers get in free!

Open source and collaboration stacks up in telco virtualization on the cloud

OpenStack Ecosystem Technical Lead Ildikó Vancsa drives NFV-related feature development activities in projects like OpenStack’s Nova and Cinder, as well as onboarding and training. The Network Virtualization event team caught up with her ahead of Network Virtualization & SDN Europe in May.

“I’m an open source advocate both personally and professionally and when the movement towards virtualisation and the cloud started, telcos started looking into OpenStack and we began to explore how they could use us,” she explained.

“It has been a very interesting journey, even if it hasn’t always been easy. Virtualisation really is a transformation, both from a mindset perspective and a technology perspective,” she said.

The concept sounds simple enough – lift functions from a physical hardware stack and put them into a virtual machine in the cloud. The reality is quite different. In an ideal world, Vancsa suggested, you would just rewrite everything from scratch, but this approach is not possible.

“It’s a short sentence to summarize it but it’s a really hard thing to do; especially because those functions are often tightly coupled with specialised hardware,” she said.

This hardware traditionally represented the whole function stack, all the way out to software. As Vancsa put it: “We needed to support this journey while at the same time looking for where network functions can go. We need to be able to support the legacy network functions as well as providing an environment for new applications, written with this new mindset”.

“We do not have to reinvent the wheel and we [OpenStack] didn’t try that. We worked with companies and vendors in the NFV and networking space to be able to plug the components that they prefer to use into OpenStack and provide the functionality that they need,” she said.

OpenStack has now moved away from being one huge code stack to become more modular, offering standalone components such as load balancing as a service.

One crucial aspect of getting OpenStack right is collaboration across the telco industry as open source becomes more and more widespread. In Vancsa’s words: “Back in the old days you had one vendor and they supplied your hardware and software and it was all tightly integrated and, no questions asked, it was supposed to work.”

“As open source has become more popular in telecom environments, we see more operators picking up commodity hardware and have a couple of vendors supplying software. So they might have one vendor supplying the MANO components and another for the orchestration layer,” she explained. “This means it is critical to keep an eye on interfaces and interoperability.”

For Vancsa, collaboration, open infrastructure and open source software are vital for virtualization to succeed, especially as telcos move more into the cloud, and events such as Network Virtualization & SDN Europe are vital for this.

“You get the chance to talk to people and basically make the first connection, after which it is just so much easier to collaborate on other forums,” she enthused.

 

Ildikó Vancsa was also a panellist on a recent webinar from Network Virtualization & SDN Europe on Virtualization and the Cloud. You can listen to the webinar on-demand here. Network Virtualization & SDN Europe 2019 takes place 21-23 May at the Palace Hotel in Berlin. Find out more here.

Red Hat gives thanks for Turkcell virtualization win

Turkish operator Turkcell has launched a virtualization platform called Unified Telco Cloud that’s based on Red Hat’s OpenStack Platform.

As the name implies, this new platform is all about centralising all its services onto a single virtualized infrastructure. This NFVi then allows easy selection and implementation of virtual network functions, or so the story goes. Examples of operators going all-in on this stuff are still sufficiently rare for this to be noteworthy.

As a consequence this deal win is also a big deal for Red Hat, which has invested heavily in attacking the telco virtualization market from an open source direction, as is its wont. Red Hat OpenStack Platform is its carrier-grade distribution of the open source hybrid cloud platform. Turkcell is also using Red Hat Ceph Storage, a software-defined storage technology designed for this sort of thing.

“Our goal is to remake Turkcell as a digital services provider, and our business ambitions are global,” said Gediz Sezgin, CTO of Turkcell. “While planning for upcoming 5G and edge computing evolution in our network, we need to increase vendor independence and horizontal scalability to help maximise the efficiency and effectiveness of our CAPEX investment.

“With the Unified Telco Cloud, we want to lower the barrier to entry of our own network to make it a breeding ground for innovation and competition. In parallel, we want to unify infrastructure and decrease operational costs. Red Hat seemed a natural choice of partner given its leadership in the OpenStack community, its interoperability and collaboration with the vendor ecosystem and its business transformation work with other operators.”

Another key partner for Turkcell in this was Affirmed Networks, which specialises in virtualized mobile networks. “We initially selected Affirmed Networks based on their innovation in the area of network transformation and virtualization and their work with some of the world’s largest operators,” said Sezgin.

It’s good to see some of the endlessly hyped promise of NFV actually being put into effect and it will be interesting to see what kind of ROI Turkcell claims to have got from its Unified Telco Cloud. With organisations such as the O-RAN Alliance apparently gathering momentum, open source could be a major theme of this year’s MWC too.

Why open source is the backbone enabling 5G for telcos

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this piece Alla Goldner looks at ONAP and its contribution to virtualization and preparing the way for 5G for telcos.

5G is a technology revolution – paving the way for new revenue streams, partnerships and innovative business models. More than a single technology, 5G is about the integration of an entire ecosystem of technologies. Indeed, a recent Amdocs survey found that nearly 80% of European communications service providers (CSPs) expect the introduction of 5G to expand revenue opportunities with enterprise customers. It also found that 34% of operators plan to offer 5G services commercially to this sector by the end of 2019, a figure that will more than double to 84% by the end of 2020.

As with every revolution, extracting its full potential value will require a set of enablers or tools to connect the new technology with the telco network. For CSPs in particular, the need for new and enhanced network management systems is an established fact, with more than half of European operators saying they would need to enhance their service orchestration capabilities. But they want to do this in a flexible, agile and open manner, and not be burdened with the constraints and limitations of traditional tooling approaches.

ONAP: the de-facto automation platform

This is where ONAP (Open Network Automation Platform) enters the picture. Developed by a community of open source network evangelists from across the industry, it has become the de-facto automation platform for carrier grade service provider networks. Since its inception in February 2017, the community has expanded beyond the pure technical realm to include collaboration with other open source projects such as OPNFV, CNCF, and PNDA, as well as standards communities such as ETSI, MEF, 3GPP and TM Forum. We also anticipate collaboration with the Acumos Project to feed ONAP analytics with AI/ML data and parameters. Such collaboration is essential when it comes to delivering revolutionary use cases, such as 5G and edge automation, as its implementation requires alignment with evolving industry standards.

ONAP and 5G

CSPs consider 5G to be not just a radio and core network overhaul, but rather a significant architecture and network transformation. And ONAP has a key role to play in this change. As an orchestration platform, ONAP enables the instantiation, lifecycle management and assurance of 5G network services. As part of the roadmap, ONAP will eventually have the ability to implement resource management and orchestration of 5G physical network functions (PNFs) and virtual network functions (VNFs). It will also have the ability to provide definition and implementation of closed-loop automation for live deployments.

The 5G blueprint is a multi-release effort, with Casablanca, ONAP’s latest release, introducing some key capabilities around PNF integration and network optimization. Given that the operators involved with ONAP represent more than 60% of mobile subscribers and the fact that they are directly able to influence the roadmap, this paves the way for ONAP, over time, to become a compelling management and orchestration platform for 5G use cases, including hybrid VNF/PNF support.

Another capability in high demand is support for 5G network slicing, which is aggregated from access network (RAN), transport and 5G core network slice subnet services. These, in turn, are composed of a combination of other services, virtual network functions (VNFs) and physical network functions (PNFs). To support this, ONAP is working on the ability to model complex network services as part of the upcoming Dublin release.
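As a rough illustration of that composition (the class and field names below are invented for this sketch and are not ONAP’s actual information model), a network slice can be modelled as a service aggregated from RAN, transport and core slice-subnet services, each of which is in turn built from VNFs and PNFs:

```python
# Illustrative data model only: a 5G network slice composed of slice-subnet
# services, which are in turn composed of virtual and physical network functions.
# Class and field names are invented, not ONAP's information model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class NetworkFunction:
    name: str
    kind: str          # "VNF" or "PNF"


@dataclass
class SliceSubnet:
    domain: str        # "RAN", "Transport" or "Core"
    functions: List[NetworkFunction] = field(default_factory=list)


@dataclass
class NetworkSlice:
    name: str
    subnets: List[SliceSubnet] = field(default_factory=list)


urllc_slice = NetworkSlice(
    name="factory-urllc",
    subnets=[
        SliceSubnet("RAN", [NetworkFunction("gNB-DU", "PNF"), NetworkFunction("gNB-CU", "VNF")]),
        SliceSubnet("Transport", [NetworkFunction("sdn-fabric", "VNF")]),
        SliceSubnet("Core", [NetworkFunction("UPF", "VNF"), NetworkFunction("AMF", "VNF")]),
    ],
)
print(urllc_slice.name, [s.domain for s in urllc_slice.subnets])
```

Modelling the slice as a composition of subnet services is what lets an orchestrator manage the end-to-end slice while delegating each domain to its own lifecycle.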

To summarize the above, 5G and ONAP are together two critical pieces of the same puzzle:

  • ONAP is the de-facto standard for end-to-end network management systems, a crucial enabler of 5G
  • ONAP enables support of existing and future networking use cases, and provides a comprehensive solution to enable network slicing as a key embedded capability of 5G
  • By leveraging a distributed and virtualized architecture, ONAP is active in the development of network management enhancements and distributed analytics capabilities, which are required for edge automation – a 5G technology enabler

The importance of vendor involvement: Amdocs case study

Amdocs has been involved in ONAP since its genesis as ECOMP (Enhanced Control, Orchestration, Management and Policy), the orchestration and network management platform developed at AT&T. Today, Amdocs is one of the top vendors participating in ONAP developments, and has supported proven deployments with leading service providers.

Amdocs supports both platform enhancements and use case development activities including:

  • SDC (Service Design and Creation)
  • A&AI (Active and Available Inventory)
  • Logging and OOM (ONAP Operations Manager) projects
  • Modeling and orchestration of complex 5G services, such as network slicing

Amdocs’ and other vendors’ participation in ONAP enables the ecosystem to benefit from a best-in-class NFV orchestration platform, supporting the full lifecycle of 5G services in an open, multi-vendor environment – from service ideation and modeling, through instantiation, commissioning, modification, automatic closed-loop operations and analytics, to decommissioning.

The result is a win-win for CSPs, Amdocs, other vendors, as well as the ONAP community as a whole.

The benefit of collaboration for CSPs is that it provides them with comprehensive monetization capabilities that enable them to capture every 5G revenue opportunity. The benefit for vendors such as Amdocs is to further their knowledge of best practices, which then flows back to the ONAP community.

 

About the author: Since ONAP’s inception, Alla Goldner has been a member of the ONAP Technical Steering Committee (TSC) and Use Case subcommittee chair. She also leads all ONAP activities at Amdocs.

Alla Goldner is on the advisory board of Network Virtualization & SDN Europe. Find out what’s on the agenda and why you should be in Berlin this May.

Dell flies through Q3 with 15% revenue growth

Dell Technologies has reported its financials for the third quarter of 2018, with few complaining about 15% revenue growth to $22.5 billion.

While the company still has a considerable bill to pay off following the $67 billion acquisition of EMC in 2016, the firm has boasted about paying off approximately $1.3 billion of core debt after three months of positive growth across the group.

“The digital transformation of our world is underway, and we are in the early stages of a massive, technology-led investment cycle,” said Michael Dell, CEO of Dell Technologies. “Dell Technologies was created to meet this opportunity head on for our customers and our investors. You can see the proof in our strong growth, in our powerful innovation and in the depth of our customer relationships.”

With total revenues standing at $22.482 billion, most of the numbers are heading in the right direction. The company is still loss-making, though this has narrowed to $356 million for the last three months and $522 million for 2018 so far, improvements of 13% and 78% respectively compared to the same periods of 2017.

Starting with the Infrastructure Solutions Group, revenue for the third quarter was $8.9 billion, a 19% increase, with servers and networking delivering their sixth consecutive quarter of double-digit revenue growth. Storage products saw a 6% increase in revenues, taking the total up to $3.9 billion.

The Client Solutions Group saw revenues increase by 11% to $10.9 billion, with Dell suggesting strong growth in both the commercial and consumer units. Commercial revenue grew 12% to $7.6 billion, and Consumer revenue was up 8% to $3.3 billion, while the firm outperformed the PC industry for total worldwide units.

In the VMware business unit, revenue for the third quarter was $2.2 billion, up 15%, with operating income of $768 million. This is one area where the Dell management team feel some of the biggest benefits of the EMC acquisition are being felt, with the dreaded ‘synergies’ tag emerging. However, it’s the external AWS partnership which seems to be claiming the majority of the plaudits.

“Overall, I think yesterday’s announcement at re:Invent just reinforced the momentum that we have in the partnership with Amazon,” said Patrick Gelsinger, CEO of VMware. “And clearly, the VMware Cloud on AWS, we continue to see great customer uptake for that. We reinforce the expansion of that with the Relational Database Service, the RDS announcement that we did at VMworld and yesterday’s Outposts announcement just puts another pillar in that relationship. So now I’d say, we’re on Chapter 3 of the partnership. And overall, we just can see the continued momentum.”

Dell Technologies is not a company which gets a huge amount of press inches nowadays, though trends are certainly heading in the right direction here.

Italians clearly aren’t that suspicious of Huawei

Despite governments around the world turning against Chinese vendors, Telecom Italia has agreed a new partnership with Huawei based on Software Defined Wide Area Network (SD-WAN) technology.

As part of a strategy aimed at evolving TIM’s network solutions for business customers, Huawei’s SD-WAN technology will be incorporated to create a new TIM service model which will allow customer companies to manage their networks through a single console.

“Today, more than ever, companies need networks that can adapt to different business needs over time, in particular to enable Cloud and VoIP services,” said Luigi Zabatta, Head of Fixed Offer for TIM Chief Business & Top Clients Office. “Thanks to the most advanced technologies available, these networks can be managed both jointly and by customers themselves through simple tools.

“The partnership with Huawei allows us to expand our value proposition for companies and to enrich our offer through the adoption of a technological model that is increasingly and rapidly emerging in the ICT industry.”

The partnership is a major win for Huawei considering the pressure the firm must be feeling with suspicion mounting around the world. Just as more countries are clamping down on Huawei’s ability to do business, TIM has offered it a windfall.

Aside from the on-going Chinese witch hunt over in the US, the Australians have banned Huawei from participating in the 5G bonanza and Korean telcos have left the vendor off preferred supplier lists. Just to add more misery, the UK is seemingly joining in on the trend.

In recent weeks, a letter was sent out from the Department of Digital, Culture, Media and Sport, and the National Cyber Security Centre, warning telcos of potential impacts to the 5G supply chain from the Future Telecom Infrastructure Review. China was not mentioned specifically, and neither was Huawei, but sceptical individuals might suggest China would be most squeezed by a security and resilience review.

The rest of the world might be tip-toeing around the big question of China, but this partnership suggests TIM doesn’t have the same reservations.

Nokia launches some actual applications for SDN

All the hype surrounding software-defined networking is finally starting to yield some tangible results in the form of three apps from Nokia.

Deciding to kill two buzzwords with one stone, Nokia is claiming its new WaveSuite open applications will jump-start optical network digital transformation. It consists of three apps: Service Enablement, Node Automation and Network Insight. The point of these apps is apparently to offer businesses a new degree of access to networks that is expected to yield novel commercial opportunities.

To help us get our heads around this new piece of networking arcana we spoke to Kyle Hollasch, Director of Marketing for Optical Networking at Nokia. He was most keen to focus on the service enablement app, which he said is “the first software that tackles the issue of resell hierarchy.”

Specifically we’re talking about the reselling of fixed line capacity. This app is designed to massively speed up the capacity reselling process, with the aim of turning it into a billable service. The slide below visualises the concept, in which we have the actual network owner at the base and then several levels of capacity reselling, allowing greater degrees of specialisation and use-case specific solutions.

Nokia WaveSuite slide 1

The node automation app allows network nodes to be controlled via an app on a smartphone, thanks to the magic of SDN. In fact this would appear to be the epitome of SDN as it’s only made possible by that technology. The slide below shows how it is, at least in theory, possible to interact with a network element via a smartphone, which also opens up the ability to use other smartphone tools such as the GPS and camera.

Nokia WaveSuite slide 2

The network insight app seems to do what it says on the tin, so there doesn’t seem to be the need for further explanation at this stage. “These innovations are the result of years of working closely with our customers to address all aspects of optical networking with open applications enhancing not just operations, but opening up new services and business models,” said Sam Bucci, Head of Optical Networks for Nokia.

As a milestone in the process of virtualizing networks and all the great stuff that’s supposed to come with that, the launch of actual SDN apps seems significant. Whether or not the market agrees and makes tangible business use of these is another matter, however, and only time will tell if good PowerPoint translates into business reality.