Future proofing networks with open source technology

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this piece Tom Canning, VP of IoT at Canonical, makes the case for open source software as the future of telecoms networks.

We live in a time of immediacy and excess, nowhere more so than in our relationship with the phones in our pockets. Unlimited data packages, media streaming, and calls over the airwaves wherever you happen to be. As a result, mobile operators face unprecedented pressure to deliver more data, faster connectivity, better coverage, and more functionality every month.

To put even greater weight on the shoulders of telecoms providers, a new phenomenon is taking centre stage – business intelligence. Information is now less likely to be consumed by end users and more likely by the individual products and machines that make up the Internet of Things (IoT).

IDC predicts that by 2025, 60 per cent of the world’s data will be generated by enterprises rather than consumers – double the 2017 share – much of it by the infrastructure behind smart devices. Autonomous cars, smart cities, sensor networks, and connected industrial equipment all require extensive bandwidth to function. Operators need to rethink how the connected network is architected: it must support faster transfer of data, greater density, and dramatically reduced latency. And they must add this functionality and flexibility while simultaneously driving down the costs of deploying, sustaining, and managing network infrastructure.

A new way of thinking

For decades, the telecoms industry was dominated by proprietary businesses and operating models. As market pressures evolved, however, providers were forced to find new, innovative solutions. The result is that telcos have embraced open-source principles in recent years – an approach that transformed the computer industry, from transaction processing to supercomputing, smartwatches and wearables, and the wireless network infrastructure supporting each of them. The lesson has come just in time, with the dawn of 5G promising faster speeds and more reliable connections for internet-enabled devices across a diverse set of locations.

The success of 5G rests on software-defined networking (SDN), whose main concept is to decouple the infrastructure of wireless networks from expensive, closed hardware and shift it to an intelligent software layer running on top of commodity hardware. 5G and open source, therefore, have become an attractive combination for telecoms, with major operators worldwide pioneering new technologies and use cases.

Open source software in particular is key to 5G and IoT development, because software can power the automation of the mission-critical functions required to support the high speeds and low latency of 5G, as well as the huge number of endpoints in the IoT. In short, it is the democratisation of wireless network infrastructure that will allow telcos to stay relevant in the world of 5G and connected devices.

Spotlighting the innovators

Several initiatives are already in motion that seek to break the proprietary stranglehold on telco players and deliver SDN to the wireless network. They include both major operators and wireless infrastructure vendors, while disruptive challengers and startups are making an impact, too.

Operators and businesses alike are actively engaged in collaborative alliances to help drive the uptake of open source. These include, but are not limited to, the O-RAN Alliance, which includes members such as AT&T, Deutsche Telekom, Intel, Verizon, and SK Telecom, and the Open vRAN initiative, which is backed by Cisco. The MyriadRF open source initiative, meanwhile, was founded by Lime Microsystems in 2012, with the purpose of democratising wireless innovation. It has grown to include a vast array of contributors, from hobbyists and wireless enthusiasts to professional engineers and large equipment manufacturers.

Vodafone is one such partner. The company had a goal of extending coverage and adding services to its 4G corporate network. Working with Lime Microsystems’ CrowdCell – a network-in-a-box solution that runs on top of commodity hardware – Vodafone was able to deliver to IT managers a new SDR-based, high-capacity network ideally suited to IoT applications. IoT is an area that demands intelligence within the network, to gather data and predict outcomes in real time, and an SDR network is well optimised for this.

Looking to the future

As 5G begins to roll out across the enterprise, the need for more affordable, capable and agile networks becomes imperative. SDN holds the key, placing increased intelligence directly on top of commodity hardware. The future of mobile connectivity, therefore, is software-defined. As an approach, it also promotes third-party app development and greater community involvement, which allows operators to add value and differentiate from the competition beyond the traditional measures of coverage and subscription costs.

Open source is the answer to future proofing network infrastructure, with its collaborative and diverse heritage the perfect partner for innovations across 5G and IoT. An open, software-defined model will help operators meet the growing need for faster, more flexible, and more secure systems. It’s a case of adapt and survive.

 

Tom Canning is VP of IoT at Canonical Group, the developer of the open source OS Ubuntu. Prior to joining Canonical in 2017, Canning held a number of senior positions in the UK and the US, including at HP, Cisco and, most recently, Spigit. He is based in London and holds an electrical engineering degree from the University of Ottawa.

Open source and collaboration stacks up in telco virtualization on the cloud

OpenStack Ecosystem Technical Lead Ildikó Vancsa drives NFV-related feature development in projects such as OpenStack’s Nova and Cinder, as well as onboarding and training. The Network Virtualization event team caught up with her ahead of Network Virtualization & SDN Europe in May.

“I’m an open source advocate both personally and professionally and when the movement towards virtualisation and the cloud started, telcos started looking into OpenStack and we began to explore how they could use us,” she explained.

“It has been a very interesting journey, even if it hasn’t always been easy. Virtualisation really is a transformation from both a mindset and a technology perspective,” she said.

The concept sounds simple enough – lift functions from a physical hardware stack and put them into a virtual machine in the cloud. The reality is quite different. In an ideal world, Vancsa suggested, you would just rewrite everything from scratch, but this approach is not possible.

“It’s a short sentence to summarize it but it’s a really hard thing to do; especially because those functions are often tightly coupled with specialised hardware,” she said.

This hardware traditionally represented the whole function stack, all the way up to the software. As Vancsa put it: “We needed to support this journey while at the same time looking for where network functions can go. We need to be able to support the legacy network functions as well as providing an environment for new applications, written with this new mindset”.

“We do not have to reinvent the wheel and we [OpenStack] didn’t try that. We worked with companies and vendors in the NFV and networking space to be able to plug the components that they prefer to use into OpenStack and provide the functionality that they need,” she said.

OpenStack has now moved away from being one huge code stack to become more modular, offering standalone components such as load balancing as a service.

One crucial aspect of getting OpenStack right is collaboration across the telco industry as open source becomes more and more widespread. In Vancsa’s words: “Back in the old days you had one vendor and they supplied your hardware and software and it was all tightly integrated and, no questions asked, it was supposed to work.”

“As open source has become more popular in telecom environments, we see more operators picking up commodity hardware and have a couple of vendors supplying software. So they might have one vendor supplying the MANO components and another for the orchestration layer,” she explained. “This means it is critical to keep an eye on interfaces and interoperability.”

For Vancsa, collaboration, open infrastructure and open source software are vital for virtualization to succeed, especially as telcos move more into the cloud, and events such as Network Virtualization & SDN Europe play an important part in enabling that.

“You get the chance to talk to people and basically make the first connection, after which it is just so much easier to collaborate on other forums,” she enthused.

 

Ildikó Vancsa was also a panellist on a recent webinar from Network Virtualization & SDN Europe on Virtualization and the Cloud. You can listen to the webinar on-demand here. Network Virtualization & SDN Europe 2019 takes place 21-23 May at the Palace Hotel in Berlin. Find out more here.

F5 makes agile move with $670 million NGINX acquisition

App security outfit F5 is buying open-source application platform specialist NGINX to augment its multi-cloud offering.

F5 is hardly the first to notice the importance of the cloud in the evolution of the entire tech industry, nor is it unique in realising that open-source is a great way of making a multi-cloud environment work. But for a company of its size (revenues of $563 million in 2018) this certainly qualifies as putting your money where your mouth is.

“F5’s acquisition of NGINX strengthens our growth trajectory by accelerating our software and multi-cloud transformation,” said François Locoh-Donou, CEO of F5. “By bringing F5’s world-class application security and rich application services portfolio for improving performance, availability, and management together with NGINX’s leading software application delivery and API management solutions, unparalleled credibility and brand recognition in the DevOps community, and massive open source user base, we bridge the divide between NetOps and DevOps with consistent application services across an enterprise’s multi-cloud environment.”

“NGINX and F5 share the same mission and vision,” said Gus Robertson, CEO of NGINX. “We both believe applications are at the heart of driving digital transformation. And we both believe that an end-to-end application infrastructure – one that spans from code to customer – is needed to deliver apps across a multi-cloud environment. I’m excited to continue this journey by adding the power of NGINX’s open source innovation to F5’s ADC leadership and enterprise reach. F5 gains depth with solutions designed for DevOps, while NGINX gains breadth with access to tens of thousands of customers and partners.”

Open source and DevOps are often referred to in the same breath as part of a broader narrative around ‘agility’. One of the main benefits of the move to the cloud is the far greater choice, efficiency and flexibility it promises, but without a culture geared towards exploiting those opportunities they’re likely to be wasted. With this acquisition F5 is positioning itself as a partner for telcos heading in an agile direction.


Red Hat gives thanks for Turkcell virtualization win

Turkish operator Turkcell has launched a virtualization platform called Unified Telco Cloud that’s based on Red Hat’s OpenStack Platform.

As the name implies, this new platform is all about centralising all its services onto a single virtualized infrastructure. This NFVi then allows easy selection and implementation of virtual network functions, or so the story goes. Examples of operators going all-in on this stuff are still sufficiently rare for this to be noteworthy.

This win is consequently also significant for Red Hat, which has invested heavily in attacking the telco virtualization market from an open source direction, as is its wont. Red Hat OpenStack Platform is its carrier-grade distribution of the open source hybrid cloud platform. Turkcell is also using Red Hat Ceph Storage, a software-defined storage technology designed for this sort of thing.

“Our goal is to remake Turkcell as a digital services provider, and our business ambitions are global,” said Gediz Sezgin, CTO of Turkcell. “While planning for upcoming 5G and edge computing evolution in our network, we need to increase vendor independence and horizontal scalability to help maximise the efficiency and effectiveness of our CAPEX investment.

“With the Unified Telco Cloud, we want to lower the barrier to entry of our own network to make it a breeding ground for innovation and competition. In parallel, we want to unify infrastructure and decrease operational costs. Red Hat seemed a natural choice of partner given its leadership in the OpenStack community, its interoperability and collaboration with the vendor ecosystem and its business transformation work with other operators.”

Another key partner for Turkcell in this was Affirmed Networks, which specialises in virtualized mobile networks. “We initially selected Affirmed Networks based on their innovation in the area of network transformation and virtualization and their work with some of the world’s largest operators,” said Sezgin.

It’s good to see some of the endlessly hyped promise of NFV actually being put into effect and it will be interesting to see what kind of ROI Turkcell claims to have got from its Unified Telco Cloud. With organisations such as the O-RAN Alliance apparently gathering momentum, open source could be a major theme of this year’s MWC too.

Seven success factors for partnering in the age of open source

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this piece Susan James, Senior Director of Telecommunications Strategy at Red Hat, explores the ideal balance of in-house and outsourced talent in order to make the most of open source opportunities.

The breadth of technology knowledge that service providers now require has increased exponentially, while headcount at most service providers is stable or in decline, and the pool of specialist digital talent they need hasn’t been able to keep pace.

With a more diverse and collaborative ecosystem than ever before, choices are available – but what’s the right balance of making the most of your own talent versus looking externally? Here are several considerations when deciding how, when, and with whom to partner.

  1. Focus on your core business

For most companies, success comes from the core business. You have to closely examine your business and ask what your core business can provide that no one else can.

It’s easy to look at other companies and be tempted to duplicate the way they do things as a template for your own business. However, you need to recognise what’s good about your business, and focus on maximizing those aspects with innovation, rather than trying to replicate what others are doing.

When you know your core business, you can make informed decisions on what to build yourself, where to use external people to solve a problem, and what areas need third party software to do the job.

  2. Diversify to STEAM ahead

With a growing shortfall of specialized digital skills, we need to encourage people to enter emerging technology from outside of the expected STEM fields (science, technology, engineering and maths). Hence the A for ‘arts’ in the emerging term STEAM. We must be open to diverse types of skills and perspectives coming into the industry; they can help us understand the diverse customer base we all have.

In a recent internal meeting, I did a quick straw poll of how many people were engineers – less than half had an engineering background. I myself have an economics degree and ended up as a software product manager. You don’t need an engineering degree to use software, and you don’t need one to understand how to build software. I encourage you to look at your business as a whole, and what different perspectives can bring, when developing software. This applies to hiring as well as team building and partnering.

  3. Choose thought leaders

You have your core strategy and your internal teams. What are your criteria for choosing whom to team up with inside and outside your organization? Thought leadership is a priority, surely. But what makes a thought leader?

I would argue that true thought leadership today is not just someone who can solve today’s technology problems; nor is it painting a technical vision of the future. It’s recognizing the need to shape your organization with the capabilities that enable you to seize unknown opportunities that lie ahead. Red Hat’s CEO Jim Whitehurst calls this ‘organizing for innovation’. He argues that in today’s dynamic environment, planning as we know it is dead. Instead, you must build the mindset and the mechanisms that enable you to move faster and change as needed.

  4. Seek shared philosophy

Ask yourself what kind of partners you want to be associated with, and how you want to be perceived in the market. Move forward with partners whose core values align with yours. For example, as the world moves increasingly towards open source, do you want to be perceived as a leader in upstream* innovation? Do you want to be an active contributor to open source communities and help influence new features that are developed for real world needs? Or, do you desire to be a leader in technology adoption for bringing new capabilities to market? Look for a partner that displays or complements what you want to see in your business.

*Upstream communities are the projects and people that participate in open source software development and are also known as innovation engines.

  5. Prioritize honesty

Some say the measure of a true friend is someone brave enough to tell you the truth, and this can apply in business as well. Some companies aren’t used to openness and the honesty it brings – they are accustomed to being told what they want to hear by partners. So when they do hear the truth, it can come as a shock. They may see it as confrontational, or they may look for a second meaning.

But being honest and upfront enables companies to grow together. It provides the opportunity to quickly identify threats and know what you’re dealing with, both good and bad. Working in open source communities makes it easier to be honest about deficiencies because everyone can see them. All code is fully exposed and everyone works on the same information and code base, which allows everyone to more easily unite around common causes and problems.

When evaluating potential partners, determine their participation in open source communities, and be clear about the importance of open, honest communication in all business dealings.

  6. Establish clear partner engagement models

A relationship with a partner can be multifaceted. Collaborating in open source development is a separate engagement from working with that same partner on the business side of things, and likely looks quite different.

Upstream communities are all about rapid iteration, creativity, and innovation. Going to market with a product must be about reliability, security, and making sure the product works in practice.

It’s possible for these two areas to overlap: business needs can influence communities in a certain direction, and upstream collaboration between partners can solidify a business relationship. Or, you might work harmoniously with a company upstream, yet go out and compete with each other fiercely on the sales side.

Therefore, be clear from the outset about your engagement models and what constitutes success in each area. If you are not able to see a “win” for both parties, then the long-term success of the partnership is questionable.

  7. Understand open source

The proliferation of open source across industries is allowing new players to enter markets, and existing players to work more closely together. There are different ways to leverage open source technologies, and it’s important to understand how different uses impact your business.

Downloading open source software for free doesn’t mean that it doesn’t cost you anything. If you make customizations to that open source software, you need to be aware of what you’re taking on internally. Ensuring that the software remains secure and understanding how to manage its lifecycle takes resources and competence. Making changes that are not delivered upstream requires you to manage monitoring, maintenance, support, updates (including upstream changes), and the full software lifecycle yourself.

Choosing the supported software route (enterprise version of open source community software) requires you to pay a subscription fee to a software vendor, but it’s the vendor’s job to stabilize the software, certify it works with an ecosystem of other hardware and software, ensure it is safe to use over its lifetime, and provide guidance on the best way to integrate it with the existing environment for the desired results.

Vendors also differ in their level of open source community participation. Those that do not contribute changes back to upstream projects drift out of sync with the community version and can no longer take advantage of community innovation.

It’s also important to note that as software development in an area becomes less cutting-edge, people can be less likely to stick around. And unless you’re recognized as a leader in that particular area, it can be hard to attract people. Such might be the challenge for a communications service provider trying to recruit talent for working on containers and competing with companies recognized for container innovation.

This relates back to your core business – what would you rather have your people working on? Where can your internal innovation and differentiation bring you the most value?

Final word

This isn’t a ‘one and done’ process – it’s cyclical. You should continually evaluate your business objectives and results in the context of what’s happening around you, and adjust your approach as you go. Remember too that it’s ok to make mistakes – it’s the mistakes that help you learn for the future. A company that is strategic about the projects it gets involved in, and that is not afraid to change course and drop the projects that aren’t successful, is going to be more agile and adaptable to change.

Be open to diverse skill sets, and be honest within your organization and with your partners about what’s working and what’s not. These are all long term strategies that will best position you for success.

 

Susan joined Red Hat in May 2018, after 27 years at Ericsson, where she was head of Product Line NFV Infrastructure. While at Ericsson, she worked in the Enterprise, Wireline, Network and Cloud organizations. She worked extensively with the IP Multimedia Subsystem (IMS) and was responsible for a number of the network functions in the Ericsson portfolio. A product management veteran, her career has focused on developing products to address technology transitions and on establishing new business areas.

Culture is holding back operator adoption of open source

If open source is the holy grail for telcos, more than a few of them are getting lost trying to uncover the treasure; but why?

At a panel session featuring STC and Vodafone at Light Reading’s Software Defined Operations and the Autonomous Network event, operational culture was suggested as a significant roadblock, along with the threat to ROI from shortened lifecycles and disappearing support.

Starting with the culture side, this is a simple one to explain. The current workforce has not been configured to work with an open source mentality. This is a different way of working, a notable shift away from the status quo of proprietary technologies. Sometimes the process of incorporating open source is an arduous task, where it can be difficult to see the benefits.

When a vendor puts a working product in front of you, as well as a framework for long-term support, it can be tempting to remain in the vendor’s clutches and the dreaded lock-in situation. You can almost guarantee the code has been hardened and is scalable, which makes the concept of change seem unappealing. Human nature will largely maintain the status quo, even if the alternative might be healthier in the long run.

The second scary aspect of open source is the idea of ROI. The sheer breadth and depth of open source groups can be overwhelming at times, though open source is only as strong as the on-going support. If code is written, supported for a couple of months and then discarded in favour of something a bit more trendy, telcos will be fearful of investment due to the ROI being difficult to realise.

Open source is a trend being embraced on the surface, but we suspect there are still some stubborn employees who are more charmed by the status quo than by the advantages of change.

Broadband Forum unveils first Open Broadband release

The Broadband Forum has announced the release of code and supporting documentation for Broadband Access Abstraction (OB-BAA), the first code release for the Open Broadband project.

The code and documentation offer an alternative approach for telcos looking to upgrade networks ahead of the anticipated stress caused by the introduction of more accessible and faster connectivity. The aim is to facilitate coexistence, seamless migration and the agility to adapt to an increasingly wide variety of software defined access models.

“OB-BAA enables operators to optimize their decision-making process for introducing new infrastructure based on user demand and acceptance instead of being forced into a total replacement strategy,” said Robin Mersh, Broadband Forum CEO. “By reducing planning, risks and execution time, investment in new systems and services can be incremental.”

The Forum’s Open Broadband initiative has been designed to provide an open community for the integration and testing of new open source, standards-based and vendor provided implementations. The group already counts support from the likes of BT, China Telecom, CenturyLink and Telecom Italia, as well as companies such as Broadcom and Nokia on the vendor side.

OB-BAA specifies northbound interfaces, core components and southbound interfaces for functions associated with access devices that have been virtualized. The standardized approach, specifically designed for SDN automation, is what Broadband Forum claims differentiates the launch from other approaches with the benefit of removing the hardware/software decoupling process.

“The first release of OB-BAA marks a major milestone for the industry,” said Tim Carey, Lead Technology Strategist at Nokia. “It delivers an open reference implementation based on standards-compliant interfaces that operators and vendors worldwide can use to develop and deploy interoperable cloud-based access networks more easily and quickly.”

While this is the group’s first release, the Broadband Forum has promised several more, consisting of code and supporting documentation.

Airship launched by AT&T and SK Telecom

AT&T and SK Telecom have jointly announced the launch of a new open infrastructure project called Airship, intended to simplify the process of deploying cloud infrastructure.

Airship uses the OpenStack-Helm project as a foundation, building a collection of open source tools to allow operators, IT service providers or enterprise organizations to more easily deploy and manage OpenStack, focusing specifically on container technologies like Kubernetes and Helm. The mission statement is a simple one: make it easier and more predictable to build and manage cloud infrastructure.

“Airship gives cloud operators a capability to manage sites at every stage from creation through all the updates, including baremetal installation, OpenStack creation, configuration changes and OpenStack upgrades,” SK Telecom said in a statement. “It does all this through a unified, declarative, fully containerized, and cloud-native platform.”

The initial focus of this project is the implementation of a declarative platform to introduce OpenStack on Kubernetes (OOK) and the lifecycle management of the resulting cloud, with the scale, speed, resiliency, flexibility, and operational predictability demanded of network clouds. The idea of a declarative platform is that every aspect of the cloud is defined in standardized documents: the user manages the documents themselves, submits them, and lets the platform take care of the rest.

The Airship initiative will initially consist of eight sub-projects:

  • Armada – An orchestrator for deploying and upgrading a collection of Helm charts
  • Berth – A mechanism for managing VMs on top of Kubernetes via Helm
  • Deckhand – A configuration management service with features to support managing large cluster configurations
  • Diving Bell – A lightweight solution for bare metal configuration management
  • Drydock – A declarative host provisioning system built initially to leverage MaaS for baremetal host deployment
  • Pegleg – A tool to organize configuration of multiple Airship deployments
  • Promenade – A deployment system for resilient, self-hosted Kubernetes
  • Shipyard – A cluster lifecycle orchestrator for Airship
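To make the declarative idea above concrete, here is an illustrative sketch of the kind of document Armada consumes to deploy a Helm chart. The document layout follows Armada’s published `armada/Chart/v1` format, but this particular chart, its name, and the source location are hypothetical examples rather than taken from AT&T’s or SK Telecom’s actual deployments:

```yaml
---
# Hypothetical Armada chart document: it declares *what* should run
# (which Helm chart, which release, which namespace), and the platform
# is responsible for converging the cluster to that state.
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: example-keystone        # illustrative document name
data:
  chart_name: keystone          # Helm chart to deploy
  release: keystone             # Helm release name
  namespace: openstack          # target Kubernetes namespace
  source:
    type: git                   # fetch the chart from a git repository
    location: https://opendev.org/openstack/openstack-helm  # illustrative URL
    subpath: keystone
  dependencies: []
```

Operators submit a manifest of such documents and Armada reconciles the cluster against it; configuration changes and OpenStack upgrades are then made by editing and resubmitting documents rather than by running imperative commands, which is what gives the platform its operational predictability.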

“Airship is going to allow AT&T and other operators to deliver cloud infrastructure predictably that is 100% declarative, where Day Zero is managed the same as future updates via a single unified workflow, and where absolutely everything is a container from the bare metal up,” said Ryan van Wyk, Assistant VP of Cloud Platform Development at AT&T Labs.

While the emergence of another open source project is nothing too revolutionary, AT&T has stated it will act as the foundation of its network cloud that will power the 5G core supporting the 2018 launch of 5G service in 12 cities. Airship will also be used by Akraino Edge Stack, another project which intends to create an open source software stack supporting high-availability cloud services optimized for edge computing systems and applications. Two early use-cases certainly add an element of credibility.