Ericsson gets multinational 5G core network win with Telenor

Swedish kit vendor Ericsson has won a fresh chunk of 5G work from Norwegian operator group Telenor.

The win is all about 5G core network transformation, featuring a generous helping of NFV goodness. Since the point of issuing press releases about deal wins is to generate a sense of commercial momentum, this is a pretty handy one for Ericsson, as it implies an endorsement of its NFV offerings.

Since this is still a relatively new, unfamiliar field, any show of commercial faith at this stage counts double. Furthermore, the deal encompasses Telenor’s core networks in Sweden, Denmark and Norway, so in terms of publicity Ericsson gets three for the price of one.

“Ericsson’s portfolio of VNF enables Telenor to become more agile while reducing costs through improved operations,” said Morten Karlsen Sørby, Telenor’s EVP and Acting Cluster Head for Scandinavia. “This transformational deal is an important step towards future-proofing our core network as we look towards 5G. It provides us with state-of-the-art virtual core applications that serve mobile and fixed access and extend the lifecycle of our legacy network.”

“Ericsson is a long-term partner to Telenor in Scandinavia, supporting the company across multiple engagements in fixed and mobile networks in the region,” said Arun Bansal, Ericsson’s Head of Europe & Latin America. “This deal strengthens that partnership by evolving Telenor’s existing network to the cloud, ensuring continued exceptional services to their customers. As we move together towards 5G it also opens up new opportunities in the IoT space.”

In other Ericsson news, the company has quietly launched a new division called Edge Gravity. As Light Reading reports, it seems to be a semi-autonomous unit within Ericsson that focuses on edge computing and “operates a global edge cloud network that links together a core network of data centers with the last-mile networks of more than 80 partners that include cable operators, telcos and mobile service providers.”

Italians clearly aren’t that suspicious of Huawei

Despite governments around the world turning against Chinese vendors, Telecom Italia has agreed a new partnership with Huawei based on Software Defined Wide Area Network (SD-WAN) technology.

As part of a strategy aimed at evolving TIM’s network solutions for business customers, Huawei’s SD-WAN technology will be incorporated to create a new TIM service model, which will allow customer companies to manage their networks through a single console.

“Today, more than ever, companies need networks that can adapt to different business needs over time, in particular to enable Cloud and VoIP services,” said Luigi Zabatta, Head of Fixed Offer for TIM Chief Business & Top Clients Office. “Thanks to the most advanced technologies available, these networks can be managed both jointly and by customers themselves through simple tools.

“The partnership with Huawei allows us to expand our value proposition for companies and to enrich our offer through the adoption of a technological model that is increasingly and rapidly emerging in the ICT industry.”

The partnership is a major win for Huawei considering the pressure the firm must be feeling, with suspicions being piqued around the world. Just as more countries are clamping down on Huawei’s ability to do business, TIM has offered a windfall.

Aside from the ongoing Chinese witch hunt over in the US, the Australians have banned Huawei from participating in the 5G bonanza and Korean telcos have left the vendor off preferred supplier lists. Just to add more misery, the UK is seemingly joining in on the trend.

In recent weeks, a letter was sent out by the Department for Digital, Culture, Media and Sport and the National Cyber Security Centre, warning telcos of potential impacts on the 5G supply chain from the Future Telecom Infrastructure Review. China was not mentioned specifically, and neither was Huawei, but sceptical individuals might suggest China would be most squeezed by a security and resilience review.

The rest of the world might be tip-toeing around the big question of China, but this partnership suggests TIM doesn’t have the same reservations.

Nokia gets a bunch more cash from Chinese operators

Nokia is so keen for everyone to know how well it’s doing in China that it makes an announcement every time it wins some business.

Earlier this year we heard all about a ‘framework agreement’ signed with China Mobile that was worth around €1 billion. Today Nokia has announced some more ‘frame agreements’, which are presumably the same thing and refer to a kind of pre-contract that amounts to a formal commitment to do a bunch of business in future.

This time we’re talking €2 billion, but split between all three Chinese MNOs – China Mobile, China Telecom and China Unicom. Presumably the China Mobile bit is fresh cash, not just a recycling of the previous bil. The agreements cover delivery for the next year or so of radio, fixed access, IP routing and optical transport equipment, as well as some SDN and NFV goodness. Nokia is excited by all this transitioning and leveraging.

“We are excited to continue our close collaboration with these important customers in China, to drive new levels of network performance as they transition toward 5G,” said Mike Wang, president of Nokia Shanghai Bell. “Leveraging the breadth of our end-to-end network and services capabilities, we will work closely with China Mobile, China Telecom and China Unicom to deploy technologies that meet their specific business needs.”

It wouldn’t be surprising to see some kind of equivalent announcement from Ericsson before long, as the two Nordic kit vendors clearly like to compete over this sort of thing. Not long after its first China Mobile announcement, Nokia said it was getting $3.5 billion from T-Mobile US to help out with 5G. Within a few weeks Ericsson had countered with an almost identical announcement of its own.

What does cloud-native really mean for operators?

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this piece Dominik Pacewicz, Chief Product Manager for BSS at Comarch, examines the term ‘cloud-native’ and asks what it signifies.

Cloud-native services are disrupting many industries. The telecoms industry, however, has long been outstripped by other sectors in the adoption of new technology. At the same time, service providers see a great opportunity to catapult themselves into the digital age through a spirited combination of cloud-nativeness, 5G networks and virtualization.

The term “cloud-native” is two-faceted: it entails both the technology used and the more strategic design aspect, signifying the direction many enterprises want to take with their applications. This strategy requires a broader look at the meaning of cloud-nativeness, going beyond the usual cloud-native “triad” of microservices, containers and PaaS (Platform as a Service) to include 5G and network virtualization.

Focus on microservices for consistent quality

Microservices are a set of autonomous, loosely coupled services. The approach is often contrasted with rigid, siloed architectures: each microservice is self-contained, with its own data model, repository and functions, which can be accessed only through its own API. Microservices essentially break down applications into their core functions. In the case of a hypothetical cloud-based streaming platform, these microservices could fulfil separate functions such as search, customer ratings, recommendations and the product catalogue.

The practice of using microservices comes from the realization that today’s users expect a flexible yet consistent experience across all devices, which creates high demand for modular, scalable cloud-based architecture.
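As a minimal sketch of the idea – using the hypothetical streaming platform above, with Python and Flask standing in for whatever stack a real team might choose – the recommendations function could run as its own small service, reachable only through its own API:

```python
# Hypothetical sketch: the 'recommendations' microservice from the
# streaming example, self-contained with its own data and its own API.
from flask import Flask, jsonify

app = Flask(__name__)

# In production this would live in the service's own repository;
# an in-memory dict stands in for it here.
RECOMMENDATIONS = {
    "user-42": ["Stranger Signals", "The 5G Chronicles"],
}

@app.route("/recommendations/<user_id>")
def recommendations(user_id):
    # Search, ratings and catalogue services never touch this data
    # directly; they call this endpoint instead.
    return jsonify(RECOMMENDATIONS.get(user_id, []))

if __name__ == "__main__":
    app.run(port=5001)
```

Because each function is isolated like this, a failure or redeployment of one service leaves all the others untouched.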

Use containers for service peaks and troughs

Containers are the frameworks used to run individual microservices. They can hold different types of software code, allowing it to run simultaneously across different runtime environments such as production, testing and integration. Containers make microservice-based applications portable, since they can be created or deleted dynamically. Performance can be scaled up or down with precision to treat bottlenecks – for instance, ahead of Black Friday a CSP can predict the increased demand on its online and offline sales, scale up that domain accordingly, and leave all the others untouched.

Containers are an essential part of cloud-native architecture because the same container, managed with exactly the same open source tools, can be deployed on any cloud, without impacting the operator’s virtual servers or computing systems.
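To make that concrete, here is a rough sketch of the scaling idea using the Docker SDK for Python; the image and naming scheme are invented, and a real CSP would drive this from an orchestrator rather than a one-off script:

```python
# Illustrative only: match the number of running containers for one
# service domain to a demand forecast, leaving other domains alone.
import docker

client = docker.from_env()

def scale_service(image, prefix, replicas):
    """Create or remove containers so the running count for this domain
    matches the forecast (e.g. extra sales capacity on Black Friday)."""
    running = [c for c in client.containers.list()
               if c.name.startswith(prefix)]
    for i in range(len(running), replicas):      # scale up
        client.containers.run(image, detach=True, name=f"{prefix}-{i}")
    for c in running[replicas:]:                 # scale down
        c.stop()
        c.remove()

# Scale only the (hypothetical) sales domain; all others are untouched.
scale_service("example/sales-frontend", "sales", replicas=8)
```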

Utilize PaaS for different capabilities

PaaS provides the foundation on which software is developed and deployed – somewhat like an operating system for a server or an entire network. All of this happens online: PaaS provides an abstraction layer for networking, storage and compute that lets the network infrastructure grow and scale. PaaS creates an environment in which the software, the operating system and the underlying hardware and network infrastructure are all taken care of, so the user only has to focus on application development and deployment.

Using PaaS enables the harmonization of all elements of the cloud environment by integrating various cloud services. This in turn virtualizes the process of web application development, while developers retain access to the same tools and standards.

5G is the cloud on steroids

The traditional “triad” of cloud-nativeness is not enough for a perfect, uninterrupted cloud application experience. There’s one asset missing – the 5G network. One reason 5G matters for cloud-native environments, particularly mobile cloud app development, is that striking the right balance between efficiency and the number of functionalities is a tough nut to crack, due to the high latency and unreliable connectivity of some mobile devices.

Apart from LAN-like speeds for mobile devices, 5G promises lower battery consumption, broader bandwidth, greater capacity (1,000 times that of 4G) and a substantial reduction in latency (as much as 50-fold) – latency being the main limiting factor when working with client-server architectures. What could follow is improved wireless range, increased capacity per cell tower and greater consistency.

The ‘cloud experience’ for mobile devices will be completely reshaped by the adoption of 5G, and mobile cloud applications will rival – or even surpass – the versions that rely on corporate LAN connectivity to the desktop, in terms of the number of functionalities offered.

Bridging the gap with network virtualization

A key innovative element of NFV is the concept of VNF (virtual network function) forwarding graphs, which enable the creation of service topologies that are independent of the physical topology. Network virtualization allows operators to allocate resources according to traffic demand: operators can exert control over the network while managing “slices” of it, without having to spend on infrastructure upkeep.
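As a purely illustrative sketch – no particular vendor’s API – a forwarding graph can be modelled as an ordered chain of VNFs that defines a service topology independently of the physical network underneath:

```python
# Toy model of a VNF forwarding graph: one network 'slice' is an
# ordered chain of virtual functions that traffic visits in turn.
from dataclasses import dataclass, field

@dataclass
class VNF:
    name: str
    cpu_cores: int  # resources allocated per slice, according to demand

@dataclass
class ForwardingGraph:
    slice_name: str
    chain: list[VNF] = field(default_factory=list)

    def route(self, packet):
        # Apply each function in order, independent of physical topology.
        for vnf in self.chain:
            packet = f"{vnf.name}({packet})"
        return packet

# Hypothetical enterprise slice: firewall -> DPI -> vEPC gateway.
enterprise = ForwardingGraph("enterprise", [
    VNF("firewall", cpu_cores=2),
    VNF("dpi", cpu_cores=4),
    VNF("vepc-gw", cpu_cores=8),
])
print(enterprise.route("payload"))  # vepc-gw(dpi(firewall(payload)))
```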

This flexibility is why NFV is leading the evolution of the 5G network ecosystem. Virtualizing the Evolved Packet Core (EPC) has emerged as a leading use case and one of the most tangible examples of the advantages of virtualization. The vEPC abstracts and decomposes the EPC functions, allowing them to run in various combinations as software instances on commercial off-the-shelf (COTS) hardware. This approach allows CSPs to design networks in new ways that drastically reduce costs and simplify operations. Perfect conditions for 5G.

On the access side, the Cloud Radio Access Network (C-RAN) is a highly complementary technology to vEPC. C-RAN deployment, which virtualizes many RAN functions on standard CPUs, is seen as an important enabler for reducing the total cost of ownership (TCO) of the RAN. Investment and operational costs are expected to fall quickly thanks to maturing cloud technologies and accumulating deployment experience. The C-RAN approach also facilitates faster radio rollout, drastically reducing the time needed for conventional deployments.

In the race to 5G, telcos are steadily introducing function virtualization to gain software control over their networks. C-RAN and vEPC both help to create bespoke data pathways that meet the highly specific network requirements of applications – staying true to the 5G vision.

The power of now

So, what does ‘cloud-native’ mean for operators? All the interdependencies between the cloud and its enabling technologies mean the true cloud-native experience must involve not just the traditional “triad” of microservices, containers and PaaS. Network virtualization and 5G are key elements in the search for efficient, uninterrupted and feature-rich cloud-based services and applications, and will make previously impossible cloud-native use cases feasible.

Thanks to the operators who experimented with virtualization and conducted early 5G trials, telcos will be the first to have all the necessary technology in place to succeed in the cloud. Will operators take full advantage of this head start – or will they once again be beaten to the finish line and fail to capitalize on the technology they championed?


Dominik Pacewicz is the head of BSS product management at Comarch. He has been with Comarch for over six years and works with a number of mobile operators, helping them to simplify and automate their networks.

Nokia launches some actual applications for SDN

All the hype surrounding software-defined networking is finally starting to yield some tangible results in the form of three apps from Nokia.

Deciding to kill two buzzwords with one stone, Nokia is claiming its new WaveSuite open applications will jump-start optical network digital transformation. It consists of three apps: Service Enablement, Node Automation and Network Insight. The point of these apps is apparently to offer businesses a new degree of access to networks that is expected to yield novel commercial opportunities.

To help us get our heads around this new piece of networking arcana we spoke to Kyle Hollasch, Director of Marketing for Optical Networking at Nokia. He was most keen to focus on the service enablement app, which he said is “the first software that tackles the issue of resell hierarchy.”

Specifically we’re talking about the reselling of fixed line capacity. This app is designed to massively speed up the capacity reselling process, with the aim of turning it into a billable service. The slide below visualises the concept, in which we have the actual network owner at the base and then several levels of capacity reselling, allowing greater degrees of specialisation and use-case specific solutions.

Nokia WaveSuite slide 1
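To make the hierarchy concrete, here is a purely illustrative sketch – not the WaveSuite API itself – of how billable capacity might be carved up down the resale chain:

```python
# Invented example of a capacity resale hierarchy: the network owner
# sits at the base, and each reseller carves billable capacity out of
# its parent's allocation.
from dataclasses import dataclass, field

@dataclass
class CapacityHolder:
    name: str
    capacity_gbps: float
    children: list["CapacityHolder"] = field(default_factory=list)

    def available(self):
        return self.capacity_gbps - sum(c.capacity_gbps for c in self.children)

    def resell(self, name, capacity_gbps):
        assert capacity_gbps <= self.available(), "over-subscribed"
        child = CapacityHolder(name, capacity_gbps)
        self.children.append(child)
        return child

owner = CapacityHolder("network-owner", capacity_gbps=400)
wholesaler = owner.resell("regional-wholesaler", 100)
wholesaler.resell("enterprise-isp", 25)  # a use-case-specific reseller
print(owner.available())                 # 300 Gbps still sellable
```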

The node automation app allows network nodes to be controlled via an app on a smartphone, thanks to the magic of SDN. In fact this would appear to be the epitome of SDN, as it’s only made possible by that technology. The slide below shows how it is, at least in theory, possible to interact with a network element via a smartphone, which also opens up the ability to use other smartphone tools such as GPS and the camera.

Nokia WaveSuite slide 2

The network insight app seems to do what it says on the tin, so there doesn’t seem to be the need for further explanation at this stage. “These innovations are the result of years of working closely with our customers to address all aspects of optical networking with open applications enhancing not just operations, but opening up new services and business models,” said Sam Bucci, Head of Optical Networks for Nokia.

As a milestone in the process of virtualizing networks and all the great stuff that’s supposed to come with that, the launch of actual SDN apps seems significant. Whether or not the market agrees and makes tangible business use of these is another matter, however, and only time will tell if good PowerPoint translates into business reality.

IBM aims to boost its strategic imperatives with $34 billion acquisition of Red Hat

IBM has announced by far the largest acquisition in its history with the purchase of cloud and open source software vendor Red Hat.

$34 billion is several times more than IBM has previously spent on an acquisition, which indicates just how important it thinks this is to its future prosperity. Red Hat has expanded from a developer of Linux-based business software to being involved in most places you might find B2B open source software, including the cloud and telecoms.

While most venerable tech companies seem to be in a constant state of so-called transformation, this has especially been the case with IBM as it seeks to replace its declining legacy businesses with shiny new ones. As a consequence it has four clear strategic imperatives in the form of cloud, security, analytics and mobile, revenue from which recently overtook legacy stuff for the first time.

But IBM has apparently decided this organic transformation isn’t happening quickly enough and has decided a nice, juicy bit of M&A is required to hasten the process. Most reports are focusing on how Red Hat will contribute to IBM’s hybrid cloud efforts, and thus give it a boost in competing with the likes of Amazon, but Red Hat’s activities in the telco cloud specifically shouldn’t be underplayed.

“The acquisition of Red Hat is a game-changer,” hyperbolised IBM Dictator (Chairman, President and CEO) Ginni Rometty. “It changes everything about the cloud market. IBM will become the world’s number one hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses.

“Most companies today are only 20 percent along their cloud journey, renting compute power to cut costs,” she said. “The next 80 percent is about unlocking real business value and driving growth. This is the next chapter of the cloud. It requires shifting business applications to hybrid cloud, extracting more data and optimizing every part of the business, from supply chains to sales.”

IBM Red Hat Rometty Whitehurst cropped

“Open source is the default choice for modern IT solutions, and I’m incredibly proud of the role Red Hat has played in making that a reality in the enterprise,” said Jim Whitehurst, President and CEO, Red Hat (pictured, with Rometty). “Joining forces with IBM will provide us with a greater level of scale, resources and capabilities to accelerate the impact of open source as the basis for digital transformation and bring Red Hat to an even wider audience – all while preserving our unique culture and unwavering commitment to open source innovation.”

Cloud and open source have been major themes on the tech M&A scene recently. Microsoft continued its transition from closed-software box-shifter with the recent $7.5 billion acquisition of code-sharing platform GitHub. Meanwhile open source big data vendors Cloudera and Hortonworks have decided to merge, and earlier this year Salesforce dropped $6.5 billion on MuleSoft to power its Integration Cloud.

In M&A, the party line from the company being acquired is usually that the deal will enable it to take the next step in its evolution thanks to the greater resources of its new parent, and this was no exception. “Powered by IBM, we can dramatically scale and accelerate what we are doing today,” said Whitehurst in his email to staff announcing the deal. “Imagine Red Hat with greater resources to grow into the opportunity ahead of us. Imagine Red Hat with the ability to invest even more and faster to accelerate open source innovation in emerging areas.” And so on.

He went on to explain that, while he will report directly to Rometty, Red Hat will continue to operate as a ‘distinct unit’, whatever that means. Usually this sort of talk is designed to sell the concept that it will remain the same company it was before the acquisition, but with loads more cash to play with. Let’s see.

IBM would be mad to mess around with Red Hat too much as it seems to be doing just fine, having reported 14% revenue growth in its latest quarterlies. Then again, you don’t pay a 60% premium for a company just to accrue its revenue, and how IBM integrates Red Hat into the rest of its offerings will determine the success of this bold move. There are, sadly, no signs the company plans to change its name to Big Blue Hat, which already looks like a worrying missed opportunity.

BBWF 2018: Consumers don’t care about tech, just connectivity – BT

Today’s consumer is demanding but uninterested. They don’t care about mobile or broadband or wifi, just top-line connectivity. To meet these demands, BT has pointed to network convergence.

Speaking at Broadband World Forum, BT CTIO Howard Watson outlined the bigger picture. It’s all about convergence: the dividing lines between wireless and fixed, or hardware and software, are blurred, and connectivity is viewed as a single concept – bringing together network design, technology convergence and customer insight to create a single software-oriented network for device-neutral connectivity.

“For the consumer, it’s not about their wifi, or their mobile connection, or their fixed broadband, or even their landline,” said Watson. “It’s about connectivity as a whole. And I’m pleased to say we’re already making strong progress here.”

Of course, it wouldn’t be a telco conference without mentioning 5G, and this is a critical component of the BT story. Trials have already begun in East London, and over the next couple of days 10 additional nodes will be added to expand the test. Plans are already underway to launch a converged hardware portfolio, introduce IP voice for customers and create a seamless wifi experience – all built on a single core network.

But what does this mean for the consumer? Simplicity in the simplest of terms.

The overall objective is to create a seamless connectivity experience that caters to the consumer’s indifference to anything but being connected. Soon enough, devices will be able to automatically detect and select the best connectivity option – wifi or cellular, for example – meaning consumers will not have to check anything on their devices. Gone will be the days when you have to worry about your device clinging onto a weak wifi signal or being interrupted by a network reaching out to your device, according to Watson. Signing in will become a distant memory as the consumer seamlessly shifts from wifi to mobile.
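To illustrate the kind of decision the device would make on the user’s behalf – with invented thresholds, not BT’s actual logic – a toy bearer-selection function might look like this:

```python
# Toy sketch of 'best connection wins': thresholds are invented
# for illustration and bear no relation to BT's implementation.
def pick_bearer(wifi_rssi_dbm, cell_rsrp_dbm):
    # Score each signal against a rough 'usable' floor and pick the
    # stronger bearer, so the device never clings to weak wifi.
    wifi_score = wifi_rssi_dbm - (-70)   # wifi broadly usable above ~-70 dBm
    cell_score = cell_rsrp_dbm - (-100)  # LTE broadly usable above ~-100 dBm
    return "wifi" if wifi_score >= cell_score else "cellular"

print(pick_bearer(wifi_rssi_dbm=-55, cell_rsrp_dbm=-95))  # wifi
print(pick_bearer(wifi_rssi_dbm=-85, cell_rsrp_dbm=-95))  # cellular
```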

This is of course a grand idea, and there is still a considerable amount of work to be done. Public wifi is pretty woeful as a general rule, and mobile connectivity is patchy in some of the busiest and remotest regions in the UK, but in fairness to BT, it does look like a sensible and well thought out plan.

With telcos increasingly being treated as utilities, these organizations need to start adding value to the lives of consumers. Connectivity alone is not enough anymore – it has become a basic expectation rather than a luxury in today’s digitally-defined society – so providing the seamless experience might just be one way BT can prove its value. Fortunately, with its broadband footprint, EE’s mobile network and 5,000 public wifi hotspots throughout the UK, BT is in a strong position to make the converged network dream a reality.

US Senators hit out at India’s pro-privacy and localisation laws

Two US Senators have signed a letter addressed to India’s Prime Minister Narendra Modi suggesting new rules to tighten up data practices in the country could lead to a weakened trade relationship with the US.

The US Government has already shown the damage that can be done when it starts throwing around economic sanctions and hurdles, almost sending ZTE to join the dodo on the extinction list, and it appears to be using the same tactics here. However, instead of punishing an organization that broke trade laws, it is attempting to bully a country away from a pro-privacy stance and into its own line of thinking.

“We see this (data localization) as a fundamental issue to the further development of digital trade and one that is crucial to our economic partnership,” the letter signed by Senators John Cornyn and Mark Warner states. The Senators serve as co-chairs of the Senate’s India caucus.

The letter, seen by Reuters, relates to new data protection, privacy and localisation rules which are set to come into play this week (October 15). The rules have been in the making for some time, and while there are some very suspect clauses, this is an attempt to tame the wild-west internet in the country, applying regulations which should be deemed more acceptable for the digital economy.

Back in July, the Indian Government unveiled a report which detailed its new approach to data regulations in the country. Included in the rules are restrictions on how data can be collected and utilised, setting out a similar stance to GDPR in Europe, as well as new measures such as the right to be forgotten, explicit opt-in consent for certain categories of data (that deemed sensitive) and data localisation. It is a much more stringent approach to the data economy, taking India closer to the European stance on privacy than to US views.

Aside from the data protection and privacy benefits of localisation, not to mention greater influence for the Indian Government, such a strategy also stimulates the economy. Local jobs will have to be created and new data centres built to meet the rising demands of the increasingly digital Indian economy and society. These are clear benefits for the country, though the threat of an impact on US trade will certainly be a worry for India.

Over the course of 2017, India exported $34.83 billion worth of goods and services to the US. That figure accounted for 16% of the country’s total exports, making the US its largest trading partner. The US Government certainly has leverage to coerce India into its own way of thinking.

The letter from the Senators also happens to coincide with some pretty heavy lobbying from the likes of Visa, Mastercard and American Express. All would certainly find life simpler if there was no such thing as localisation, though it seems lobbying Senators to fight the cause has been more effective than efforts to persuade Indian officials to head in a different direction. The new rules seem to have been influenced by Europe’s GDPR, though the US, both the Government and companies, has a different approach to data than the pro-privacy Europeans.

With India’s economy fast evolving from analogue to digital, there will certainly be profits to be made. Many US companies, most notably those in Silicon Valley, will be looking greedily at the country, though such rules would make life more difficult. Not impossible, but not as simple. Perhaps the economic weight of the US Government can bully India into believing the ‘American Dream’.

Virtualization and the deployment and operation of 5G networks

Telecoms.com periodically invites expert third parties to share their views on the industry’s most pressing issues. In this piece John English, Director of Marketing, Service Providers Solutions at Netscout, offers a quick overview of the need for virtualization with 5G.

While 5G undoubtedly holds enormous potential, meeting its demands for increased speed, performance, scalability, and flexible service deployment is likely to result in untenable complexity and OpEx for service operators.

The only truly affordable and practical way to operate a 5G network is to virtualise and automate, change network design, and increasingly manage the network and services from the edge. The application of virtualisation technologies such as NFV and SDN to 5G networks is therefore essential if 5G is to be deployed at a reasonable cost.

Many service providers are already adopting NFV and SDN as a means of boosting efficiencies, launching services faster, and supporting a wider range of applications. Indeed, McKinsey has estimated that the newest technologies in NFV and SDN would let operators lower their capital expenditures by up to 40%, and their network operating expenses by a similar amount.

Are any deployments under way?

Yes. Major service providers in the US, Europe and South Korea, for example, are well advanced in their testing and initial network deployments. Verizon, AT&T and Korea Telecom are all utilising the 1Gbps capability of 5G to provide services such as fixed wireless broadband. In addition to enabling valuable enterprise applications, this supports their SD-WAN deployments, which – thanks to their ability to flexibly instantiate new services – are increasingly being seen by service providers as a way of monetising 5G, even at this early stage.

Is that the full extent of it?

Right now, both virtualisation and 5G are being deployed on a crawl, walk, and run basis. We’re still very much in the crawl phase, with most service providers tentatively deploying both technologies in contained parts of their networks and businesses so that they can understand how they work, learn how to manage them and understand how to deploy them most effectively.

With a myriad of new use cases and technologies, and fragmented standards around how VNFs are introduced and orchestrated in the network, 5G is filled with unknowns. The relative immaturity of both 5G and virtualisation is therefore currently serving as a barrier to full-scale adoption.

How can we overcome this barrier?

5G and virtualisation each rely on a series of other technologies for successful deployment. Both require tools that enable intelligent visibility into the network in order to generate smart data with clear, actionable insights that will feed automated systems, manage performance, and control automation. Power is nothing without control, after all, and accurate control is fundamental to the management of a virtualised network.

Fortunately, these tools exist in the form of virtualised software designed to gather data, analyse it and present it in a way that can be actioned by a service provider’s other systems. Critically, the days of using expensive hardware probes to reactively report on network performance are over. They may still be applicable in trial phases, while a new network is being established, but only software-based network assurance will be capable of scaling to handle the volume of data involved as a 5G deployment heads towards the running phase.
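As a rough illustration of that gather-analyse-act loop – with every name and threshold invented, rather than taken from any vendor’s product – consider:

```python
# Bare-bones sketch of software-based assurance: collect a metric,
# compare it against a learned baseline, and emit an actionable event
# for downstream automation. All values here are simulated.
import random
import statistics

def collect_latency_ms():
    # Stand-in for a virtualised probe reading from live traffic.
    return random.gauss(20, 4)

def assure(window=50, threshold_sigma=3):
    baseline = [collect_latency_ms() for _ in range(window)]
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    sample = collect_latency_ms()
    if abs(sample - mean) > threshold_sigma * sigma:
        # 'Smart data': an event another system can act on automatically,
        # e.g. scaling out a VNF before users notice degradation.
        return {"action": "scale_out", "metric": "latency_ms", "value": sample}
    return {"action": "none", "value": sample}

print(assure())
```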

What will happen when 5G is up and running?

The exact nature of the services of the future is still unclear. Despite all the talk about instantaneously downloading 4K video, or enabling connected cars and supporting a plethora of new devices and services with the Internet of Things, the killer apps for 5G are yet to emerge.

What is apparent, though, is that there are universal requirements that will remain the same regardless of what the future holds. The effective running of services and applications on 5G will rely on visibility into the network, and gaining insights that will enable proactive, as opposed to reactive, responses to any network issues.


John English is Director of Marketing, Service Providers Solutions at Netscout, focusing on NFV/SDN, IoT/DX, big data analytics and 5G. Before joining Netscout, John held product management and marketing positions at Empirix and Tekelec. He has over 25 years of telecom experience covering 4G/3G/2G mobile, cable and fixed-line technologies.