Ericsson and Intel have announced a new partnership aimed at aligning the Swedish vendor’s efforts in software-defined infrastructure with Intel’s Rack Scale Design.
The resulting hardware management platform will be designed for telcos targeting 5G, NFV, and distributed cloud. In theory, the pair aims to create a common managed hardware pool for all workloads that dynamically scales. It’s the scalable and affordable dream telcos have been promised for years.
The duo has said the new tie-up will allow telcos to take advantage of multi-vendor hardware options, Ericsson’s end-to-end software solutions, and Intel’s latest architectural solutions.
“We have a long history of successful collaboration with Intel,” said Lars Mårtensson, Head of Cloud & NFV Infrastructure for Digital Services at Ericsson. “This new collaboration will focus on software in addition to hardware and we see it as truly transformative for service providers’ ability to successfully deploy open cloud and NFV infrastructure, from centralized data-centres to the edge. Intel’s and Ericsson’s joint efforts significantly strengthen the competitiveness and roadmap of the Ericsson Software Defined Infrastructure offering.”
“5G will be transformative, accelerating today’s applications and triggering a wave of new usages and edge-based innovation,” said Sandra Rivera, SVP of the Network Platform Group at Intel. “Our infrastructure manageability collaboration with Ericsson will help communications service providers remove deployment barriers, reduce costs, and deliver new 5G and edge services with cloudlike speed on a flexible, programmable and intelligent network.”
As part of the tie-up, the Ericsson SDI Manager software and Intel RSD reference software will be converged, though the pair reiterated full backward compatibility would be maintained for existing customers. Any new solutions developed moving forward will be available on subsequent Ericsson hardware platforms, as well as on Intel server products sold through third parties and in other industry segments.
We can hear the groans already, but we’re going to do it anyway. Let’s have a look at what 6G could possibly contribute to the connected economy.
Such is our desire for progress that we haven’t even launched 5G, but the best and brightest around are already considering what 6G will bring to the world. It does kind of make sense, though: to avoid the dreaded stuttering of download speeds and the horrific appearance of buffering symbols, the industry has to look far beyond the horizon.
If you consider the uphill struggle it has been to get 5G to this point, and we haven’t even launched glorious ‘G’ properly, how long will it take before we get to 6G? Or perhaps a better question is how long before we actually need it?
“5G will not be able to handle the number of ‘things’ which are connected to the network in a couple of years’ time,” said Scott Petty, CTO of Vodafone UK. “We need to start thinking about 6G now and we have people who are participating in the standards groups already.”
This is perhaps the issue which we will face in the future: the sheer volume of ‘things’ which will be connected to the internet. As Petty points out, 5G is about being bigger, badder and leaner. Download speeds will be faster, reliability will be better, and latency will be almost non-existent, but the weight of ‘things’ will almost certainly have an impact. Today’s networks haven’t been built with this in mind.
Trying to find consensus on the growth of IoT is a difficult task, such is the variety of predictions. Everyone predicts the same thing, that the number of devices will grow in extraordinary fashion, but the figures vary by billions.
Ericsson’s latest Mobility Report estimates cellular IoT connections will reach 4.1 billion by 2024, of which 2.7 billion will be in North East Asia. This is a huge number and growth will only accelerate year-on-year. But here’s the thing: we’re basing these judgments on what we know today; the number of IoT devices will be more dependent on the new products, services and business models which will appear when the right people have the 5G tools to play around with. Who knows what the growth could actually be?
Another aspect to consider is the emergence of new devices. As it stands, current IoT devices deliver such a minor slice of total cellular traffic around the world that it’s not much of a consideration; however, with new use cases and products for areas such as traffic safety, automated vehicles, drones and industrial automation, the status quo will change. As IoT becomes more commonplace and complicated, data demands might well increase, adding to network strain.
Petty suggests this will be the massive game-changer for the communications industry over the next few years and will define the case for 6G. But who knows what the killer use case will be for 5G, or what needs will actually push the case for the next evolution of networks. That said, more efficient use of spectrum is almost certainly going to be one of the parameters. According to Petty, this will help with the tsunami of things, but there is a lot of new science which will have to be considered.
Then again, 6G might not be measured under the same requirements as today…
Sooner or later the industry will have to stop selling itself under the ‘bigger, badder, faster’ mantra, as speeds will become irrelevant. If you have a strong and stable 4G connection today, there isn’t much you can’t do. Few applications or videos that are available to the consumer require 5G to function properly, something which telco marketers will have to adapt to in the coming years as they try to convince customers to upgrade to 5G contracts.
4G, and arguably today’s vision of 5G, has always been about making the pipe bigger and faster, because those were the demands of the telcos trying to meet the demands of the consumer. 6G might be measured under different KPIs, for example energy efficiency.
According to Alan Carlton, Managing Director of InterDigital’s European business, the drive towards more speed and more data is mainly self-imposed. The next ‘G’ can be defined as what the industry wants it to be. The telcos would have to think of other ways to sell connectivity services to the consumer, but they will have to do that sooner or later.
The great thing about 5G is that we are barely scratching the surface of what it is capable of. “We’re not even at 5.0G yet,” said Carlton. “And this is part of the confusion.”
What 5G is nowadays is essentially LTE-A Pro. We’re talking about 256-QAM and Massive MIMO, but that is not really a different conversation. With Release 16 on the horizon and future standards groups working on topics such as virtualisation, mmWave and total cost of ownership, future phases of 5G will promise so much more.
The next step for Carlton is not necessarily making everything faster, or more reliable or lower latency, but the next ‘G’ could be all about ditching the wires. Fibre is an inflexible commodity, and while it might be fantastic, why do we need it? Why shouldn’t the next vision of connectivity be one where we don’t have any wires at all?
Carlton’s approach to the future of connectivity is somewhat different to the norm. This is an industry which is fascinated by the pipes themselves and delivering services faster, but these working groups and standards bodies are driving change for the benefit of the industry. It doesn’t necessarily have to be about making something faster, so you can charge more, just a change to the status quo which benefits the industry.
Coming back to the energy efficiency idea, this is certainly something which has been suggested elsewhere. IEEE has been running a series of conferences in California addressing this very issue, as delivering 1000X more data is naturally going to consume more energy to start with. It probably won’t be 1000X more expensive, but it is incredibly difficult to predict what future energy consumption needs will be. Small cells do not consume as much energy as traditional sites, but there will need to be a lot more of them to meet demand. There are a lot of different elements to consider here (for example environment or spectrum frequency), but again, this is a bit of an unknown.
Perhaps this is an area where governments will start to wade in? Especially in the European and North American markets which are more sensitive to environmental impacts (excluding the seemingly blind Trump).
Echoing Petty’s point from earlier, we don’t necessarily know the specifics of how the telco industry is going to be stressed and strained in six or seven years’ time. These pressures will form the catalyst for change, evolving from 5G to 6G, and it might well be a desire for more energy efficient solutions, or it might well be a world free of wires.
Moving across the North Sea, 6G has already captured the attention of those in the Nordics.
Back in April 2018, the Academy of Finland announced the launch of ‘6Genesis’, an eight-year research programme to drive the industry towards 6G. Here, the study groups will start to explore technologies and services which are impossible to deliver in today’s world, and much of this will revolve around artificial intelligence.
Just across the border in Sweden, these new technologies are capturing the attention of Ericsson. According to Magnus Frodigh, Head of Ericsson Research, areas like Quantum computing, artificial intelligence and edge computing are all making huge leaps forward, something which will only be increased with improved connectivity. These are the areas which will define the next generation, and what can be achieved in the long-run.
“One of the new things to think about is the combination of unlimited connectivity as a resource, combined with low latency, more powerful computing,” said Frodigh. “No-one really knows how this is going to play out, but this might help define the next generation of mobile.”
Of course, predicting 6G might be pretty simple. In a couple of years’ time, perhaps we will all be walking around with augmented reality glasses on while holographic pods replace our TVs. If such use cases exist, perhaps the old ‘bigger, badder, faster’ mantra of the telco industry will be called upon once again. One group which is counting on this is the EU-funded Terranova project, which is currently working on solutions to allow network connections in the terahertz range, providing speeds of up to 400 Gbps.
Another area to consider is the idea of edge computing and the pervasiveness of artificial intelligence. According to Carlton (InterDigital), AI will be everywhere in the future, with intelligence embedded in almost every device. This is the vision of the intelligent economy, but for AI to work as promised, latency will have to be so much lower than we can even consider delivering today. This is another demand of future connectivity, but without it the intelligent economy will be nothing more than a shade of what has been promised.
And of course, the more intelligence you put on or in devices, the greater the strain on the components. Eventually more processing power will be moved off the devices and into the cloud, building the case for distributed computing and self-learning algorithms hosted on the edge. It is another aspect which will have to be considered, and arguably 5G could satisfy some of these demands, but who knows how quickly and broadly this field will accelerate.
Artificial intelligence and the intelligent economy have the potential to become a catalyst for change, forcing us to completely rethink how networks are designed, built and upgraded. We don’t know for sure yet, but most would assume the AI demands of the next couple of years will strain the network in the same way video has stressed 4G.
Who knows what 6G has in store for us, but here’s to hoping 5G isn’t an over-hyped dud.
Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this piece Alla Goldner looks at ONAP and its contribution to virtualization and preparing the way for 5G for telcos.
5G is a technology revolution – paving the way for new revenue streams, partnerships and innovative business models. More than a single technology, 5G is about the integration of an entire ecosystem of technologies. Indeed, a recent Amdocs survey found that nearly 80% of European communications service providers (CSPs) expect the introduction of 5G to expand revenue opportunities with enterprise customers. It also found that 34% of operators plan to offer 5G services commercially to this sector by the end of 2019, a figure that will more than double to 84% by the end of 2020.
As with every revolution, extracting its full potential value will require a set of enablers or tools to connect the new technology with the telco network. For CSPs in particular, the need for new and enhanced network management systems is an established fact, with more than half of European operators saying they would need to enhance their service orchestration capabilities. But they want to do this in a flexible, agile and open manner, and not be burdened with the constraints and limitations of traditional tools and approaches.
ONAP: the de-facto automation platform
This is where ONAP (Open Network Automation Platform) enters the picture. Developed by a community of open source network evangelists from across the industry, it has become the de-facto automation platform for carrier grade service provider networks. Since its inception in February 2017, the community has expanded beyond the pure technical realm to include collaboration with other open source projects such as OPNFV, CNCF, and PNDA, as well as standards communities such as ETSI, MEF, 3GPP and TM Forum. We also anticipate collaboration with the Acumos Project to feed ONAP analytics with AI/ML data and parameters. Such collaboration is essential when it comes to delivering revolutionary use cases, such as 5G and edge automation, as its implementation requires alignment with evolving industry standards.
ONAP and 5G
CSPs consider 5G to be not just a radio and core network overhaul but a significant architecture and network transformation. And ONAP has a key role to play in this change. As an orchestration platform, ONAP enables the instantiation, lifecycle management and assurance of 5G network services. As part of the roadmap, ONAP will eventually have the ability to implement resource management and orchestration of 5G physical network functions (PNFs) and virtual network functions (VNFs). It will also have the ability to provide definition and implementation of closed-loop automation for live deployments.
The 5G blueprint is a multi-release effort, with Casablanca, ONAP’s latest release, introducing some key capabilities around PNF integration and network optimization. Given that the operators involved with ONAP represent more than 60% of mobile subscribers and the fact that they are directly able to influence the roadmap, this paves the way for ONAP, over time, to become a compelling management and orchestration platform for 5G use cases, including hybrid VNF/PNF support.
Another capability in high-demand is support for 5G network slicing, which is aggregated from access network (RAN), transport and 5G core network slice subnet services. These, in turn are composed of a combination of other services, virtual network functions (VNFs) and physical network functions (PNFs). To support this, ONAP is working on supporting the ability to model complex network services, as part of the upcoming Dublin release.
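The composition described above, an end-to-end slice aggregated from RAN, transport and core subnet services, each built from VNFs and PNFs, can be sketched as a simple data model. This is an illustrative sketch only: the class names and function names below are invented for the example and do not reflect ONAP’s actual information models.

```python
# Illustrative sketch of end-to-end 5G slice composition.
# All names here are invented for illustration; they are not
# ONAP's real modeling artefacts.

from dataclasses import dataclass, field

@dataclass
class SubnetSlice:
    domain: str                          # "ran", "transport" or "core"
    vnfs: list = field(default_factory=list)  # virtual network functions
    pnfs: list = field(default_factory=list)  # physical network functions

@dataclass
class E2ESlice:
    name: str
    subnets: list = field(default_factory=list)

    def functions(self):
        # Flatten every constituent network function across domains.
        return [f for s in self.subnets for f in s.vnfs + s.pnfs]

slice_ = E2ESlice("example-embb-slice", [
    SubnetSlice("ran", pnfs=["gnb-du"], vnfs=["gnb-cu"]),
    SubnetSlice("transport", vnfs=["sdn-controller"]),
    SubnetSlice("core", vnfs=["amf", "smf", "upf"]),
])
print(slice_.functions())
```

The point of the model is simply that the orchestrator must treat the slice as a composite of per-domain subnet services, mixing VNFs and PNFs, which is why complex service modeling is on the Dublin roadmap.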
To summarize the above, 5G and ONAP are two critical pieces of the same puzzle:
- ONAP is the de-facto standard for end-to-end network management systems, a crucial enabler of 5G
- ONAP enables support of existing and future networking use cases, and provides a comprehensive solution to enable network slicing as a key embedded capability of 5G
- By leveraging a distributed and virtualized architecture, ONAP is active in the development of network management enhancements and distributed analytics capabilities, which are required for edge automation – a 5G technology enabler
The importance of vendor involvement: Amdocs case study
Amdocs has been involved in ONAP since its genesis as ECOMP (Enhanced Control, Orchestration, Management and Policy), the orchestration and network management platform developed at AT&T. Today, Amdocs is one of the top vendors participating in ONAP developments, and has supported proven deployments with leading service providers.
Amdocs supports both platform enhancements and use case development activities including:
- SDC (Service Design and Creation)
- A&AI (Active and Available Inventory)
- Logging and OOM (ONAP Operations Manager) projects
- Modeling and orchestration of complex 5G services, such as network slicing
Amdocs’ and other vendors’ participation in ONAP enables the ecosystem to benefit from a best-in-class NFV orchestration platform supporting the full lifecycle of 5G services in an open, multi-vendor environment, from service ideation and modeling, through instantiation, commissioning, modification, automatic closed-loop operations and analytics, to decommissioning.
The result is a win-win for CSPs, Amdocs, other vendors, as well as the ONAP community as a whole.
For CSPs, the benefit of this collaboration is comprehensive monetization capabilities that enable them to capture every 5G revenue opportunity. The benefit for vendors such as Amdocs is furthering their knowledge of best practices, which then flows back to the ONAP community.
About the author: Since ONAP’s inception, Alla Goldner has been a member of the ONAP Technical Steering Committee (TSC) and Use Case subcommittee chair. She also leads all ONAP activities at Amdocs.
Vodafone Business and IBM have signed off on a new joint venture which will aim to develop systems to help data and applications flow freely around an organization.
The joint venture, which will be operational in the first half of 2019, will aim to bring together the expertise of both parties to solve one of the industry’s biggest challenges: multi-cloud interoperability and the removal of organizational silos. On one side of the coin you have IBM’s cloud know-how, while Vodafone brings the IoT, 5G and edge computing smarts. A match made in digital transformation heaven.
“IBM has built industry-leading hybrid cloud, AI and security capabilities underpinned by deep industry expertise,” said IBM CEO Ginni Rometty. “Together, IBM and Vodafone will use the power of the hybrid cloud to securely integrate critical business applications, driving business innovation – from agriculture to next-generation retail.”
“Vodafone has successfully established its cloud business to help our customers succeed in a digital world,” said Vodafone CEO Nick Read. “This strategic venture with IBM allows us to focus on our strengths in fixed and mobile technologies, whilst leveraging IBM’s expertise in multi-cloud, AI and services. Through this new venture we’ll accelerate our growth and deepen engagement with our customers while driving radical simplification and efficiency in our business.”
The issue which many organizations are facing today, according to Vodafone, is the complexity of the digital business model. Some 70% of organizations are operating in as many as 15 different cloud environments, leaning on the individual USPs of each, but marrying these environments is a complex, though not new, issue.
Back in September, we had the chance to speak to Sachin Sony of Equinix about the emerging Data Transfer Project, an initiative to create interoperability and commonalities between the different cloud environments. The project is currently working to build a common framework with open-source code that can connect any two online service providers, enabling seamless, direct, user-initiated portability of data between the two platforms. This seems to be the same idea which the new IBM/Vodafone partnership is looking to tackle.
With this new joint venture it’ll be interesting to see whether the team can build a proposition which is any good. Vodafone has promised the new business will operate under one roof with a ‘start-up’ mentality, whatever that means once you take away the PR stench. Hopefully the office will be far enough away from each of the parent companies’ premises to ensure the neutral ground can foster genuine innovation.
This is a partnership which has potential. The pair have identified a genuine issue in the industry and are not attempting to solve it alone. Many people will bemoan the number of partnerships in the segment which seem to be nothing more than a feeble attempt to score PR points, but this is an example where expertise is being married to split the spoils.
Huawei has unveiled a new ARM-based CPU called Kunpeng 920, designed to capitalise on the growing euphoria building around big data, artificial intelligence and edge-computing.
The CPU was independently designed by Huawei based on an ARMv8 architecture licence, with the team claiming it improves processor performance by optimizing branch prediction algorithms, increasing the number of OP units, and improving the memory subsystem architecture. Another bold claim is that the CPU scores over 930 in the SPECint benchmark test, 25% higher than the industry benchmark.
“Huawei has continuously innovated in the computing domain in order to create customer value,” said William Xu, Chief Strategy Marketing Officer of Huawei.
“We believe that, with the advent of the intelligent society, the computing market will see continuous growth in the future. Currently, the diversity of applications and data is driving heterogeneous computing requirements. Huawei has long partnered with Intel to make great achievements. Together we have contributed to the development of the ICT industry. Huawei and Intel will continue our long-term strategic partnerships and continue to innovate together.”
The launch itself is firmly focused on the developing intelligence economy. With 5G on the horizon and a host of new connected services promised, the tsunami of data and focus on edge-computing technologies is certain to increase. These are segments which are increasingly featuring on the industry’s radar and Huawei might have stolen a couple of yards on the buzzword chasers ahead of the annual get-together in Barcelona.
“With Kirin 980, Huawei has taken smartphones to a new level of intelligence,” said Xu. “With products and services (e.g. Huawei Cloud) designed based on Ascend 310, Huawei enables inclusive AI for industries. Today, with Kunpeng 920, we are entering an era of diversified computing embodied by multiple cores and heterogeneity. Huawei has invested patiently and intensively in computing innovation to continuously make breakthroughs.”
Another interesting angle to this launch is the slight shuffle further away from the US. With every new product which Huawei launches, more of its own technology will feature. In years gone by, should Huawei have wanted to launch any new servers or edge computing products, it would have had to look externally for CPUs. Considering Intel and AMD have a strong position in these segments, supply may have come from the US.
For any other company, this would not be a problem. However, considering the escalating trade war between the US and China, and the fact Huawei’s CFO is currently awaiting trial for violating US trade sanctions with Iran, this is a precarious position to be in.
Cast your mind back to April. ZTE had just been caught red-handed violating US trade sanctions with Iran and was subsequently banned from using any US components or IP within its supply chain. Should the courts find Huawei guilty of the same offence, it is perfectly logical to assume it would face the same punishment.
This is the suspect position Huawei finds itself in and is currently trying to correct. Just before Christmas, Huawei’s Rotating CEO Ken Hu promised its supply chain was in a better position than ZTE’s and the firm wouldn’t go down the same route, while in the company’s New Year’s message, Rotating Chairman Guo Ping said the focus of 2019 would be creating a more resilient business. These messages are backed up by efforts in the R&D team, such as building an alternative to the Android operating system which would power its smartphones should it be banned from using US products.
Perhaps the Kunpeng 920 could be seen as another sign Huawei is distancing itself from the US, while also capitalising on a growing market which is about to blossom.
Revenues in the cloud computing world are growing fast with no end in sight just yet, but Oracle can’t seem to cash in on the bonanza.
This week brought joint-CEOs Safra Catz and Mark Hurd in front of analysts and investors to tell everyone nothing has really changed. Every other cloud business seems to be hoovering up the fortunes brought by the digital era, demonstrating strong year-on-year growth, but Oracle only managed to bag a 2% increase overall, and 1% for the cloud business units.
It doesn’t matter how you phrase it, what creative accounting processes you use, when you fix the currency exchange, Oracle is missing out on the cash grab.
Total revenues were unchanged at $9.6 billion, up 2% in constant currency compared to the same three months of 2017. Cloud Services and License Support plus Cloud License and On-Premise License revenues were up 1% to $7.9 billion. Cloud Services and License Support revenues were $6.6 billion, while Cloud License and On-Premise License revenues were $1.2 billion. Cloud now accounts for nearly 70% of total company revenues, most of it recurring.
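As a quick back-of-envelope check of the “nearly 70%” claim against the reported figures:

```python
# Sanity check: Cloud Services and License Support revenue as a
# share of total quarterly revenue, using the figures reported above.
cloud_services = 6.6   # $ billion
total = 9.6            # $ billion
share = cloud_services / total
print(f"{share:.1%}")  # just under 69%, i.e. "nearly 70%"
```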
Some might point to the evident growth. More money than last year is of course better, but you have to compare the fortunes of Oracle to those who are also trying to capture the cash.
First, let’s look at the cloud market as a whole. Microsoft’s commercial cloud services have an annual run rate of $21.2 billion, AWS stands at $20.4 billion, IBM at $10.3 billion, Google Cloud Platform at $4 billion and Alibaba at $2.2 billion. Oracle’s annual run rate is larger than Google’s and Alibaba’s, though these two businesses are growing very quickly.
According to the RightScale State of the Cloud report, 19% of enterprises are now running applications on Google’s public cloud, 15% on IBM’s, 58% on Microsoft’s and 68% on AWS. Alibaba’s share is very low, though considering the scale potential it has in China, there is great opportunity for a catapult into international markets. Oracle’s applications are running in only 10% of the enterprise organizations which responded to the research.
Looking at market share for the last quarter, AWS unsurprisingly sits at the top of the pile, collecting 34% of cloud spending over the three months, Microsoft was second with around 15%, while Google, IBM and Alibaba also outgrew the rest of the market. Oracle sits in the group of ten providers which collectively accounted for 15% of cloud spending in the last quarter. These are not the most attractive numbers.
Oracle is not a company which is going to disappear from the technology landscape; it is too important a service provider to numerous businesses around the world. However, a once dominant and influential brand is losing its position. Oracle didn’t react quickly enough to the cloud euphoria, and it looks like it’s being punished for it now.
The European Telecommunications Standards Institute, ETSI, has released new specifications on packet formatting and forwarding, as well as two reports on transport and network slicing respectively.
The new specification, called Flexilink, focuses on packet formats and forwarding mechanisms to allow core and access networks to support the new services proposed for 5G. The objective is to achieve efficient deterministic packet forwarding in the user plane for next generation protocols (NGP). In conventional IP networks, built on the Internet Protocols defined in the 1980s, every packet carries all the information needed to route it to its destination. This is undergoing fundamental change with new technologies like Software Defined Networking (SDN) and Control and User Plane Separation (CUPS), where most packets are part of a “flow” such as a TCP session or a video stream. As a result, there is increasingly a separation between the process of deciding the route packets will follow and that of forwarding the packets.
“Current IP protocols for core and access networks need to evolve and offer a much better service to mobile traffic than the current TCP/IP-based technology,” said John Grant, chairman of the ETSI Next Generation Protocol Industry Specification Group (ISG). “Our specifications offer solutions that are compatible with both IPv4 and IPv6, providing an upgrade path to the more efficient and responsive system that is needed to support 5G.”
The new specification defines two separate services, a “basic” service suitable for traditional statistically multiplexed packet data, and a “guaranteed” service providing the lowest possible latency for continuous media, such as audio, video, tactile internet, or vehicle position. It is worth noting that Flexilink only specifies user plane packet formats and routing mechanisms. Specifications for the control plane to manage flows have already been defined in an earlier NGP document “Packet Routing Technologies” published in 2017.
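The control/user plane separation underlying this approach can be sketched as a toy flow table: the first packet of a flow triggers a control-plane routing decision, and every subsequent packet is forwarded by a cheap lookup. This is a minimal illustrative sketch only, with invented names; it is not the Flexilink packet format or any ETSI-specified behaviour.

```python
# Toy sketch of flow-based forwarding (illustrative only).
# The route decision is made once per flow and cached, mirroring
# the separation between deciding a route and forwarding packets.

class FlowForwarder:
    def __init__(self, route_fn):
        self.route_fn = route_fn   # control plane: computes an egress port
        self.flow_table = {}       # user plane: cached per-flow decisions

    def forward(self, flow_id, packet):
        # Only the first packet of a flow consults the routing logic.
        if flow_id not in self.flow_table:
            self.flow_table[flow_id] = self.route_fn(flow_id)
        return (self.flow_table[flow_id], packet)

fwd = FlowForwarder(route_fn=lambda flow_id: hash(flow_id) % 4)
port1, _ = fwd.forward(("10.0.0.1", "10.0.0.2", 443), b"syn")
port2, _ = fwd.forward(("10.0.0.1", "10.0.0.2", 443), b"data")
assert port1 == port2  # same flow, same cached route
```

A real implementation would also distinguish the two service classes, giving “guaranteed” flows reserved, lowest-latency treatment while “basic” flows share statistically multiplexed capacity, but the per-flow caching above is the common foundation.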
The report “Recommendation for New Transport Technologies” analyses current transport technologies such as TCP and their limitations, whilst also providing high-level guidance on the architectural features required in a transport technology to support the new applications proposed for 5G. The report also includes a framework where there is a clear separation between control and data planes. A proof-of-concept implementation was conducted to experiment with the recommended technologies, and to demonstrate that each TCP session can obtain a bandwidth-guaranteed service or a minimum-latency-guaranteed service. The report states:
“With traditional transport technology, for all TCP traffic passes through DIP router, each TCP session can only obtain a fraction of bandwidth. It is related to the total number of TCP sessions and the egress bandwidth (100 M).
“With new transport technology, new TCP session (DIP flows) could obtain its expected bandwidth or the minimum latency. And most [sic.] important thing is that the new service is not impacted by the state that router is congested, and this can prove that new service by new transport technology is guaranteed.”
Importantly, the PoC experiment showed that the current hardware technology is able to support the proposed new transport technology and provide satisfactory scalability and performance.
The report “E2E Network Slicing Reference Framework and Information Model” looks into the design principles behind network slicing. The topic of network slicing encompasses the combination of virtualisation, cloud-centric and SDN technologies, but there is a gap in normalized resource information flow across a plurality of provider administration planes (or domains). The report aims to “provide a simple manageable and operable network through a common interface while hiding infrastructure complexities. The present document defines how several of those technologies may be used in coordination to offer description and monitoring of services in a network slice.” It describes the high-level functions and mechanisms for implementing network slicing, and addresses security considerations.
Despite governments around the world turning against Chinese vendors, Telecom Italia has agreed a new partnership with Huawei based on Software Defined Wide Area Network (SD-WAN) technology.
As part of a strategy aimed at evolving TIM’s network solutions for business customers, Huawei’s SD-WAN technology will be incorporated to create a new TIM service model which will allow customer companies to manage their networks through a single console.
“Today, more than ever, companies need networks that can adapt to different business needs over time, in particular to enable Cloud and VoIP services,” said Luigi Zabatta, Head of Fixed Offer for TIM Chief Business & Top Clients Office. “Thanks to the most advanced technologies available, these networks can be managed both jointly and by customers themselves through simple tools.
“The partnership with Huawei allows us to expand our value proposition for companies and to enrich our offer through the adoption of a technological model that is increasingly and rapidly emerging in the ICT industry.”
The partnership is a major win for Huawei considering the pressure the firm must be feeling as suspicions are piqued around the world. Just as more countries are clamping down on Huawei’s ability to do business, TIM has offered a windfall.
Aside from the ongoing Chinese witch hunt over in the US, the Australians have banned Huawei from participating in the 5G bonanza and Korean telcos have left the vendor off preferred supplier lists. Just to add more misery, the UK is seemingly joining in on the trend.
In recent weeks, a letter was sent out from the Department of Digital, Culture, Media and Sport, and the National Cyber Security Centre, warning telcos of potential impacts to the 5G supply chain from the Future Telecom Infrastructure Review. China was not mentioned specifically, and neither was Huawei, but sceptical individuals might suggest China would be most squeezed by a security and resilience review.
The rest of the world might be tip-toeing around the big question of China, but this partnership suggests TIM doesn’t have the same reservations.