Huawei launches Kunpeng 920 chip to bag big data and edge computing

Huawei has unveiled a new ARM-based CPU called Kunpeng 920, designed to capitalise on the growing euphoria around big data, artificial intelligence and edge computing.

The CPU was independently designed by Huawei based on an ARMv8 architecture license, with the team claiming it improves processor performance by optimizing branch prediction algorithms, increasing the number of OP units and improving the memory subsystem architecture. Another bold claim is that the CPU scores over 930 in the SPECint benchmark test, 25% higher than the industry benchmark.

“Huawei has continuously innovated in the computing domain in order to create customer value,” said William Xu, Chief Strategy Marketing Officer of Huawei.

“We believe that, with the advent of the intelligent society, the computing market will see continuous growth in the future. Currently, the diversity of applications and data is driving heterogeneous computing requirements. Huawei has long partnered with Intel to make great achievements. Together we have contributed to the development of the ICT industry. Huawei and Intel will continue our long-term strategic partnerships and continue to innovate together.”

The launch itself is firmly focused on the developing intelligence economy. With 5G on the horizon and a host of new connected services promised, the tsunami of data and focus on edge-computing technologies is certain to increase. These are segments which are increasingly featuring on the industry’s radar and Huawei might have stolen a couple of yards on the buzzword chasers ahead of the annual get-together in Barcelona.

“With Kirin 980, Huawei has taken smartphones to a new level of intelligence,” said Xu. “With products and services (e.g. Huawei Cloud) designed based on Ascend 310, Huawei enables inclusive AI for industries. Today, with Kunpeng 920, we are entering an era of diversified computing embodied by multiple cores and heterogeneity. Huawei has invested patiently and intensively in computing innovation to continuously make breakthroughs.”

Another interesting angle to this launch is the slight shuffle further away from the US. With every new product Huawei launches, more of its own technology will feature. In years gone by, should Huawei have wanted to launch any new servers or edge computing products, it would have had to look externally for CPUs. Considering Intel and AMD have a strong position in these segments, supply would most likely have come from the US.

For any other company, this would not be a problem. However, considering the escalating trade war between the US and China, and the fact Huawei’s CFO is currently awaiting trial for violating US trade sanctions with Iran, this is a precarious position to be in.

Cast your mind back to April. ZTE had just been caught red-handed violating US trade sanctions with Iran and was subsequently banned from using any US components or IP within its supply chain. Should the courts find Huawei guilty of the same offence, it is perfectly logical to assume it would face the same punishment.

This is the suspect position Huawei finds itself in and is currently trying to correct. Just before Christmas, Huawei’s Rotating CEO Ken Hu promised its supply chain was in a better position than ZTE’s and the firm wouldn’t go down the same route, while in the company’s New Year’s message, Rotating Chairman Guo Ping said the focus of 2019 would be creating a more resilient business. These messages are backed up by efforts in the R&D team, such as building an alternative to the Android operating system which would power its smartphones should it be banned from using US products.

Perhaps the Kunpeng 920 could be seen as another sign Huawei is distancing itself from the US, while also capitalising on a growing market which is about to blossom.

The cloud is booming but no-one seems to have told Oracle

Revenues in the cloud computing world are growing fast with no end in sight just yet, but Oracle can’t seem to cash in on the bonanza.

This week saw joint CEOs Safra Catz and Mark Hurd appear in front of analysts and investors to tell everyone nothing has really changed. Every cloud business seems to be hoovering up the fortunes brought with the digital era, demonstrating strong year-on-year growth, but Oracle only managed to bag a 2% increase in total revenues, and just 1% for the cloud business units.

It doesn’t matter how you phrase it, what creative accounting processes you use or how you adjust for currency exchange: Oracle is missing out on the cash grab.

Total revenues were unchanged at $9.6 billion (up 2% in constant currency) compared to the same three months of 2017. Cloud Services and License Support plus Cloud License and On-Premise License revenues were up 1% to $7.9 billion, with Cloud Services and License Support revenues at $6.6 billion and Cloud License and On-Premise License revenues at $1.2 billion. Cloud now accounts for nearly 70% of total company revenues, most of which is recurring.

Some might point to the evident growth. More money than last year is of course better, but you have to compare the fortunes of Oracle to those who are also trying to capture the cash.

First, let’s look at the cloud market as a whole. Microsoft’s commercial cloud services have an annual run rate of $21.2 billion, AWS stands at $20.4 billion, IBM at $10.3 billion, Google Cloud Platform at $4 billion and Alibaba at $2.2 billion. Oracle’s annual run rate is larger than Google’s and Alibaba’s, though these two businesses are growing very quickly.

According to the RightScale State of the Cloud report, 19% of enterprises are now running applications on Google’s public cloud, 15% on IBM’s, 58% on Microsoft’s and 68% on AWS. Alibaba’s share is very low, though considering the scale potential it has in China, there is great opportunity for a catapult into international markets. Oracle’s applications are running in only 10% of the enterprise organizations who responded to the research.

Looking at the market share gains for last quarter, AWS is unsurprisingly sitting at the top of the pile, collecting 34% of cloud spending over the last three months, with Microsoft in second at around 15%, while Google, IBM and Alibaba also outgrew the rest of the market. Oracle sits in the group of ten providers which collectively accounted for just 15% of cloud spending in the last quarter. These are not attractive numbers.

Oracle is not a company which is going to disappear from the technology landscape; it is too important a service provider to numerous businesses around the world. However, a once dominant and influential brand is losing its position. Oracle didn’t react quickly enough to the cloud euphoria and it looks like it’s being punished for that now.

ETSI publishes new spec and reports on 5G tech

The European Telecommunications Standards Institute, ETSI, has released new specifications on packet formatting and forwarding, as well as two reports on transport and network slicing respectively.

The new specification, called Flexilink, focuses on packet formats and forwarding mechanisms to allow core and access networks to support the new services proposed for 5G. The objective of the new specification is to achieve efficient deterministic packet forwarding in the user plane for next generation protocols (NGP). In conventional IP networks, built on the Internet Protocols defined in the 1980s, every packet carries all the information needed to route it to its destination. This is undergoing fundamental changes with new technologies like Software Defined Networking (SDN) and Control and User Plane Separation (CUPS), where most packets are part of a “flow” such as a TCP session or a video stream. As a result, there is increasingly a separation between the process of deciding the route packets will follow and that of forwarding the packets.
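
To make the distinction concrete, here is a deliberately simplified Python sketch (our own illustration, not code from the ETSI specification): the routing table, interface names and flow IDs are all invented, but the shape of the idea is that with flow-based forwarding the route decision happens once per flow rather than once per packet.

```python
# Illustrative sketch: conventional per-packet destination routing vs
# flow-based forwarding, where the route is decided once per flow and
# reused for every subsequent packet in that flow.

# Conventional IP: the router resolves the next hop from the destination
# address carried in every single packet.
ROUTING_TABLE = {"10.0.0.0/8": "if0", "192.168.0.0/16": "if1"}

def route_per_packet(dst_ip: str) -> str:
    # Simplified longest-prefix match: match on the first octet only.
    if dst_ip.startswith("10."):
        return ROUTING_TABLE["10.0.0.0/8"]
    return ROUTING_TABLE["192.168.0.0/16"]

# Flow-based forwarding: the control plane installs a flow entry once;
# the user plane then forwards by a short flow label, not a full lookup.
class FlowForwarder:
    def __init__(self):
        self.flow_table: dict[int, str] = {}

    def install_flow(self, flow_id: int, dst_ip: str) -> None:
        # The route decision happens once, when the flow is set up.
        self.flow_table[flow_id] = route_per_packet(dst_ip)

    def forward(self, flow_id: int) -> str:
        # Per-packet work is just a constant-time label lookup.
        return self.flow_table[flow_id]

fwd = FlowForwarder()
fwd.install_flow(42, "10.1.2.3")
print(fwd.forward(42))  # every packet of the TCP session reuses the cached route
```

In a real deployment the flow entries would be installed by a separate control plane (an SDN controller or CUPS control function); both halves live in one process here purely for readability.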

“Current IP protocols for core and access networks need to evolve and offer a much better service to mobile traffic than the current TCP/IP-based technology,” said John Grant, chairman of the ETSI Next Generation Protocol Industry Specification Group (ISG). “Our specifications offer solutions that are compatible with both IPv4 and IPv6, providing an upgrade path to the more efficient and responsive system that is needed to support 5G.”

The new specification defines two separate services, a “basic” service suitable for traditional statistically multiplexed packet data, and a “guaranteed” service providing the lowest possible latency for continuous media, such as audio, video, tactile internet, or vehicle position. It is worth noting that Flexilink only specifies user plane packet formats and routing mechanisms. Specifications for the control plane to manage flows have already been defined in an earlier NGP document “Packet Routing Technologies” published in 2017.
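
The two service classes can be illustrated with a toy scheduler (again our own sketch, not the Flexilink mechanism itself, which defines concrete packet formats): guaranteed-service packets are always dequeued ahead of basic statistically multiplexed traffic, which is what bounds their latency.

```python
# Toy model of the two Flexilink service classes: "guaranteed" flows are
# served strictly ahead of "basic" traffic, keeping their queueing delay
# as low as possible.

from collections import deque

class TwoClassScheduler:
    def __init__(self):
        self.guaranteed = deque()   # continuous media: audio, video, tactile internet
        self.basic = deque()        # ordinary statistically multiplexed packet data

    def enqueue(self, packet: str, guaranteed: bool = False) -> None:
        (self.guaranteed if guaranteed else self.basic).append(packet)

    def dequeue(self) -> str:
        # Guaranteed traffic is always served first.
        if self.guaranteed:
            return self.guaranteed.popleft()
        return self.basic.popleft()

sched = TwoClassScheduler()
sched.enqueue("web-1")
sched.enqueue("audio-1", guaranteed=True)
sched.enqueue("web-2")
print(sched.dequeue())  # audio-1 jumps ahead of the basic queue
```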

The report “Recommendation for New Transport Technologies” analyses current transport technologies such as TCP and their limitations, whilst also providing high-level guidance on the architectural features required in a transport technology to support the new applications proposed for 5G. The report also includes a framework in which there is a clear separation between control and data planes. A proof-of-concept implementation was conducted to experiment with the recommended technologies and to demonstrate that each TCP session can obtain a bandwidth-guaranteed or minimum-latency-guaranteed service. The report states:

“With traditional transport technology, for all TCP traffic passes through DIP router, each TCP session can only obtain a fraction of bandwidth. It is related to the total number of TCP sessions and the egress bandwidth (100 M).

“With new transport technology, new TCP session (DIP flows) could obtain its expected bandwidth or the minimum latency. And most [sic.] important thing is that the new service is not impacted by the state that router is congested, and this can prove that new service by new transport technology is guaranteed.”

Importantly, the PoC experiment showed that the current hardware technology is able to support the proposed new transport technology and provide satisfactory scalability and performance.

The report “E2E Network Slicing Reference Framework and Information Model” looks into the design principles behind network slicing. The topic of network slices encompasses the combination of virtualisation, cloud-centric and SDN technologies, but there is a gap in normalized resource information flow over a plurality of provider administration planes (or domains). The report aims to “provide a simple manageable and operable network through a common interface while hiding infrastructure complexities. The present document defines how several of those technologies may be used in coordination to offer description and monitoring of services in a network slice.” It describes the high-level functions and mechanisms for implementing network slicing, and addresses security considerations.

Italians clearly aren’t that suspicious of Huawei

Despite governments around the world turning against Chinese vendors, Telecom Italia has agreed a new partnership with Huawei based on Software Defined Wide Area Network (SD-WAN) technology.

As part of a strategy aimed at evolving TIM’s network solutions for business customers, Huawei’s SD-WAN technology will be incorporated to create a new TIM service model which will allow customer companies to manage their networks through a single console.

“Today, more than ever, companies need networks that can adapt to different business needs over time, in particular to enable Cloud and VoIP services,” said Luigi Zabatta, Head of Fixed Offer for TIM Chief Business & Top Clients Office. “Thanks to the most advanced technologies available, these networks can be managed both jointly and by customers themselves through simple tools.

“The partnership with Huawei allows us to expand our value proposition for companies and to enrich our offer through the adoption of a technological model that is increasingly and rapidly emerging in the ICT industry.”

The partnership is a major win for Huawei considering the pressure the firm must be feeling as suspicions are piqued around the world. Just as more countries are clamping down on Huawei’s ability to do business, TIM has offered a windfall.

Aside from the on-going Chinese witch hunt over in the US, the Australians have banned Huawei from participating in the 5G bonanza and Korean telcos have left the vendor off preferred supplier lists. Just to add more misery, the UK is seemingly joining in on the trend.

In recent weeks, a letter was sent out from the Department of Digital, Culture, Media and Sport, and the National Cyber Security Centre, warning telcos of potential impacts to the 5G supply chain from the Future Telecom Infrastructure Review. China was not mentioned specifically, and neither was Huawei, but sceptical individuals might suggest China would be most squeezed by a security and resilience review.

The rest of the world might be tip-toeing around the big question of China, but this partnership suggests TIM doesn’t have the same reservations.

Nokia gets a bunch more cash from Chinese operators

Nokia is so keen for everyone to know how well it’s doing in China that it makes an announcement every time it wins some business.

Earlier this year we heard all about a ‘framework agreement’ signed with China Mobile that was worth around €1 billion. Today Nokia has announced some more ‘frame agreements’, which are presumably the same thing and refer to a kind of pre-contract that amounts to a formal commitment to do a bunch of business in future.

This time we’re talking €2 billion, but split between all three Chinese MNOs – China Mobile, China Telecom and China Unicom. Presumably the China Mobile bit is fresh cash, not just a recycling of the previous bil. The agreements cover delivery for the next year or so of radio, fixed access, IP routing and optical transport equipment, as well as some SDN and NFV goodness. Nokia is excited by all this transitioning and leveraging.

“We are excited to continue our close collaboration with these important customers in China, to drive new levels of network performance as they transition toward 5G,” said Mike Wang, president of Nokia Shanghai Bell. “Leveraging the breadth of our end-to-end network and services capabilities, we will work closely with China Mobile, China Telecom and China Unicom to deploy technologies that meet their specific business needs.”

It wouldn’t be surprising to see some kind of equivalent announcement by Ericsson before long as the two Nordic kit vendors clearly like to compete over this sort of thing. Not long after its first China Mobile announcement Nokia said it was getting $3.5 billion from T-Mobile US to help out with 5G. Within a few weeks Ericsson had countered with an almost identical announcement of its own.

What does cloud-native really mean for operators?

Telecoms.com periodically invites third parties to share their views on the industry’s most pressing issues. In this piece Dominik Pacewicz, Chief Product Manager for BSS at Comarch examines the term ‘cloud-native’ and asks what it signifies.

Cloud-native services are disrupting many industries. The telecoms industry, however, has long been outstripped by other sectors in the adoption of new technology. At the same time, service providers see a great opportunity to catapult themselves into the digital age through a spirited combination of cloud-nativeness, 5G networks and virtualization.

The term “cloud-native” is two-faceted. It entails both the technology used as well as the more strategic design aspect, signifying the direction many enterprises want to take with their applications. This strategy would require a broader look at the meaning of cloud-nativeness, going beyond the usual cloud-native “triad” of microservices, containers and PaaS (Platform as a Service) to include 5G and network virtualization.

Focus on microservices for consistent quality

Microservices are a set of autonomous, loosely coupled services. The approach is often contrasted with rigid, siloed architectures because microservices are self-contained: they have their own data models, repositories and functions, which can be accessed only through their own APIs. Microservices essentially break down applications into their core functions. In the case of a hypothetical cloud-based streaming platform, these microservices could fulfil separate functions such as search, customer rating, recommendations and product catalogue.

The practice of using microservices comes from the realization that today’s users expect a flexible yet consistent experience across all devices, which creates high demand for modular and scalable cloud-based architecture.
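
A minimal sketch of that idea, with invented service names and fields: each microservice owns its data model and repository, and other services can reach that data only through its public API, never through shared state.

```python
# Hypothetical illustration of the microservice principle described above,
# using the streaming-platform example: the catalogue and ratings functions
# are self-contained services with private data and a small public API.

class CatalogueService:
    def __init__(self):
        self._titles = {}          # private data model: nothing reads this directly

    def add_title(self, title_id: str, name: str) -> None:
        self._titles[title_id] = {"name": name}

    def get_title(self, title_id: str) -> dict:   # the service's public API
        return self._titles[title_id]

class RatingService:
    def __init__(self):
        self._ratings = {}         # separate repository, separate schema

    def rate(self, title_id: str, stars: int) -> None:
        self._ratings.setdefault(title_id, []).append(stars)

    def average(self, title_id: str) -> float:    # public API only
        scores = self._ratings.get(title_id, [])
        return sum(scores) / len(scores) if scores else 0.0

# The services compose only through their APIs, so either can be
# redeployed or scaled without touching the other.
catalogue, ratings = CatalogueService(), RatingService()
catalogue.add_title("t1", "5G: The Movie")
ratings.rate("t1", 4)
ratings.rate("t1", 5)
print(catalogue.get_title("t1")["name"], ratings.average("t1"))  # → 5G: The Movie 4.5
```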

Use containers for service peaks and troughs

Containers are the frameworks used to run individual microservices. They can hold different types of software code, allowing it to run simultaneously over different runtime environments such as production, testing and integration. Containers make microservice-based applications portable, since they can be created or deleted dynamically. Performance can be scaled up or down with precision to treat bottlenecks – for instance, during Black Friday a CSP can predict the increased demand for its online and offline sales and scale the affected domain, with negligible impact on all others.

Containers are an essential part of cloud-native architecture because the same container, managed with exactly the same Open Source tools, can be deployed on any cloud. It will not impact the operator’s virtual servers or computing systems.
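
The scale-up/scale-down decision itself can be reduced to a one-line proportional rule, similar in spirit to the algorithm used by container orchestrators such as Kubernetes; the numbers and thresholds below are invented purely for illustration.

```python
# Invented sketch of the autoscaling logic the passage describes: replicas
# of one container image are added or removed to track demand (say, a
# predicted Black Friday peak), while other domains are left untouched.

import math

def desired_replicas(current: int, load_per_replica: float,
                     target_load: float = 0.5,
                     min_replicas: int = 1, max_replicas: int = 20) -> int:
    """Proportional autoscaling rule: scale the replica count in
    proportion to observed vs target utilisation, clamped to sane bounds."""
    wanted = math.ceil(current * load_per_replica / target_load)
    return max(min_replicas, min(max_replicas, wanted))

# Black Friday peak: each of 4 replicas is running at 150% of target load.
print(desired_replicas(current=4, load_per_replica=1.5))    # scales up to 12

# Quiet period: 8 replicas at 12.5% load each.
print(desired_replicas(current=8, load_per_replica=0.125))  # scales down to 2
```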

Utilize PaaS for different capabilities

PaaS provides the foundation for software to be developed or deployed – somewhat similar to the operating system for a server or an entire network. All of this happens online, and PaaS provides an abstraction layer for networking, storage and compute that allows the network infrastructure to grow and scale. PaaS creates an environment in which the software, the operating system and the underlying hardware and network infrastructure are all taken care of. The user only has to focus on application development and deployment.

Using PaaS enables the harmonization of all elements of the cloud environment by integrating various cloud services. This in turn leads to virtualized processes of web application development, while developers still retain access to the same tools and standards.

5G is the cloud on steroids

The traditional “triad” of cloud-nativeness is not enough for the perfect, uninterrupted cloud application experience. There’s one asset missing – the 5G network. One reason why 5G is important for cloud-native environments, particularly for mobile cloud app development, is that striking the right balance between efficiency and the number of functionalities is a tough nut to crack. This is due to the high latency and the unreliable connectivity of some mobile devices.

Apart from LAN-like speeds for mobile devices, 5G can deliver lower battery consumption, broader bandwidth, greater capacity (1,000 times that of 4G) and a substantial reduction in latency (as much as 50-fold). Latency is the main limiting factor when working with client-server architectures. What could follow is improved wireless range, increased capacity per cell tower and greater consistency.

The ‘cloud experience’ for mobile devices will be completely reshaped as a result of the adoption of 5G technology and mobile cloud applications will rival – or even surpass – their versions relying on corporate LAN connectivity to the desktop in terms of the number of offered functionalities.

Bridging the Gap with Network Virtualization

A key innovative element of NFV is the concept of VNF (Virtual Network Function) forwarding graphs, which enable the creation of service topologies that are not dependent on the physical topology. Network virtualization allows operators to allocate resources according to traffic demand. Operators can exert control over the network while managing “slices” of it, without having to spend on infrastructure upkeep.
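
A VNF forwarding graph can be pictured as nothing more than an ordered graph of functions plus a separate placement map; the function and host names below are invented, but they show why the service topology is independent of the physical one.

```python
# Illustrative sketch of a VNF forwarding graph: the service chain is
# defined as a logical graph, while the mapping of each function onto
# physical hosts lives in a separate, freely changeable table.

# Logical service topology: packets traverse firewall -> NAT -> video optimiser.
FORWARDING_GRAPH = {
    "ingress": "firewall",
    "firewall": "nat",
    "nat": "video_optimiser",
    "video_optimiser": "egress",
}

# Physical placement: the operator can move or scale a VNF on different
# hardware without touching the service graph above.
PLACEMENT = {"firewall": "host-a", "nat": "host-a", "video_optimiser": "host-b"}

def service_path(graph: dict[str, str]) -> list[str]:
    """Walk the forwarding graph from ingress to egress, returning the
    ordered list of VNFs a packet traverses."""
    path, node = [], "ingress"
    while node != "egress":
        node = graph[node]
        if node != "egress":
            path.append(node)
    return path

chain = service_path(FORWARDING_GRAPH)
print(chain)                               # ['firewall', 'nat', 'video_optimiser']
print([PLACEMENT[vnf] for vnf in chain])   # ['host-a', 'host-a', 'host-b']
```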

For this reason, NFV is leading the evolution of the 5G network ecosystem. Virtualizing the Evolved Packet Core (EPC) has emerged as a leading use case and one of the most tangible examples of the advantages of virtualization. The vEPC abstracts and decomposes the EPC functions, allowing them to run in combinations as COTS software instances. This approach allows CSPs to design networks in new ways that drastically reduce costs and simplify operations. Perfect conditions for 5G.

On the access side, the Cloud Radio Access Network (C-RAN) is a highly complementary technology to vEPC. C-RAN deployment, which virtualizes many of the RAN functionalities on standard CPUs, is seen as an important technology enabler for reducing the total cost of ownership (TCO) associated with the RAN. The amount of investment and the operational costs are expected to decrease quickly thanks to maturing cloud technologies and deployment experience. The C-RAN approach facilitates faster radio deployment, drastically reducing the time needed for conventional deployments.

In the race to 5G, telcos are steadily introducing function virtualization to gain software control over their networks. C-RAN and vEPC both help to create bespoke data pathways that meet highly specified network requirements of applications – staying true to 5G‘s vision.

The power of now

So, what does ‘cloud-native’ mean for operators? All the interdependencies between the cloud and the enabling technologies mean that the true cloud-native experience involves not just the traditional “triad” of microservices, containers and PaaS. Network virtualization and 5G are key elements in the search for efficient, uninterrupted and feature-rich cloud-based services and applications, and will make previously impossible cloud-native use cases feasible.

Thanks to operators who experimented with virtualization and conducted early 5G trials, telcos will be the first to have all the necessary technology in place to succeed in the cloud. Will operators take full advantage of this head start – or will they once again be beaten to the finish line and fail to capitalize on the technology they championed?


Dominik Pacewicz is the head of BSS product management at Comarch. He has been with Comarch for over six years and works with a number of mobile operators, helping them to simplify and automate their networks.

Nokia launches some actual applications for SDN

All the hype surrounding software-defined networking is finally starting to yield some tangible results in the form of three apps from Nokia.

Deciding to kill two buzzwords with one stone, Nokia is claiming its new WaveSuite open applications will jump-start optical network digital transformation. It consists of three apps: Service Enablement, Node Automation and Network Insight. The point of these apps is apparently to offer businesses a new degree of access to networks that is expected to yield novel commercial opportunities.

To help us get our heads around this new piece of networking arcana we spoke to Kyle Hollasch, Director of Marketing for Optical Networking at Nokia. He was most keen to focus on the service enablement app, which he said is “the first software that tackles the issue of resell hierarchy.”

Specifically we’re talking about the reselling of fixed line capacity. This app is designed to massively speed up the capacity reselling process, with the aim of turning it into a billable service. The slide below visualises the concept, in which we have the actual network owner at the base and then several levels of capacity reselling, allowing greater degrees of specialisation and use-case specific solutions.

Nokia WaveSuite slide 1

The node automation app allows network nodes to be controlled via an app on a smartphone, thanks to the magic of SDN. In fact this would appear to be the epitome of SDN, as it’s only made possible by that technology. The slide below shows how it is, at least in theory, possible to interact with a network element via a smartphone, which also opens up the ability to use other smartphone tools such as the GPS and camera.

Nokia WaveSuite slide 2

The network insight app seems to do what it says on the tin, so there doesn’t seem to be the need for further explanation at this stage. “These innovations are the result of years of working closely with our customers to address all aspects of optical networking with open applications enhancing not just operations, but opening up new services and business models,” said Sam Bucci, Head of Optical Networks for Nokia.

As a milestone in the process of virtualizing networks and all the great stuff that’s supposed to come with that, the launch of actual SDN apps seems significant. Whether or not the market agrees and makes tangible business use of these is another matter, however, and only time will tell if good PowerPoint translates into business reality.

BBWF 2018: Consumers don’t care about tech, just connectivity – BT

Today’s consumer is demanding but uninterested. They don’t care about mobile or broadband or wifi, just top-line connectivity. To meet these demands, BT has pointed to network convergence.

Speaking at Broadband World Forum, Howard Watson, BT’s CTIO, outlined the bigger picture. It’s all about convergence, where the dividing lines between wireless and fixed or hardware and software are blurred and connectivity is viewed as a single concept, bringing together network design, technology convergence and customer insight to create a single software-orientated network for device-neutral connectivity.

“For the consumer, it’s not about their wifi, or their mobile connection, or their fixed broadband, or even their landline,” said Watson. “It’s about connectivity as a whole. And I’m pleased to say we’re already making strong progress here.”

Of course, it wouldn’t be a telco conference without mentioning 5G, and this is a critical component of the BT story. Trials have already begun in East London, though over the next couple of days 10 additional nodes will be added to expand the test. Plans are already underway to launch a converged hardware portfolio, introduce IP voice for customers and create a seamless wifi experience. All of this will be built on a single core network.

But what does this mean for the consumer? Simplicity in the simplest of terms.

The overall objective is to create a seamless connectivity experience which caters to the consumer’s indifference to anything but being connected. Soon enough, devices will be able to automatically detect and select the best connectivity option, whether that is wifi or cellular, essentially meaning consumers will not have to check anything on their devices. Gone will be the days where you have to worry about your device clinging onto a weak wifi signal or being disrupted by a network reaching out to your device, according to Watson. Signing in will become a distant memory as the consumer seamlessly shifts from wifi to mobile.
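
As a back-of-the-envelope illustration (this is not BT’s actual algorithm, and the scores are invented), the selection logic might look like a scored choice with hysteresis, so the device switches bearers only when another option is clearly better rather than clinging to weak wifi or flapping between networks:

```python
# Toy sketch of automatic bearer selection: score each available
# connection (0-100) and hand over only when the improvement clears a
# hysteresis threshold, avoiding ping-ponging between networks.

def pick_bearer(options: dict[str, float], current: str,
                hysteresis: float = 10.0) -> str:
    """Return the bearer to use given signal-quality scores for each
    available option and the bearer currently in use."""
    best = max(options, key=options.get)
    if best != current and options[best] - options[current] < hysteresis:
        return current          # not enough improvement to justify a handover
    return best

# Weak home wifi vs healthy 4G: the device silently moves to cellular.
print(pick_bearer({"wifi": 20.0, "cellular": 75.0}, current="wifi"))   # cellular
# Marginal difference: stay put rather than flap between the two.
print(pick_bearer({"wifi": 70.0, "cellular": 74.0}, current="wifi"))   # wifi
```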

This is of course a grand idea, and there is still a considerable amount of work to be done. Public wifi is pretty woeful as a general rule, and mobile connectivity is patchy in some of the busiest and remotest regions in the UK, but in fairness to BT, it does look like a sensible and well thought out plan.

With telcos increasingly being treated as utilities, these organizations need to start adding value to the lives of the consumer. Connectivity is not enough anymore, as it has become a basic expectation rather than a luxury in today’s digitally-defined society; providing the seamless experience might just be one way BT can prove its value. Fortunately, with its broadband footprint, EE’s mobile network and 5,000 public wifi spots throughout the UK, BT is in a strong position to make the converged network dream a reality.