Bringing Internet to the Other Half of the World

According to data published by the International Telecommunication Union (ITU), the United Nations agency overseeing the telecoms industry, 3.9 billion people, or 51.2% of the world’s total population, were connected to the internet by the end of 2018. While the 50% mark was hit half a year earlier than the agency’s previous estimate, it nevertheless means that almost half of the world’s population remains unconnected.

Here we are sharing the opening section of this Telecoms.com Intelligence special briefing, which looks at the status of the unconnected and under-connected parts of the world and explores how the industry, as well as the public sector, can overcome the challenges of bringing the internet to the half of the world yet to be connected.

The full version of the report is available for free to download here.

Introduction: why half of the world is still unconnected

Low internet penetration is particularly acute in developing countries. While 81% of the population in the developed world is already using the internet, only 45% in developing countries can do so. Among them, fewer than 20% of the population in the 47 least developed countries, defined as “low-income countries that are suffering from long-term impediments to growth”, enjoy this luxury.

Source: ITU

There are three leading factors that leave a large part of the world off the grid. The first two are interlinked one way or another; the third is outside the telecoms industry’s remit. The most obvious one is pure economics. Diminishing marginal return or increasing marginal cost, often both at the same time, means operators are less and less motivated to connect the next subscriber than the last. This could be down to the distribution of population: the more sparsely populated a location is, the less rewarding it becomes for operators to reach it, because, even if the returns are assumed to be equal, the cost will be higher. It could also be related to the socio-economic status of the people: the less well-off the population is, the less attractive it becomes for operators to make the effort, because, even if the cost could be assumed equal, the return would be lower.
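To make the marginal-economics argument concrete, here is a rough back-of-the-envelope sketch in Python. The figures are purely hypothetical illustrations, not numbers from the ITU or the briefing; only the shape of the calculation matters.

```python
# Hypothetical illustration of the marginal economics of adding one more site.
# None of these figures come from the ITU data; they only show the mechanism.
def marginal_case(cost_per_site, people_reached, arpu_per_year):
    """Yearly revenue an additional site brings in, minus its cost."""
    return people_reached * arpu_per_year - cost_per_site

# Dense urban site: many potential subscribers per site, modest build cost.
print(marginal_case(cost_per_site=50_000, people_reached=5_000, arpu_per_year=60))  # 250000

# Sparse rural site: few subscribers, higher build and backhaul cost,
# so the same ARPU assumption produces a negative business case.
print(marginal_case(cost_per_site=80_000, people_reached=300, arpu_per_year=60))    # -62000
```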

There are also technology barriers. Unfriendly terrain, for example mountainous areas, proves extremely challenging for operators to overcome. Related to the economic factors, these areas are also typically not the most densely populated. Satellite communication can be used as an emergency solution, but it is too costly as a regular internet access mode, and too costly for operators to provide where there is not a sizeable user base, especially business users.

In some cases, the hurdle is simply too high for telecoms alone to clear. High internet penetration in North Korea is highly unlikely to happen in the near future without a fundamental change to the country itself, for example.

With these considerations in mind, this report will address the first two factors affecting internet penetration: economics and technology. Specifically, it will attempt to provide answers to these questions:

  • On the supply side, what technology solutions have been made available to drive down the cost level, and therefore make connecting the unconnected more appealing to telecom operators? What is still debated or desired? What business case do they present to operators?
  • On the demand side, what factors need to be in place for the unconnected population to be able to afford the connection, and to be willing to embrace it? And what are the factors beyond cost that may also drive the demand?

——————————-

The rest of this briefing includes sections on:

  • Supply side solutions: OpenRAN, TIP, and all that
  • The drivers for demand: affordability and more
  • What else should be in place?
  • An interview with Thecla Mbongue, Senior Analyst, Ovum

To access the full briefing please click here

Is AI overhyped in the telecoms space?

At an industry event today, a panel of experts revealed the level of artificial unintelligence the industry has to offer.

Is AI just a buzzword? Is it a trend that doesn’t live up to its punchy abbreviation? The topic came up during an executive panel themed ‘Assessing AI’s application in the telecommunications industry’ at the Telco AI Summit Europe event in London, and the speakers were very open with their views.

“The AI that relates to the network sorting itself out,” said José Palazón, Director CTO, Chief Data Office at Telefónica. “It’s such a huge space and there’s a lot of work we can do with that but there’s a big dependency on the infrastructure. It doesn’t matter if AI is amazing if, at the end of the day, you don’t have the pieces of technology to interact with. Maybe it’s not entirely overhyped but the results are overhyped in the sense that AI has come a little later than people expected.”

Similarly, Shaun Chang, Vice President of Mobility Intelligence at Groundhog Technologies, feels that Customer Experience Management (CEM) is overhyped: “In my opinion, it’s a very complicated problem,” said Chang. “From a vendor point of view, if an operator said they had a magic formula or model for customer experience or for a churn model – don’t believe that.

“In our experience, from working with operators, we found out that these are very complicated and contain many different variables that need to be considered. The better approach is that operators need to develop their inhouse capability to tackle individual problems one by one. Eventually, after time, telcos will have a better understanding of the true customer experience. It will require a lot of AI and analytics but not the traditional way. An organisational change will be required too.”

Panel moderator, Mark Beccue, Principal Analyst at Tractica, sought the opinion of Ludovic Lévy, VP Data Strategy and Governance at Orange Group: “That’s the first time I agree with what’s been said so far,” said Lévy. “And I have a third point: the 360-degree view of the customer, meaning the capacity to reconcile all the customer touch points, including the data that relates to the user.

“Firstly, it’s really difficult to create a unique ID for each customer across all the data, in particular the data coming from the network. It’s really challenging, and we have not caught up on the 360-degree customer, which I think is overhyped too.”

Overall, the hype is here. The hype is real. If we are at the ‘dawn of AI’, as stated by keynote speaker Lucy Lombardi, Senior Vice President, Digital & Ecosystem Innovation at Telecom Italia, then the question of hype versus reality is one that cannot be answered. Yet.


Find out more AI content from Telco AI Summit Europe by browsing their Insights & Resources page here.

Xiaomi goes Suomi for camera research

The Chinese smartphone maker Xiaomi has set up its largest imaging-technology R&D centre outside of China in Finland.

Xiaomi announced today that it has opened an R&D centre in Tampere, western Finland, to focus on smartphone camera technologies, including camera algorithms, machine learning, signal processing, and image and video processing. This will be Xiaomi’s largest camera R&D team outside of China, the company says.

“The setup of this R&D team in Finnish city Tampere is a milestone in our global expansion journey. In this journey, not only do we consolidate ourselves in operations and business, but also work with local talents to further improve our products with highly innovative technologies,” said Wang Xiang, Senior Vice President of Xiaomi, adding that “this move all the more highlights our longstanding commitment of ‘innovation for everyone’.”

As first reported by the website Suomimobiili.fi, Xiaomi’s local business entity, Xiaomi Finland Oy, was incorporated in May and has rented office space for around two dozen employees at the Hermia Technology Park (Hermia-teknologiapuisto), not far from the University of Tampere’s technology campus, which is rated as one of the leading facilities for imaging-related research.

Tampere used to be a key R&D centre for Nokia, giving the Finnish phone maker its leadership in camera phones. As Xiaomi’s press release acknowledged, Tampere “has been greatly contributing to camera and imaging related innovations of leading smartphone brands since the 1990s.” That legacy is not lost. According to an earlier report by the local newspaper Aamulehti, Nokia entered into a significant patent licensing agreement with Xiaomi two years ago.

Jarno Nikkanen, one of Xiaomi’s first Finnish employees and the Head of Xiaomi Finland R&D, was a Nokia veteran, with a PhD in signal processing from the Tampere University of Technology (now merged with the University of Tampere). He started his current role in June, according to his LinkedIn profile. “Xiaomi’s philosophy has been innovative and highly engaging. It’s all about empowering the teams and individuals to find solutions on their own. What we’re developing in Tampere will end up in the hands of hundreds of millions of users and Mi Fans around the world. That is really motivating,” said Nikkanen in the press release.

Xiaomi was not the first smartphone company to tap into local talent in Finland following the capitulation of Nokia’s phone business. Huawei set up its first R&D centre in Helsinki in 2012, to conduct new technology research for mobile devices, then a new facility in Tampere in 2016, to focus on camera, audio and imaging technologies for consumer electronics.

Vodafone Business and América Móvil team up to woo IoT customers

The B2B group of Vodafone has entered a partnership agreement with América Móvil to provide IoT customers with an international roaming service for things.

The press release does not disclose much detail on how Vodafone Business and América Móvil will combine their IoT platforms or share their expertise in IoT connectivity and services with each other, for example whether this would involve the two platforms running the same applications or adopting the same protocols. Instead, the statement stays high-level, claiming the partnership between the two companies will “make it easier than ever for customers to connect devices globally.”

“With this agreement we further extend our IoT global footprint by partnering with one of Latin America’s strongest players,” said Vinod Kumar, CEO of Vodafone Business. “América Móvil´s coverage and expertise across Latin America will help us support our global customers in a part of the world where we have seen a surge in IoT adoption.”

“In América Móvil we believe in win-win partnerships that benefit our customers,” added Marco Quatorze, Director of Value Added Service at América Móvil. “We are excited about the partnership with Vodafone Business that provides our joint customers with the best user experience of two leading technology providers.”

Vodafone Business has been actively improving its IoT offerings. The company claimed its IoT platform is connecting 89 million devices worldwide. However, even assuming all these connections are cellular-based, they would still be a small fraction, less than a tenth, of the global total of 1.0 billion cellular IoT connections reported by the latest (June 2019) Ericsson Mobility Report.

Therefore, the tie-up with América Móvil may indeed become a win-win partnership. Vodafone Business’ own research has shown that the Americas are the market, and transport and logistics the sectors, where IoT has seen the fastest growth. These are a natural fit for a “roaming service” for things, which would allow the tracking, monitoring, and optimising of routes for goods to continue even after the cargo has left the coverage of one operator, in this case moving from one continent to another.

For América Móvil, better known for its consumer services (the company says it is connecting 362 million access lines) but also becoming more active in serving business customers, the partnership with Vodafone Business will help it expand its footprint into Vodafone territories in Western and Southern Europe (in Europe, América Móvil operates in Austria and six Eastern European and Balkan countries). Additionally, it may also enable América Móvil to leverage Vodafone’s technology solutions.

At the beginning of the year, Vodafone Business announced a $550 million joint managed service deal with IBM that also covers 5G, AI, and other advanced technologies. Kone, the Finland-based lift company and existing Vodafone customer, has expressed interest in the IoT capability of that new “joint venture”.

Google claims quantum computing breakthrough, IBM disagrees

Google says it has achieved ‘quantum supremacy’, as its Sycamore chip performed a calculation, which would have taken the world’s fastest supercomputer 10,000 years, in 200 seconds.

It seems quite a remarkable upgrade, but this is the potential of quantum computing. This is not a step-change in technology, but a revolution on the horizon.

Here, Google is claiming its 53-qubit computer performed a task in 200 seconds, which would have taken Summit, a supercomputer IBM built for the Department of Energy, 10,000 years. That said, IBM is disputing the claim suggesting Google is massively exaggerating how long it would take Summit to complete the same task. After some tweaks, IBM has said it would take Summit 2.5 days.

Despite the potential for exaggeration, this is still a breakthrough for Google.

For the moment, it seems to be nothing more than a humble brag. Like concept cars at the Tokyo Motor Show, the purpose is to inflate the ego of Google and create a perception of market leadership in the quantum computing world. Although this is an area which could be critically important for the digital economy in years to come, the technology is years away from being commercially viable.

Nonetheless, this is an impressive feat performed by the team. It demonstrates the value of persisting with quantum computing and will have forward-thinking, innovative data scientists around the world dreaming up possible applications of such power.

At the most basic level, quantum computing is a new model of how to build a computer. The original concept is generally attributed to David Deutsch of Oxford University, who, at a conference in 1984, pondered the possibility of designing a computer based exclusively on quantum rules. After he published a paper a few months later, which you can see here if you are brave enough, the race to create a quantum computer began.

Today’s ‘classical’ computers store information in binary, where each bit is either on or off. Quantum computers use qubits, which can be on or off, as well as both on and off at the same time. This might sound incredibly complicated, but the best way to explain it is to imagine a sphere.

In classical computing, a bit can be represented by the poles of the sphere, with zero representing the south pole and one the north, but in quantum computing, any point on the sphere can represent the state. This is achieved through a concept called superposition, which means qubits can represent a one or a zero, or both at the same time. For example, two qubits in a single superposition can represent four different scenarios at once.
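To make the ‘four scenarios’ point concrete, here is a minimal sketch in Python/NumPy; it illustrates the textbook maths only and is not taken from Google’s or IBM’s systems.

```python
import numpy as np

# A single qubit is a length-2 vector of complex amplitudes; |0> and |1> are
# the "poles", and superposition is any normalised mix of the two.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)       # equal superposition of 0 and 1

# Two qubits live in the combined (tensor product) space, so the joint state
# carries 2 x 2 = 4 amplitudes, one for each scenario: 00, 01, 10 and 11.
two_qubits = np.kron(plus, plus)
print(two_qubits)                       # four amplitudes, each 0.5
print(np.abs(two_qubits) ** 2)          # measurement probabilities, summing to 1
```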

Regardless of whether you understand the theoretical science behind quantum computing, the important takeaway is that it will allow computers to store, analyse and transfer information much more efficiently. As you can see from the claim Google has made, completing a calculation in 200 seconds as opposed to 10,000 years is a considerable upgrade.

This achievement can be described as ‘quantum supremacy’, in that the chip has performed a calculation which is realistically impossible on classical computing platforms. From IBM’s perspective, this is a step forward, but not ‘quantum supremacy’, if its computer can complete the same task in 2.5 days.
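For a sense of scale, the two claims translate into very different speed-up factors; the quick arithmetic below uses only the figures quoted in this article.

```python
SECONDS_PER_DAY = 24 * 3600
sycamore_seconds = 200                                   # Google's quoted runtime

# Google's framing: 10,000 years on Summit vs 200 seconds on Sycamore.
summit_seconds_google_claim = 10_000 * 365.25 * SECONDS_PER_DAY
print(summit_seconds_google_claim / sycamore_seconds)    # ~1.6 billion times faster

# IBM's counter-claim: a tweaked Summit run would take about 2.5 days.
summit_seconds_ibm_claim = 2.5 * SECONDS_PER_DAY
print(summit_seconds_ibm_claim / sycamore_seconds)       # ~1,080 times faster
```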

If this still sounds baffling and overly complex, that is because quantum computing is a field of technology only the tiniest fraction of the world’s population understands. This is cutting-edge science.

“In many ways, the exercise of building a quantum computer is one long lesson in everything we don’t yet understand about the world around us,” said Google CEO Sundar Pichai.

“While the universe operates fundamentally at a quantum level, human beings don’t experience it that way. In fact, many principles of quantum mechanics directly contradict our surface level observations about nature. Yet the properties of quantum mechanics hold enormous potential for computing.”

What is worth taking away here is that understanding the science is not at all important once it has been figured out by people far more intelligent. All normal people need to understand is that this is a technology that will enable significant breakthroughs in the future.

This might sound patronising, but it is not supposed to. Your correspondent does not understand the mechanics of the combustion engine but does understand the journey between London and South Wales is significantly faster by car than on horse.

But what could these breakthroughs actually be?

On the security side, although quantum computing could crack the end-to-end encryption software which is considered unbreakable today, it could theoretically enable the creation of hack-proof replacements.

In artificial intelligence, machine learning is a perfect area for quantum computing to be applied. The idea of machine learning is to collect data, analyse that data and make incremental improvements to the algorithms being integrated into software. Analysing the data and applying the lessons learned takes time, which could be dramatically reduced with the introduction of quantum computing.

Looking at the pharmaceutical industry, in order to create new drugs, chemists need to understand the interactions between various molecules, proteins and chemicals to see whether medicines will cure diseases or introduce dangerous side-effects. Due to the eye-watering number of combinations, this takes an extraordinary amount of time. Quantum computing could significantly reduce the time it takes to understand those interactions, and could also be combined with analysis of an individual’s genetic make-up to create personalised medicines.

These are three examples of how quantum computing could be applied, but there are dozens more. Weather forecasting could be improved, climate change models could be more accurate, or traffic could be better managed in city centres. As soon as the tools are available, innovators will come up with the ideas of how to best use the technology, probably coming up with solutions to challenges that do not exist today.

Leading this revolutionary approach to computing is incredibly important for any company which wants to dominate the cloud industry in the futuristic digital economy, which is perhaps the reason IBM felt it was necessary to dampen Google’s celebrations.

“Building quantum systems is a feat of science and engineering and benchmarking them is a formidable challenge,” IBM said on its own blog.

“Google’s experiment is an excellent demonstration of the progress in superconducting-based quantum computing, showing state-of-the-art gate fidelities on a 53-qubit device, but it should not be viewed as proof that quantum computers are “supreme” over classical computers.”

Google measured the success of its own quantum computer against IBM’s Summit, a supercomputer which is believed to be the most powerful in the world. By altering the way Summit approaches the same calculation Google used, IBM suggests Summit could come to the same conclusion in 2.5 days rather than 10,000 years.

Google still has the fastest machine, but according to IBM the speed increase does not deserve the title of ‘quantum supremacy’. It might not be practical to ask a computer to process a calculation for 2.5 days, but it is not impossible, therefore the milestone has not been reached.

What is worth noting is that a pinch of salt should be taken with both the Google and IBM claims. These are companies attempting to gain an edge and undermine a direct rival. There is probably some truth and some exaggeration in both statements.

And despite this being a remarkable breakthrough for Google, it is of course way too early to get excited about the applications.

Not only is quantum computing still completely unaffordable for almost every application data scientists are dreaming about today, but the calculation performed was also very simple. Drug synthesis, or traffic management where every traffic signal attempts to understand the route of every car in a major city, are much more complicated problems.

Scaling these technologies so they are affordable and feasible for commercial applications is still likely to be years away, but as Bill Gates famously stated: “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”

Creating customer values with AI

5G represents a paradigm shift not only in the deployment, management, and operation of networks, but also the services that can be delivered and the experiences customers can have. For 5G networks to deliver value for the operators, through both improved efficiency and top line growth, artificial intelligence (AI) has important roles to play.

All this should start with operators having full visibility of what is going on in their systems, including data on the networks, on the services running over them, and on the customers.

Data on, and analysis of, the networks cover three dimensions: quality, value, and development.

Specifically, the data gathered by the software should report network quality down to cell level, for example which cells are generating the highest value, and what the next step of network development should focus on.

Such data can then be used in multiple ways to deliver value for the operators. Actionable insights, developed through analysing the comprehensive data, can support precise network planning and value-based network construction. This is particularly meaningful when operators roll out 5G networks, when there is little historical reference. The AI tool should be able to provide precise planning solutions based on the existing 4G networks to make full use of the existing network and environment information to achieve an exact match between 5G network planning and user requirements.

When it comes to network construction, the network expansion plan should be developed by the AI tool with intelligent network prediction, scenario-based site selection and site value sorting. The output should be an agile expansion construction solution that can deliver material improvement of efficiency over the conventional, manual planning mode and flexibly support the phased network deployment.

The AI-derived intelligence will also play a key role in improving operational efficiency, especially by accurately dispatching work orders. Using big data and AI network analytics, the tools will predict faults and dispatch on-site inspections more accurately. The machine-learning mechanism means the field data from those inspections can be compared with the predictions, continuously improving the accuracy of the tool.
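As a rough illustration of that predict-dispatch-compare loop, here is a minimal Python sketch. It is an assumed workflow using a generic scikit-learn classifier as a stand-in, not a description of any vendor’s actual implementation.

```python
# Hypothetical predict -> dispatch -> compare -> retrain loop for cell faults.
from sklearn.ensemble import RandomForestClassifier

def plan_inspections(kpi_history, fault_history, latest_kpis):
    """Train on labelled history, predict which cells are likely to fail,
    and return the cells that should receive an on-site work order."""
    model = RandomForestClassifier()
    model.fit(kpi_history, fault_history)          # learn from past KPI/fault data
    predicted = model.predict(latest_kpis)         # 1 = fault expected, 0 = healthy
    return [cell for cell, flag in enumerate(predicted) if flag == 1]

# After the inspections, the observed outcomes are appended to kpi_history and
# fault_history, so the next planning cycle retrains on more data and the
# predictions keep getting sharper.
```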

Another area where AI can help improve efficiency is testing and optimisation. Drive tests, the conventional way of testing cell parameters, can be largely replaced by virtual drive tests, which evaluate network performance more comprehensively and minimise the need for physical drive tests.

Without manual intervention, the coverage and capacity are optimised automatically using data analytics, and the power and antenna feeder are adjusted and optimised automatically. The automation will considerably shorten the optimisation time.

Service insight refers to real-time accurate analytics and insight of both voice and data services running on the system.

Most networks carry legacies from 2G through to LTE, and more and more are adding 5G on top. The AI tools that generate service insights need to report instantly on the status of voice services on 2G and 3G networks, as well as Voice over LTE (VoLTE), whose infrastructure will also support 5G voice. Meanwhile, the tools should also report on the status of services running over the data networks, from entertainment to industry applications, and diagnose the root cause of any failures or sub-optimal performance. The intelligence generated will be used to improve services, and to prevent the same failures from happening again.

Another critical area to apply the intelligence is to support new services. For example, when 5G is first launched, the focus is on eMBB services, including HD video streaming or VR gaming. The AI-powered tool should provide real-time video perception analysis, early deterioration warning, problem location, as well as fast service recovery.

Service intelligence will play an even more critical role when telecom operators enter other vertical industries as 5G networks enable more advanced services. These may include autonomous manufacturing, remote healthcare, IoT applications such as autonomous vehicles and smart cities, next generation retail, energy, logistics, transport, and many more. For example, to meet different industrial requirements in a smart factory, different network slices are selected to deliver the intelligent analysis and optimisation service. Such services will generate higher returns but will also come with higher demands. In turn, if the system fails to guarantee service stability or to address faults fast enough, the cost to telecom operators will be higher.

User insight should be focused on user experience.

By analysing the user experience on existing networks, including users’ personalised requirements, the tools should be able to generate intelligence that not only increases the loyalty of current users but also attracts new ones, for example by building a perception system for each user and each service to accurately evaluate and guarantee the user experience.

More importantly, the AI-powered tool should be able to develop user profiles around defined attributes (e.g. high-value customers, data-heavy users, or users of particular apps). These detailed user profiles can guide personalised smart marketing. For example, the tool should help identify potential high-value customers and offer them customised services.
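For illustration only, a minimal Python sketch of this kind of attribute-based profiling is below; the attributes and thresholds are hypothetical and are not taken from any operator’s or vendor’s actual profile system.

```python
# Hypothetical subscriber records and profile tags for targeted offers.
subscribers = [
    {"id": 1, "monthly_spend": 85.0, "data_gb": 120, "apps": {"vr_gaming"}},
    {"id": 2, "monthly_spend": 12.0, "data_gb": 3, "apps": {"messaging"}},
]

def profile(sub):
    """Tag a subscriber with illustrative profile attributes."""
    tags = set()
    if sub["monthly_spend"] > 50:
        tags.add("high_value")
    if sub["data_gb"] > 50:
        tags.add("data_heavy")
    if "vr_gaming" in sub["apps"]:
        tags.add("vr_gamer")            # e.g. a candidate for a 5G eMBB upsell
    return tags

for sub in subscribers:
    print(sub["id"], sorted(profile(sub)))
```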

ZTE has launched the ideal tools to address these challenges.

The AI Insight, Value Operation (AIVO) solution, based on ZTE’s VMAX big data platform, has lived up to its mission for customers. The solution has been developed using ZTE’s capabilities to evaluate over 10,000 different services, as well as a repertoire of over 9,000 user profile attributes across 16 industries. It provides full-process support for network planning, construction, maintenance, optimisation and operation.

In real life, the AIVO solution has demonstrated efficiency improvements in system deployment, management, and operation. Its one-click automatic output for network planning can improve efficiency by at least 60% compared with the conventional planning mode. The solution can also reduce drive test time by more than 50% and work order dispatches by 45%, as well as improving site visit efficiency by more than 25%.

When used for problem locating, AIVO has achieved an 80% accuracy rate and has reduced the time to locate a problem to within 30 minutes. As for marketing support, it enables personalised precision marketing by building a user profile system, locking onto target users and pushing marketing information. The benefit of this marketing support is reflected in a 40-fold increase in customers’ subscription conversion rates.

In addition to its powerful data gathering and analytics capabilities, the AIVO solution is also able to present the data and analytics in an intuitive, visualised way, from network traffic to customer complaints and everything in between, so that immediate action can be taken when needed.

We are, therefore, confident that the AIVO solution is an ideal answer to telecom operators’ challenges when they embark on 5G commercialisation, both for efficiency improvement and for generating new revenue and profit through strong customer engagement.

Creating a competitor will only help us – Huawei CEO

In the latest edition of ‘A coffee with Ren’ the Huawei founder covered a wide range of topics from data protection to 6G, but perhaps the most important area was the licensing idea which has been floated.

It is an interesting thought. Huawei founder Ren Zhengfei is prepared to license the technology which has fuelled the vendor’s drive towards the top of the connectivity ecosystem, in order to create a competitor. And not just any competitor, but one from the US, the very country which is driving the misery and headaches in Shenzhen.

For some, actively creating a competitor might be considered somewhat of a risk, but this is not how Ren sees things.

“First, we will get a lot of money from the licensing,” said Ren. “That will be like adding firewood to fuel our innovation on new technologies. It will mean that we will have a better chance of maintaining our leading position.

“Second, we will bring in a strong competitor. This will prevent our 190,000 employees from becoming complacent. They’ll know that if they sleep on the job, they might wake up and find they have lost their jobs.

“Sheep become stronger when they are chased by wolves. I don’t worry that a strong competitor will emerge and drag Huawei down. In fact, I would be happy to see that, because this would mean that the world is becoming stronger.”

This might sound like a corporation putting a brave face on an uncomfortable situation, but there is some logic to it.

Ren has suggested the new competitor should probably be a US firm, as Europe already has its own vendors in this space. This presents a very interesting opportunity for Huawei. Presumably, a US vendor would have an excellent opportunity to secure valuable contracts with US telcos. If you look at vendors’ activities in their own domestic markets, they are generally very successful.

Should this presumption prove accurate, Huawei won’t be making money directly from the US market, but through license fees, it will secure indirect revenue. The more successful this company is, the more revenue Huawei can realise through licensing.

The US is an incredibly large and lucrative market for network infrastructure vendors and Huawei has been almost non-existent to date. It might have secured contracts with some of the regional telcos, but these are not the riches which are promised in the ‘Land of the Free’. Huawei will be making money somewhere it has never really made money before. Suddenly, the licensing plan starts to look like an understated but clever move.

The technology will be licensed to the exclusive partner on fair, reasonable and non-discriminatory (FRAND) terms, with the team offering everything associated with 5G. That means software source code, hardware designs, production technologies, network planning, optimisation and testing solutions, as well as chip design technology.

Although the company which undertakes this license will go toe-to-toe with Huawei on a technology basis, it will also have to prove it can support customers in the same way.

Huawei’s success in recent years isn’t simply down to the technology. CTOs and network executives have noted to us that the support offered to customers post-sale sets the vendor apart, while the team is more open than most to considering customisable solutions that meet the unique demands of each customer. This attention to detail is one of the reasons Huawei is perhaps considered the leader in the market.

Overall, this is of course a way to ease the tension between the White House and Huawei. We suspect this will not have much of an impact on the overarching trade war between the two global superpowers; however, that is of little concern to Huawei. This is a commercial organisation. It matters little if there is political conflict overhead, just as long as the company is not drawn into the saga.

The big question which remains is whether this will appease the aggression of the US.

The attraction of gaining more traction in the network infrastructure space might well be a tempting enough offer to defuse the aggression. The US is a country which wants to control the 5G ecosystem, after all, as does pretty much every other country. This is perhaps one of the contributors to the tension between the US and China.

As Ren pointed out during the coffee session, the saga needs to be resolved before more powerful technologies come to be discussed in wider society.

“5G is not that amazing; its power is exaggerated by politicians,” said Ren. “AI will have an even brighter future. I hope we will not be added to the Entity List again in the AI era.”

Huawei is not the biggest and best software company around (just yet), therefore we cannot see the company taking the lead in the AI era. Its heritage and excellence primarily lie in hardware; however, there is a risk should the tension between the two global superpowers remain at a stalemate.

Google updates Android for entry-level smartphones to be faster and safer

Google has announced a new version of Android Go, claiming the updated operating system for entry-level smartphones will run apps faster and support new data encryption technology.

The update will take the stripped-down OS to Android 10 (Go Edition). The company claimed the new version will load apps 10% faster than the current iteration, Android 9 (Go Edition).

Android Go was first introduced on top of Android Oreo (Android 8) at the end of 2017. At that time it targeted entry-level Android phones with 1 GB of RAM or less. Although the threshold has risen to 1.5 GB of RAM for the latest version, the memory demand is still much lower than that of mid-range or high-end phones. As a comparison, the Pixel 3, Google’s own signature smartphone, has 4 GB of RAM, while Samsung’s Galaxy S10 and Huawei’s P30 both have 8 GB.

The new version of the OS can also run native encryption software, called Adiantum, which Google launched at the beginning of the year. Google said running Adiantum on Android 10 (Go Edition) will not affect device performance. Unlike earlier data encryption tools, Adiantum does not need specialist hardware, so all entry-level Android phones can offer a similar experience.

A number of Google services are also optimised for Android 10 (Go Edition). The “Read-out-loud” feature, first introduced in 2018, has been updated. So has Lens for Google Go, which can read out text on signs and in pictures, supported by on-device AI.

There is also the new Gallery Go, which is essentially a lightweight version of Google Photos. The software is only 10 MB in size and can use on-device AI to help users sort and arrange pictures.

Thanks to its business model, Google has little control over the user experience on the plethora of devices launched by hundreds of Android OEMs, most of which modify the OS one way or another. It has tried to rein in the fragmentation, especially at the low end. Sundar Pichai, the current Google CEO, led the Android One programme, a version of Android that cannot be customised by OEMs. It was initially designed for entry-level Android phones in the Indian market, but has since been expanded to mid- to high-end products. Only a handful of OEMs, primarily Xiaomi, Nokia (HMD), and Motorola, have taken it seriously though.

Google believes the Android Go momentum is stronger and the appeal is broader. It said over 1,600 device models from more than 500 OEMs have been launched with Android Go over the last 18 months. These have made up over 80% of the total entry-level Android phone market, with wholesale prices ranging from $27 to $77.


Huawei pledges $1.5 billion to its new developer program

Huawei has announced that it will invest $1.5 billion over the next five years to boost its developer ecosystem for the Kunpeng and Ascend computing platforms.

The announcement was made at the 2019 edition of the Chinese vendor’s annual Huawei Connect event in Shanghai, where its Developer Program 2.0 was unveiled and SDKs were released alongside it. According to Patrick Zhang, CTO of Cloud & AI Products & Services at Huawei, the new program will cover five key areas:

  • Building an open computing industry ecosystem based on Kunpeng + Ascend computing processors
  • Establishing an all-round enablement system
  • Promoting the development of industry standards, specifications, demonstration sites, and technical certification system
  • Building industry-specific application ecosystems and region-specific industry ecosystems
  • Sharing Kunpeng and Ascend computing power, making it available to every developer

The focus areas relate to cloud computing and artificial intelligence. The applications and services the ecosystem aims to support are at server level, either in the centralised cloud or at the edge. To enable the ecosystem’s development, Huawei also published the Kunpeng Developer Kit and the ModelArts 2.0 AI development platform.

Although the x86 architecture still dominates the server market, ARM has worked to break the monopoly, and Huawei is one of ARM’s leading licensees. Earlier this year Huawei released the Kunpeng 920, its CPU based on the ARMv8 design. Huawei aims to expand its share of the server market on the strength of Kunpeng’s computing power, which it claims is superior, most likely starting with the market in China.

But Huawei’s ambitions go way beyond moving more boxes. Its cloud service has been promoted for its strong AI capability, supported by the Ascend AI chips. The Ascend 910, the latest version, which the company claims is the world’s most powerful AI processor, was released in August.

By enriching its ecosystems, Huawei hopes it will be able to deliver a full suite of solutions, including supporting the digital transformation undertaken by increasing numbers of telecom operators.

This is the second iteration of Huawei’s Developer Program. The Developer Program 1.0 was launched in 2015.