Making Sense of the Telco Cloud

In recent years the cloudification of communication networks, or the “telco cloud”, has become a byword for telecom modernisation. This Telecoms.com Intelligence Monthly Briefing aims to analyse what telcos’ transition to cloud means for the stakeholders in the telecom and cloud ecosystems. Before exploring the nooks and crannies of telco cloud, however, it is worthwhile first taking an elevated view of cloud native in general. On one hand, telco cloud is a subset of the overall cloud native landscape; on the other, telco cloud almost sounds like an oxymoron. Telecom operators’ monolithic networks and cloud architecture are often seen as two different species, but such impressions are wrong.

(Here we are sharing the opening section of this Telecoms.com Intelligence special briefing, which looks into how telco cloud is changing both the industry landscape and operator strategies.

The full version of the report is available for free to download here.)

What cloud native is, and why we need it

“Cloud native” has been a buzzword for a couple of years, though often, as with many other buzzwords, different people mean different things when they use the same term. As the authors of a recently published Microsoft ebook quipped, ask ten colleagues to define cloud native, and there’s a good chance you’ll get eight different answers. (Rob Vettor, Steve “ardalis” Smith: Architecting Cloud Native .NET Applications for Azure, preview edition, April 2020)

Here are a couple of “cloud native” definitions that more or less agree with each other, though with different emphases.

The Cloud Native Computing Foundation (CNCF), an industry organisation with over 500 member organisations from different sectors of the industry, defines cloud native as “computing (that) uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization.”

Gabriel Brown, an analyst from Heavy Reading, has a largely similar definition for cloud native, though he puts it more succinctly. For him, cloud native means “containerized micro-services deployed on bare metal and managed by Kubernetes”, the de facto standard of container management.
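To make the orchestration idea concrete, below is a minimal sketch (not taken from any of the definitions quoted here) that uses the official Kubernetes Python client to scale a containerised microservice up or down; the deployment name “session-manager” and the namespace “telco-core” are illustrative assumptions.

from kubernetes import client, config

def scale_microservice(name: str, namespace: str, replicas: int) -> None:
    """Scale a containerised microservice by patching its Deployment via the Kubernetes API."""
    config.load_kube_config()  # read the local kubeconfig to reach the cluster
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    # Hypothetical example: scale a session-management microservice to five replicas.
    scale_microservice("session-manager", "telco-core", replicas=5)

In a production cluster this kind of adjustment would normally be driven automatically, for example by an autoscaler reacting to load, which is what “dynamically orchestrating those containers to optimize resource utilization” amounts to in practice.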

Although cloud native has a strong inclination towards containers, or containerised services, it is not just about containers. An important element of cloud native computing is its deployment mode using DevOps. This is duly stressed by Omdia, a research firm, which describes cloud native as follows: “the first foundation is to use agile methodologies in development, building on this with DevOps adoption across IT and, ideally, in the organization as well, and using microservices software architecture, with deployment on the cloud (wherever it is, on-premises or public).”

Some would argue the continuous nature of DevOps is as important to cloud native as the infrastructure and containerised services. Red Hat, an IBM subsidiary and one of the leading cloud native vendors and champions for DevOps practices, sees cloud native in a number of common themes including “heavily virtualized, software-defined, highly resilient infrastructure, allowing telcos to add services more quickly and centrally manage their resources.”

These themes are aligned with the understanding of cloud native by Telecoms.com Intelligence, and this report will discuss cloud native and telco cloud along these lines. (A full Q&A with Azhar Sayeed, Chief Architect, Service Provider at Red Hat can be found at the end of this report).

The main benefits of cloud native computing are speed, agility, and scalability. As CNCF spells it out, “cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”

To adapt such thinking to the telecom industry, the gains from migrating to cloud native are primarily a reflection of, and driven by, the increasing convergence between the network and IT domains. The first candidate domain that cloud technology can vastly improve, and to a certain degree replace the heavy infrastructure of, is the telcos’ own IT systems, including the network-facing Operational Support Systems and customer-facing Business Support Systems (OSS and BSS).

But the IT cloud alone is far from the full benefit telcos can draw from the migration to cloud native. The rest of this report will discuss how telcos can and do embark on the journey to cloud native, as a means to deliver true business benefits through improved speed, agility, and scalability to their own networks and their customers.

The rest of the report includes these sections:

  • The many stratifications of telco cloud
  • Clouds gathering on telcos
  • What we can expect to see on the telco cloud skyline
  • Telco cloud openness leads to agility and savings — Q&A with Azhar Sayeed, Chief Architect, Service Provider, Red Hat
  • Additional Resources

The full version of the report is available for free to download here.

US consumers don’t feel there are benefits to data-sharing economy

Only 7.6% of US consumers feel they benefit from the behavioural data collected through user tracking, as research demonstrates pessimism towards the digital economy.

The reason companies want to track existing or potential customers, while also collecting insight on these individuals, is simple; it is easier to sell goods and services to someone you know more about. But in order to get something for free, you have to offer a benefit in return. This equation does not seem to be balanced currently.

Research from AI firm Cujo suggests 64.2% of the surveyed consumers do not believe tracking is beneficial to the user, while only 28.2% said it could be. A meagre 7.6% believe they get the benefits of tracking.

If users do not see the benefits of tracking and personalisation, there will be resistance and push-back against these practices. Data and insight are being touted as a central cog of new business models, but these strategies will fail if the consumer is not brought along on the same mission.

Sentiment is clearly moving against data collection, so much so that 61.9% of respondents to the survey would be happy to be tracked less even if personalization was affected.

The question is: what service is being provided in return for tracking users and collecting data?

Google clearly tracks users, though the benefits emerge in several different ways. For example, more accurate results are shown when using the search engine, or more favourable restaurants are shown on the mapping service. This is a benefit for the user, while also making Google money.

Netflix is another example where the benefits are clear. The recommendation engine will help customers navigate through the extensive back catalogue, theoretically, while understanding consumer behaviour will also inform decisions on what content is created in the future.

These are logical applications of data insight, something the user can see benefits from even if they might not appreciate them. The larger issue, however, is with the majority of companies that collect data with no obvious reason why, and no clear sign of where the benefits lie.

For the most part, this might be viewed as a security risk, an unnecessary ‘transaction’ to make, and considering the security credentials of the majority, the consumer is right not to place trust in these organisations.

China deliberates privacy law in the midst of increased state surveillance

China’s parliament has said it will legislate on privacy protection, while the state has vastly increased surveillance since the outbreak of COVID-19.

The National People’s Congress, China’s highest legislature, is back in session after being delayed by two months by COVID-19. In his work report, the Chairman of the People’s Congress’s Standing Committee singled out three pieces of legislation related to state security and social control as priority tasks in the immediate future. Privacy protection is one of them, the other two being laws on data security and biosecurity, according to reporting by People’s Daily, one of China’s main propaganda outlets.

This does not come as a complete surprise. At the end of last year, the People’s Congress announced at a press conference that a comprehensive privacy law would go through the legislative process in 2020. So far China’s privacy protection legislation is dispersed across different criminal, civil, and commercial laws, and it often relies on the interpretation of judges when it comes to specific litigation. This gives those organisations, businesses, and individuals that have almost unbridled access to personal and private data a largely free hand to determine how to use it. A group of consumers in China actually lost their case against Amazon when their private data on the e-commerce giant’s China site was compromised, which led to their losing large amounts of money to phishing schemes.

Tencent and Alibaba have deployed facial recognition solutions at retail outlets where users of their online payment systems can pay for their purchases by looking at the camera at the check-out point. It is true that such solutions are both convenient and add fun to the shopping experience, and it may also be true that attitudes towards privacy in China are different from those in Europe. “In China, and across Asia, data is not seen as something to be locked down, it’s something that can be used,” according to a Hong Kong-based lawyer.

More recently, while the country was combating COVID-19, various tracing applications have been developed and deployed using personal data including name, date of birth, physical address, ID number, geolocation records, and the like. Some of these apps have been jointly developed by commercial entities, public authorities, and law enforcement agencies. Some people have raised concerns about who should keep such sensitive data, and for how long, once the emergency is over.

Probably more important is the scope of application of the impending law. The discussion in China’s official media is all about how to protect private data from being misused or abused by businesses, in particular the internet companies that have both access to the data and the technologies to benefit from it. It cannot help but give the impression that the law is designed primarily to keep big businesses in check, without tying the government’s hands.

While the state legislature has announced that the new law is being codified, China has vastly increased surveillance over its people, especially during the COVID-19 pandemic. Reuters reported that the country has seen “hundreds of millions of cameras in public places” being set up in cities and villages, as well as “increasing use of techniques such as smartphone monitoring and facial recognition.” The authorities have successfully located people infected by COVID-19 with surveillance images and facial recognition technologies, state media reported.

However, despite all the talk of AI, big data, and facial recognition, surveillance in China is still largely done by human beings constantly watching surveillance camera footage on screens and smartphones, which does not come cheap. In one village in Hubei, the province where COVID-19 first started, 4,400 cameras were installed at a cost of $5.6 million, according to the Reuters report.


Telecoms.com Daily Poll:

Should privacy rules be re-evaluated in light of a new type of society?


Huawei intelligent IP networks, accelerating intelligent connectivity

SHENZHEN, China – During Huawei Global Analyst Summit 2020, Huawei’s “Leading Intelligent IP Networks, Accelerating the Transformation Towards Intelligent Connectivity” summit was successfully held. This summit shed light on three typical characteristics of intelligent IP networks: super capacity, intelligent experience, and autonomous driving. Besides this, Huawei shared its numerous success stories of intelligent IP networks across industries, signifying the data communications industry’s arrival in the intelligent IP network era.

As 5G, cloud, and AI pick up pace among enterprises of all sizes, enterprises, amid their pursuit of digital transformation, are confronting once-in-a-generation challenges, such as collaboration between hundreds of billions of production and office terminals, 100% migration of enterprise services to the cloud, and a 97% AI adoption rate. As a decisive part of enterprises’ digital transformation, IP networks are also encountering a wide range of issues typified by insufficient bandwidth, poor service experience, and low efficiency of network O&M and troubleshooting. Intelligent IP networks are the key to conquering such issues. To better understand what kind of network can be called an intelligent IP network, Huawei took the lead by defining three typical characteristics of such a network:

  1. Super capacity: IP networks achieve a future-proof shift from 100GE to 400GE and from Wi-Fi 5 to Wi-Fi 6, and transform towards intelligent IP networks, boosting bandwidth resources. In addition, such future-oriented networks adopt slice-based bandwidth isolation, implementing flexible bandwidth adjustment.
  2. Intelligent experience: Intelligent IP networks stand out with intelligent identification of service types, service intent inference, and flexible, real-time network resource adjustment upon cloud changes. These highlights deliver always-on network connectivity experience.
  3. Autonomous driving: Intelligent IP networks can be automatically deployed, achieving rapid adjustment of services. In addition, they can perform automatic, AI-powered fault rectification, implementing proactive O&M and ensuring high network availability.

Kevin Hu, President of Huawei’s Data Communication Product Line, delivering a keynote at the summit.

Kevin Hu, President of Huawei’s Data Communication Product Line, said: “2020 is the first year for commercial use of intelligent IP networks. The entire industry has witnessed an historic shift of IP networks from Internet IP in the World Wide Web era to video-driven All IP, and is now on the way to intelligent IP oriented at the 5G and cloud era. Looking ahead, Huawei will keep innovating and continuously, proactively increasing investment in super capacity, intelligent experience, and autonomous driving to build end-to-end (E2E) intelligent IP networks for customers.”

Huawei’s innovative intelligent IP network solution achieves a future-proof integration of the three characteristics, and has embraced wide applications in various scenarios, such as campus network, data center network, and wide area network (WAN) scenarios. Specifically, this feature-rich solution is perfectly suited to building high-quality campus networks. It adopts Huawei’s industry-leading AirEngine Wi-Fi 6 that stands out for exclusive 16T16R smart antennas, delivering up to 1.6 Gbps single-user performance (20% higher than the industry average). Another highlight of Huawei’s AirEngine Wi-Fi 6 lies in AI-powered intelligent radio calibration that improves the average downlink rate of stations (STAs) by more than 50%. The solution also employs an AI-powered intelligent O&M system that slashes the mean time to repair (MTTR) from four hours to as short as just 10 minutes. These differentiators significantly optimize user experience, helping build future-proof, fully-wireless, and intelligent campus networks in an extensive range of scenarios, such as Huawei’s super-large campus serving 194,000 employees, and the digital warehouse of SONGMICS — the largest home necessity seller on Amazon in Germany.

The solution also performs well in the data center network domain. It adopts Huawei’s innovative iLossless algorithm that ensures zero packet loss on the Ethernet, thereby improving data computing efficiency by 27% and data storage efficiency by 30% compared with the industry average. The solution also achieves AI-powered intelligent O&M, which can remediate a typical fault in just 9 minutes — fault detection in 1 minute, fault locating in 3 minutes, and fault rectification in 5 minutes. Such superb performance has attracted more than 40 Internet service providers (ISPs) and financial service customers, such as China Merchants Bank, China CITIC Bank, and People’s Insurance Co. (Group) of China Ltd. (PICC).

Besides the campus network and data center network domains, this solution is also highly suited to the WAN domain for its industry-leading FlexE-based slicing that provides 100% bandwidth assurance, achieving 5 times higher slicing granularity than the industry average. In addition, this feature-rich solution uses IPv6+ to select the optimal path based on the service intent, ensuring committed latency for key services. As such, this solution achieves superb transmission of key services and has been widely applied in multiple scenarios, such as China Mobile (smart grid services), Agricultural Bank of China, and China Unicom Beijing branch (services for the Beijing Daxing International Airport).

Capitalizing on more than 20 years of expertise in the IP network domain, Huawei keeps on building highly competitive intelligent IP network products and solutions, as well as providing smooth, continuous services for carriers and customers in the financial services, government, transportation, and energy sectors in more than 100 countries and regions. Looking forward, Huawei’s Data Communication Product Line will collaborate with more customers in innovative design and in-depth service cooperation to help more customers achieve digital transformation so as to better embrace the “5G + cloud + AI” era and build intelligent IP networks with continuous leadership.

Finland joins the quest for quantum computing strengths

The Technical Research Centre of Finland is going to build the country’s first quantum computer, joining a growing European contingent competing at the forefront of next-generation computing technology.

VTT, Finland’s state-owned Technical Research Centre (Teknologian tutkimuskeskus VTT Oy) announced that it will design and build the country’s first quantum computer, in partnership with “progressive Finnish companies from a variety of sectors”, aiming to “bolster Finland’s and Europe’s competitiveness” in this cutting-edge technology.

“In the future, we’ll encounter challenges that cannot be met using current methods. Quantum computing will play an important role in solving these kinds of problems,” said Antti Vasara, CEO of VTT. Referring to the country’s challenge of post-COVID-19 recovery, Vasara said “it’s now even more important than ever to make investments in innovation and future technologies that will create demand for Finnish companies’ products and services.”

The multi-year project, with a total cost estimated at about €20-25 million, will run in phases. The first checkpoint will be about a year from now, when VTT aims to “get a minimum five-qubit quantum computer in working order”, it said in the press release. The qubit, or “quantum bit”, is the basic information unit in quantum computing, analogous to the binary digit, or “bit”, in classical computing.
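For readers unfamiliar with the formalism (standard textbook material, not part of VTT’s announcement), the key difference from a classical bit is that a qubit can occupy a superposition of both basis states:

\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]

where α and β are complex amplitudes. A register of n qubits spans a superposition over 2^n basis states, which is why even a five-qubit machine is a meaningful research platform, while the 53-qubit system mentioned below already strains classical simulation.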

In all fairness, this is a modest target on a modest budget. To put the 5-qubit target into perspective, by late last year Google claimed that its quantum computer had achieved 53-qubit computing power. It could perform a task in 200 seconds that would take Summit, one of IBM’s supercomputers, 2.5 days by IBM’s own admission. At the time of writing, VTT had not responded to Telecoms.com’s question on the project’s ultimate target.

When it comes to budget, the VTT amount is easily dwarfed by the more ambitious projects. The most advanced quantum computers in the world are developed and run by the leading American technology companies and academic institutions, for example MIT, IBM, and Google, but other parts of the world are quickly building their own facilities, including businesses and universities in Japan, India, China, and Europe. One recent high-profile case is IBM’s decision to build Europe’s first commercial quantum computer with Fraunhofer, Germany’s state-backed research institute, near Stuttgart.

In addition to getting closer to, and better serving, the European markets in the future, IBM’s decision to build a quantum computer in Europe also has to do with GDPR requirements. While European businesses can use IBM’s quantum computer located in the US through the cloud, they may hesitate when sending user data outside of the EU. The Fraunhofer project has been personally endorsed by Angela Merkel, the German Chancellor. The federal government has pledged €650 million of investment in quantum computing, though not for the Fraunhofer project alone.

When it comes to quantum computing applications in the communications industry, there are at least two areas where it can have a strong impact. The first is security: quantum computing will enable new modes of cryptography. The second is new materials. Daimler, the carmaker, has already used IBM’s quantum computers to design new batteries for its electric cars by simulating the complex molecule-level chemistry inside the battery cells. On top of batteries, another research topic in new materials in the communications industry is finding a replacement for silicon as a semiconductor at extremely high radio frequencies.

Despite its modest scope, the VTT undertaking is significant. Not only does it give Finland the right to boast of being the first Nordic country to build its own quantum computer, the success of the project would “provide Finland with an exceptional level of capabilities in both research and technology”. Faced with the worst economic crisis since the collapse of the Soviet Union, the Nordic nation is looking to technology breakthroughs for sustainable revival and long-term competitiveness. The quantum computing capability delivered by this project, limited as it is in scope, may not chase supremacy, but it should at least give Finland the table stakes.

Artificial Intelligence for Networks: Understanding It Through ETSI ENI Use Cases and Architecture

On 17 April, ETSI officials from the Experiential Network Intelligence group (ISG ENI) gave a webinar entitled Artificial Intelligence for networks: understanding it through ETSI ENI use cases. This webinar attracted more than 150 online attendees including operators, vendors, research institutions, and international standards development organizations.

The first speaker, Dr. Luca Pesando of TIM, Vice Chair of the ETSI ENI ISG, introduced the scope of the group, its membership and architecture, and Dr. Yue Wang of Samsung, Secretary of the group, gave some insight into selected ENI Use Cases. They highlighted that ENI is meant to be a flexible, general-purpose AI engine able to interface with multiple types of Assisted System by means of open interfaces and APIs. Assisted Systems from multiple standards bodies (e.g. 3GPP, IETF, MEF, ITU, Broadband Forum) can be interfaced, controlling access, transport, and core technologies from the infrastructure to the service layer of network operation and management, and creating AI-based automation loops.

This webinar is available on the Brighttalk website.

This webinar will be followed on 6 May at 5pm CEST by a second webinar entitled ETSI ENI Architecture: AI for robust and manageable systems and applications.

You can register via the Brighttalk website.

The ETSI Industry Specification Group Experiential Network Intelligence, created in February 2017, focuses on network intelligence and now comprises 60 organizations. ENI identified viable Use Cases and consequently derived the main functionalities the ENI Engine has to provide. Five categories of Use Cases have been identified: Infrastructure Management; Network Assurance; Network Operation; Service Orchestration and Management; Network Security.

The ENI Engine aims to provide an easy mode of user interaction, using human-like language to express the intent of what the user wants, leaving the network with the task of translating it into policies and working out how to realize it. Evolution of the architecture is increasing the possibility for the ENI architecture to be applied to multiple Use Cases as well as increasing security by design. ISG ENI is working closely with the technologies defined by other ETSI groups including Fifth Generation Fixed Network (F5G), IPv6 integration (IP6), Multi-access Edge Computing (MEC), Network Function Virtualization (NFV), Secure AI (SAI) and Zero touch network and Service Management (ZSM). More information on ENI can be found on the ETSI website.

 


Consensus on 6G is gradually forming

Participants at the virtual 6G Wireless Summit shared their thinking on what 6G can do and what research is needed to get the underlying technologies in place.

The 6G Wireless Summit 2020 would have kicked off in Finnish Lapland this morning. Instead, the organisers have moved it online. Except for the lack of face-to-face conversations, the virtual event is a competent substitute. This may not be the first time that speakers have needed to record their presentations, considering companies had already been pulling out of other events over recent weeks. By the time the Summit was scheduled to start, most of the keynote speeches and presentations in the technical streams had been made available online.

A year ago, when Team Finland introduced its 6G Flagship programme (then called 6Genesis) at Mobile World Congress 2019, what 6G was about was almost a blank slate. Twelve months and 800 peer-reviewed papers later, the direction of 6G is much clearer and the vision is increasingly shared by industry experts and their academic partners.

Having watched six of the seven keynotes (Huawei’s speech had yet to be made available at the time of writing), we can see a clear convergence between the speakers’ views on both what 6G is expected to do and where research investment should be made to make those expectations come true.

Even their 6G vision taglines could look rather similar. For example, Harish Viswanathan, Head of Radio Systems Research Group at Nokia Bell Labs, believed 6G will “unify the experience across physical, digital and biological worlds”, while Dr. Fang Min, Director of 6G Research & Collaboration in the ZTE’s Wireless Division, saw 6G “integrating the physical and digital world”.

The leading use cases expected for 6G are shared by most speakers. For instance, they all foresaw vastly increased interaction between humans and intelligent machines. Both ZTE’s Dr. Fang and Ericsson’s Dr. Mikael Prytz, Head of Research Area Networks, called it the “Internet of Senses”. This includes both enhanced brain-computer interaction and, in the words of Nokia Bell Labs’ Viswanathan, in-body monitoring.

Another key use case referred to by the speakers is what Ericsson’s Prytz called Connected Intelligence, or what ZTE’s Fang called the Internet of AI, meaning AIs interacting with each other and intelligent machines serving other intelligent machines. Such a scenario will have strong implications for network designs, which are currently constrained by the limits of human senses.

With 6G poised to operate on much higher frequencies than 5G (for example, the FCC has granted spectrum above 95 GHz for experimental use), the shorter wavelengths will allow for higher localisation accuracy, possibly down to centimetre-level positioning. One outcome of such precision will be full digital representations of the physical world, or “digital twins”, by also fusing data from other sources including network data. Network operators will also be able to generate interconnected and collaborative digital twins, and digital representations of larger objects and their environment. Nokia Bell Labs demonstrated a digital twin of a New Jersey street built with drone-captured high-resolution data for wireless network optimisation, for example accurate signal propagation prediction.
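As a rough, back-of-the-envelope illustration of why higher frequencies translate into finer positioning (this calculation is ours, not a figure from the summit), the free-space wavelength λ = c/f shrinks to a few millimetres in the bands being discussed; the 3.5 GHz and 28 GHz entries are typical 5G bands added here purely for comparison.

# Free-space wavelength: lambda = c / f.
# 3.5 GHz and 28 GHz are typical 5G bands; 95 GHz is the lower edge of the
# FCC's experimental allocation; 140 and 180 GHz bracket the range this
# article refers to as the D-Band.
C = 299_792_458  # speed of light in m/s

for f_ghz in (3.5, 28, 95, 140, 180):
    wavelength_mm = C / (f_ghz * 1e9) * 1e3
    print(f"{f_ghz:6.1f} GHz -> wavelength ~ {wavelength_mm:5.1f} mm")

At 140 GHz the wavelength is roughly 2 mm, against about 86 mm at 3.5 GHz, which is what makes centimetre-level, and eventually finer, positioning plausible.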

These use cases need to be supported by new, advanced underlying technologies that will provide guidelines for research in the discipline in the coming years. New spectrum technologies are highlighted by all speakers as one such domain. This includes both radio technology on the so-called D-Band (140-180GHz) and above, and progress in material sciences. Bell Labs’ Viswanathan pointed out that transceiver design for such radio frequencies will be more sophisticated, and may need to use glass interposers instead of silicon. ZTE also sees “Beyond Silicon” as one of the leading 6G challenges.

Network architecture is another key technology area that needs to advance in the run-up to 6G. One such advancement is what Nokia Bell Labs’ Viswanathan sees in the trend of RAN-Core convergence. This is primarily driven by the need to move the core closer to the RAN for low-latency services, as well as to make the RAN more centralised towards the cloud. A related trend highlighted by Viswanathan is the demand for hyper-specialised slicing. He believes that network slicing should move from resource reservation in 5G to providing separate software stacks and functions by using different micro-services.

Both ZTE’s Fang and InterDigital’s Alain Abdel-Majid Mourad, Director of Engineering R&D, stressed the importance of, and demand for, innovation to meet 6G’s new KPIs. Network security in 6G is also highlighted. While Nokia Bell Labs’ Viswanathan saw in 6G a “sixth sense”, for example using real-time AI analytics of sensor data, Ericsson’s Prytz believed that a holistic solution of hardware-based security, trusted computing, and secure enclaves will form the basis of future computing networks.

When it comes to timing, the speakers agreed that 6G will start commercialisation around 2030. ZTE believed 3GPP will start more concrete 6G specification work in R22, which the company expects to see in 2029. See the chart below for ZTE’s detailed prediction of the timeline from 5G to Beyond 5G (B5G) and 6G.

In general, the speakers at the Summit appear to have much more in common in their views of what 6G will look like than a year ago, as well as sharing an understanding of what the key research areas will be in the years to come. While there is no guarantee these predictions will turn out to be correct, Nokia Bell Labs’ Viswanathan put it best when he said, “We have 10 years to be proved wrong, and now can have fun predicting the future.”

Source: 6G Wireless Summit 2020, ZTE Keynote

AI and edge computing replace the Pilgrims in the new Mayflower

IBM’s AI and edge computing technologies are going to guide a crewless boat to chart the same route the Pilgrims did 400 years ago.

I was at an IBM analyst event when I met Don Scott, CTO of Marine Ai, a venture that is working on an autonomous boat, named “Mayflower”, that will sail from Plymouth, England, where Marine Ai is based, to Plymouth, Massachusetts in September this year, 400 years after the original ship carried the Pilgrims across the Atlantic Ocean.

My first question was why IBM, considering companies like Google would probably have more expertise in autonomous driving. The problem with Google seems to be two-fold. On one hand, Google demands that all new “knowledge” developed from its AI tools should be owned by Google. On the other hand, Google’s AI tools are not transparent enough to satisfy the maritime regulators.

By contrast, Scott said, IBM responded to his request with enthusiasm. In addition to reversing Google’s position on those two pain points, IBM is helping develop the boat’s control system on its Power System servers. Meanwhile, other partners in the project, including the University of Plymouth, one of the world’s leading research institutes for marine science, and the non-profit organisation ProMare, are training IBM’s PowerAI engine with real data from the ocean, for example to recognise other ships, whales, and floating debris.

The boat will be equipped with an edge computing module that uses the output of the AI engine to make onboard decisions, similar to the way autonomous cars do on the road. What is different is that, while autonomous cars are typically always online (it is one of the leading use cases for 5G, for example), connectivity to the internet when the boat sails out to sea will be sporadic at best. It will use some satellite communication, but the majority of the computing will be done “on the edge”.

The motor power of the boat, which is made of aluminium and composite materials and measures 15 metres by 6 metres, will come from onboard batteries charged with solar power and a back-up biofuel generator. When I asked him what the boat can do in addition to charting ocean geography, Scott said the first mission would include measuring the level of microplastics in the sea, which has increasingly become a big concern for those of us who worry about the environment. In the future, similar sea vessels may even be used to clean the ocean.

I was fully aware that Marine Ai was present at the event because it is a showcase for IBM technologies. However, I could not deny that the project fascinated me in its own right. If edge computing and AI, as well as cloud computing and satellite communication, are pushing the boundaries of what they can do, this should count as one such case.

Exfo uses AI to reassure 5G operators

Testing and service assurance vendor Exfo has launched some new cleverness designed to take the stress out of managing a 5G network.

In case nobody told you, 5G is a lot more complicated than any of the previous Gs, so much so that it’s just too much for mere people to get their heads around. That’s where artificial intelligence comes to the rescue, with its omniscience and ability to learn on the job. Exfo reckoned it was about time its service assurance platform made the most of AI, so it has launched Nova Adaptive Service Assurance (Nova A|SA).

The cleverest bit of it seems to be Nova SensAI (possibly a play on the word ‘Sensei’), which Exfo describes as its central nervous system. As you may have guessed, it’s all about using AI and machine learning to analyse the many layers of the network and offer a good view of them. Exfo claims it will uncover network issues no other equivalent platform can, possibly even before they’ve happened.

“The combination of more users, more connections, more apps and more convoluted networks has created a perfect storm of complexity for operators,” said Philippe Morin, EXFO CEO. “By delivering only the right data at the right time, Nova A|SA is a unique intelligent automation platform to provide operators with 100% visibility into user experience and network performance. We’re talking about operations teams being able to resolve issues in minutes rather than days—or preventing them entirely.”

We’d be lying if we said we had any way of verifying those claims, but as the nature of the launch implies, this is all very complicated stuff. We do know that Exfo is up against some pretty stiff competition in the 5G service assurance space, with all its competitors also claiming to take the stress out of 5G for operators. Telecoms CTOs would seem to have their work cut out picking the best one.

UK AI watchdog reckons social media firms should be more transparent

The Centre for Data Ethics and Innovation says there is strong public support for greater regulation of online platforms, but then it would.

It knows this because it got IPSOS Mori to survey a couple of thousand Brits in the middle of last year and ask them how much they trust a bunch of digital organisations to personalise what they deliver and to target advertising in a responsible way. You can see the responses in the table below, which err towards distrust but not by a massive margin. The ‘don’t knows’ probably provide an indication of market penetration.

How much trust, if any, do you have in each of the following organisations to personalise the content users see and to target them with advertising in a responsible way?
Organisation             A great deal   A fair amount   Not very much   No trust at all   Don’t know
Facebook                            7%             24%             30%               32%           8%
YouTube                            10%             38%             26%               16%          10%
Instagram                           6%             22%             24%               24%          23%
TikTok                              4%              8%             15%               28%          45%
Twitter                             6%             22%             25%               25%          23%
Snapchat                            5%             15%             22%               26%          32%
Amazon                             13%             43%             24%               13%           7%
LinkedIn                            7%             25%             18%               20%          30%
BBC iPlayer                        16%             45%             17%               10%          11%
Google search or Maps              13%             44%             23%               13%           7%

It seems that UK punters haven’t generally got a problem with online profiling and consequent ad targeting, but are concerned about the lack of accountability and consumer protection from the significant influence this power confers. 61% of people favoured greater regulatory oversight of online targeting, which again is hardly a landslide and not the most compelling datapoint on which to base public policy.

“Most people do not want targeting stopped, but they do want to know that it is being done safely and responsibly and they want more control.” said Roger Taylor, Chair of the CDEI. “Tech platforms’ ability to decide what information people see puts them in a position of real power. To build public trust over the long-term it is vital for the Government to ensure that the new online harms regulator looks at how platforms recommend content, establishing robust processes to protect vulnerable people.”

Ah, the rallying cry for authoritarians everywhere: ‘think of the vulnerable!’ Among those, it seems, are teenagers, who are notorious for their digital naivety. “We completely agree that there needs to be greater accountability, transparency and control in the online world,” said Dr Bernadka Dubicka, Chair of the Child and Adolescent Faculty at the Royal College of Psychiatrists. “It is fantastic to see the Centre for Data Ethics and Innovation join our call for the regulator to be able to compel social media companies to give independent researchers secure access to their data.”

The CDEI was created last year to keep an eye on AI and technology in general, with a stated aim of investigating potential bias in algorithmic decision making. This is the first thing it has done in that intervening year and it amounts to a generic bureaucratic recommendation it could have made on day one. Still, Rome wasn’t built in a day and it did at least pad that out into a 120-page report.