Exfo uses AI to reassure 5G operators

Testing and service assurance vendor Exfo has launched some new cleverness designed to take the stress out of managing a 5G network.

In case nobody told you, 5G is a lot more complicated than any of the previous Gs, so much so that it’s just too much for mere people to get their heads around. That’s where artificial intelligence comes to the rescue, with its omniscience and ability to learn on the job. Exfo reckoned it was about time its service assurance platform made the most of AI, so it has launched Nova Adaptive Service Assurance (Nova A|SA).

The cleverest bit of it seems to be Nova SensAI (possibly a play on the word ‘Sensei’), which Exfo describes as the platform’s central nervous system. As you may have guessed, it’s all about using AI and machine learning to analyse the many layers of the network and offer a good view of them. Exfo claims it will uncover network issues no other equivalent platform can, possibly even before they’ve happened.
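Exfo hasn’t shared how SensAI actually works under the hood, but the general technique it gestures at, unsupervised anomaly detection over network performance metrics, is well established. Here is a minimal sketch using scikit-learn’s IsolationForest; the KPI names, figures and contamination rate are all invented for illustration and bear no relation to Exfo’s actual implementation.

```python
# Illustrative sketch: unsupervised anomaly detection over network KPIs,
# the broad class of technique AI-driven service assurance platforms claim.
# All metric names and numbers here are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated per-minute KPI samples: [latency_ms, packet_loss_pct, throughput_mbps]
normal = rng.normal(loc=[20.0, 0.1, 900.0], scale=[3.0, 0.05, 50.0], size=(1000, 3))
degraded = rng.normal(loc=[80.0, 2.5, 400.0], scale=[10.0, 0.5, 60.0], size=(10, 3))
samples = np.vstack([normal, degraded])

# Train on recent 'healthy' history; contamination is the assumed bad-sample share
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

flags = model.predict(samples)             # -1 = anomalous, 1 = normal
scores = model.decision_function(samples)  # lower = more anomalous

for i in np.where(flags == -1)[0]:
    print(f"sample {i}: KPIs={np.round(samples[i], 2)}, score={scores[i]:.3f}")
```

In practice the hard part is everything around that loop: correlating flagged KPIs across network layers and trending them early enough to act before subscribers notice, which is precisely what vendors claim and what buyers should test.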

“The combination of more users, more connections, more apps and more convoluted networks has created a perfect storm of complexity for operators,” said Philippe Morin, Exfo CEO. “By delivering only the right data at the right time, Nova A|SA is a unique intelligent automation platform to provide operators with 100% visibility into user experience and network performance. We’re talking about operations teams being able to resolve issues in minutes rather than days—or preventing them entirely.”

We’d be lying if we said we had any way of verifying those claims, but as the nature of the launch implies, this is all very complicated stuff. We do know that Exfo is up against some pretty stiff competition in the 5G service assurance space, with all its competitors also claiming to take the stress out of 5G for operators. Telecoms CTOs would seem to have their work cut out picking the best one.

UK AI watchdog reckons social media firms should be more transparent

The Centre for Data Ethics and Innovation says there is strong public support for greater regulation of online platforms, but then it would.

It knows this because it got Ipsos MORI to survey a couple of thousand Brits in the middle of last year and ask them how much they trust a bunch of digital organisations to personalise what they deliver and to target advertising in a responsible way. You can see the responses in the table below, which err towards distrust but not by a massive margin. The ‘don’t knows’ probably provide an indication of market penetration.

How much trust, if any, do you have in each of the following organisations to personalise the content users see and to target them with advertising in a responsible way?

| Organisation | A great deal of trust | A fair amount of trust | Not very much trust | No trust at all | Don’t know |
|---|---|---|---|---|---|
| Facebook | 7% | 24% | 30% | 32% | 8% |
| YouTube | 10% | 38% | 26% | 16% | 10% |
| Instagram | 6% | 22% | 24% | 24% | 23% |
| TikTok | 4% | 8% | 15% | 28% | 45% |
| Twitter | 6% | 22% | 25% | 25% | 23% |
| Snapchat | 5% | 15% | 22% | 26% | 32% |
| Amazon | 13% | 43% | 24% | 13% | 7% |
| LinkedIn | 7% | 25% | 18% | 20% | 30% |
| BBC iPlayer | 16% | 45% | 17% | 10% | 11% |
| Google search or Maps | 13% | 44% | 23% | 13% | 7% |

It seems that UK punters haven’t generally got a problem with online profiling and consequent ad targeting, but are concerned about the lack of accountability and consumer protection from the significant influence this power confers. 61% of people favoured greater regulatory oversight of online targeting, which again is hardly a landslide and not the most compelling datapoint on which to base public policy.

“Most people do not want targeting stopped, but they do want to know that it is being done safely and responsibly and they want more control,” said Roger Taylor, Chair of the CDEI. “Tech platforms’ ability to decide what information people see puts them in a position of real power. To build public trust over the long-term it is vital for the Government to ensure that the new online harms regulator looks at how platforms recommend content, establishing robust processes to protect vulnerable people.”

Ah, the rallying cry for authoritarians everywhere: ‘think of the vulnerable!’ Among those, it seems, are teenagers, who are notorious for their digital naivety. “We completely agree that there needs to be greater accountability, transparency and control in the online world,” said Dr Bernadka Dubicka, Chair of the Child and Adolescent Faculty at the Royal College of Psychiatrists. “It is fantastic to see the Centre for Data Ethics and Innovation join our call for the regulator to be able to compel social media companies to give independent researchers secure access to their data.”

The CDEI was created last year to keep an eye on AI and technology in general, with a stated aim of investigating potential bias in algorithmic decision making. This is the first thing it has done in the intervening year, and it amounts to a generic bureaucratic recommendation it could have made on day one. Still, Rome wasn’t built in a day and it did at least pad that out into a 120-page report.

Nokia raises its OSS game

In the build-up to MWC 2020 Nokia has got one of its announcements in early, in the form of the ‘cloud-native’ Network Operations Master (NOM) software.

Turns out 5G is pretty complicated and at times there’s so much going on that you can’t possibly expect flawed, obsolete humans to stay on top of it. That’s why you need greater automation, we’re told, and that has to start with the network operations software, or OSS in old money. Nokia prides itself on its software, so the launch of a new OSS suite is presumably a fairly big deal for it.

“With 5G forcing traditional functions, like revenue management and customer care, to the cloud and helping drive software deeper into the network, communication service providers need a modern approach to performing network operations that is automated, more efficient and scalable,” said Ron Haberman, CTO at Nokia Software. “The Nokia Network Operations Master delivers these capabilities and allows our customers to perform lifecycle operations with ease, efficiency, and confidence.”

Network slicing will make automation and a much higher level of cloudy flexibility critical features of any network software. NOM also covers AI, machine learning and the like, and is designed to just take care of all the plumbing, allowing network operations centres to focus on the stuff only people can manage, if such a thing still exists.

“5G networks will require significantly more operations automation than past networks in order to achieve promised levels of efficiency and new service support,” Nokia got Dana Cooperson, Research Director at Analysys Mason, to say. “Nokia’s Network Operations Master is a cloud-native network management system that is underpinned by machine learning and automated actions and provides the types of tools mobile network operations teams need now for 5G.”


London Police push forward with controversial facial recognition tech

The London Metropolitan Police Service has announced it will begin the operational use of Live Facial Recognition (LFR) technology, despite there still being many critics and concerns.

The technology itself has come under criticism not only for its poor performance in identifying individuals, but also because critics argue it violates the privacy rights afforded to individuals in democratic societies. Despite the ongoing controversy, the London police force seems to think it has all the bases covered.

“This is an important development for the Met and one which is vital in assisting us in bearing down on violence,” said Assistant Commissioner Nick Ephgrave. “As a modern police force, I believe that we have a duty to use new technologies to keep people safe in London.

“We are using a tried-and-tested technology and have taken a considered and transparent approach in order to arrive at this point. Similar technology is already widely used across the UK, in the private sector. Ours has been trialled by our technology teams for use in an operational policing environment.”

The initiative will start in various London locations where the Met believes it will help locate the most serious offenders. The primary focus will be on knife and violent crime. It is unclear whether these deployments will be permanent fixtures at a given location, or whether the officers will be free to move around to other parts of the city.

As individuals pass the relevant cameras, facial maps will be compared to ‘watchlists’ created for specific areas. Should a match be confirmed, the officer will be prompted (not ordered) to approach the individual.
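For the technically minded, the core matching step in systems like this generally boils down to comparing a face embedding (a numerical ‘facial map’) against each entry on a watchlist and alerting when the similarity clears a tuned threshold. The sketch below illustrates that idea only; the names, vectors and threshold are invented, and this is not a description of the Met’s actual system.

```python
# Generic illustration of watchlist matching via face embeddings.
# Real systems derive embeddings with a deep network from camera frames;
# here the embeddings, watchlist names and threshold are all invented.
from typing import Optional

import numpy as np

rng = np.random.default_rng(7)

# Watchlist: identity -> reference face embedding (hypothetical 128-dim vectors)
watchlist = {name: rng.normal(size=128) for name in ("suspect_a", "suspect_b")}

MATCH_THRESHOLD = 0.6  # tuning trades false alerts against missed matches


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_frame(face_embedding: np.ndarray) -> Optional[str]:
    """Return the best watchlist match above the threshold, else None."""
    best_name, best_score = None, MATCH_THRESHOLD
    for name, ref in watchlist.items():
        score = cosine_similarity(face_embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # a hit prompts, rather than orders, an officer to engage


# A frame resembling suspect_a (their embedding plus camera noise) should alert
print(check_frame(watchlist["suspect_a"] + rng.normal(scale=0.1, size=128)))
```

Much of the policy argument hinges on that threshold: set it low and innocent passers-by get stopped; set it high and the system misses the people it was deployed to find.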

What Ephgrave seems to be conveniently leaving out of the above statements is that the private use of facial recognition technology is either (a) still largely at the trial stage, or (b) also highly controversial.

In August, privacy advocacy group Big Brother Watch unveiled a report which suggested shopping centres, casinos and even publicly owned museums had implemented the technology without public consultation and had even been sharing data with local police forces without consent. This is a worrying disregard for the vitally important privacy principles of the UK.

At the European level, the European Commission has been considering new rules which would extend consumer rights to cover facial recognition technologies. And in the US, lawsuits have been filed against its implementation in Illinois, while the City of San Francisco has effectively banned the technology except in the most serious of circumstances.

The London Metropolitan Police Service has said it will delete images which are not matched to individuals on record, though considering police databases hold more than 20 million records, this leaves plenty of wiggle room. If an arrest is made, the data will be kept for 31 days. Although this is a concession by the Met, human rights organisations and privacy advocacy groups have continued to argue such technologies are an intrusion, over-stepping the privileges afforded to the police and eroding the concept of privacy.

Interestingly enough, the same underlying issues persist in London; the police force has pushed forward with the introduction of the technology without a comprehensive public consultation. While there is good which can be taken from this technology, there are also grave risks of abuse unless it is managed very effectively; the general public should be afforded the opportunity to contribute to the debate.

This does seem to be a case of the boiling frog. The premise of that fable is that if a frog is put suddenly into boiling water it will jump out, but if it is put in tepid water which is then brought slowly to the boil, it will not perceive the danger and will be cooked to death. The same could be said of facial recognition technology.

Eight trials were conducted by the Met between 2016 and 2018, some with disastrously poor results, though few were widely reported on. In September, the UK High Court ruled facial recognition technologies could be implemented for ‘appropriate and non-arbitrary’ cases. As this is quite a nuanced and subjective standard, the creeping expansion of its use by the authorities will need to be guarded against.

Ultimately this does seem like a very brash decision, albeit one authorised by the political powers of the UK. This is not to say facial recognition will not benefit society, or have a positive impact on security, but there is an impact on privacy and a risk of abuse. When there are pros and cons to a decision, it should be opened up to public debate; we should be able to choose whether to sacrifice privacy in the pursuit of security.

The general public should be allowed to have their voice heard before such impactful decisions are made, but it seems the London Metropolitan Police Force does not agree with this statement.

Huawei boss plays Pollyanna in Davos

In a live discussion at the World Economic Forum, Huawei CEO Ren Zhengfei insisted that his company is barely affected by US sanctions.

The broader theme of the discussion, which also featured historian Yuval Noah Harari and was moderated by journalist Zanny Minton Beddoes, was ‘a future shaped by a technology arms race’. Harari is known for his concerns about the direction humanity is headed in, especially when it comes to emerging technologies. While he seems to consider a Terminator-like Armageddon at the hands of autonomous weapons systems possible, he’s more immediately concerned with the prospect of human beings being hacked by big data, AI and biometrics, such that we become increasingly manipulated by shady, distant forces.

Ren clearly decided to position himself in opposition to this bleak outlook, as you would expect of someone who makes his living from the tech business, but he did so with such relentlessly blind optimism that most of his comments came across as Pollyannaish at best. He insisted technological developments always benefit mankind, even contrasting nuclear weapons with nuclear energy and radiotherapy to imply a net positive, especially since nobody has been nuked for a while.

Even when asked if the US is right to feel it’s in a technological arms race with China, he dodged the question by saying China doesn’t have the human and technological capability to dominate in AI. Having said that, he did note the US is uncomfortable when it’s not number one in any field. He also insisted that Harari’s fears are overblown.

When Ren was asked if he thinks the world is headed towards splitting into two distinct technological ecosystems, he said the following: “Huawei used to be an admirer of the US. Huawei is successful today largely because it learned from the US for most of our management system. Since day one we hired dozens of American consulting firms, teaching us how to manage our business operations… The US should feel proud of it… From that perspective I think the US should not be concerned about Huawei and its position in the world.

“Regarding the entity list, Huawei was added to the list last year and it didn’t hurt us much. We basically withstood the challenges. This year the US might further escalate their campaign against Huawei, but I feel the impact on Huawei’s business would not be very significant. But whether the world will be split into two systems, I don’t think so, because science is about truth and there’s only one truth. Any scientist who discovers the truth would make it known to all the people around the world. At a very deep lying layer the whole world is united, it’s all linked.”

The bit about how the US should feel proud of Huawei because of how much it influenced it was some quality trolling, but on the whole it’s not obvious what Ren hoped to achieve from this performance. Everybody knows R&D is a key component of geopolitical competition and that technological advances are both jealously guarded and often used to military advantage. Ren doesn’t do himself or his company any favours by treating his audience as idiots.


LG drives towards connected car market with Cerence tie-up

LG has signed a memorandum of understanding (MOU) with Cerence to make a play for the emerging connected car market.

The partnership with Cerence, which recently spun off from Nuance Communications, will integrate LG’s webOS Auto In-Vehicle Infotainment (IVI) system with Cerence ARK (AI Reference Kit) to create a new voice assistant for the connected car market.

“We look forward to this collaboration with Cerence to develop a turnkey voice solution for today’s auto and component makers to accelerate the arrival of the connected car,” said I.P. Park, CTO of LG Electronics. “We will continue to evolve webOS Auto by offering a wider range of AI-powered experiences for both manufacturers and auto customers.”

“We are honoured and excited to partner with LG Electronics on a solution that harnesses the collective power and promise of webOS Auto and Cerence ARK,” said Sanjay Dhawan, CEO of Cerence. “This new offering will support automakers and tier-one suppliers as they rapidly innovate, speed the time to market, and deliver a state-of-the-art in-car experience unlike any other.”

Although still in its early days, the connected car market is accelerating very quickly. LG might be a bit late to the party here and will have to scrap with some big names from Silicon Valley, the telcos and the OEMs themselves.

Looking at the internet segment, Google has been making promising steps forward with its Android Auto in-car platform, while Amazon has Echo Auto and Apple has CarPlay to steal some share of the connected car segment. The likes of Huawei and Ericsson are also trying to wrestle for attention in the space.

Albeit distant competitors, some telcos have also shown ambitions to play a greater role in the connected car segment. While it does look like the telcos are destined to be the commoditised connectivity partner, the fortunes of this industry are far from settled.

Finally, you have to consider the car manufacturers themselves. The likes of BMW, Seat and Ford want to create a lasting relationship with customers to drive towards a more sustainable industry in the future. Simply selling and maintaining cars might not be enough, but owning the in-car experience is one way to create value and new potential revenue streams.

The winners and losers of the connected car segment are far from settled, but this is quickly becoming an incredibly competitive environment.

White House outlines 10 principles for AI development

Despite the Trump administration stating it would never meddle in artificial intelligence, the White House has outlined 10 commandments for agencies to follow when creating rules and regulations.

President Trump has previously promised the White House would not implement a national AI strategy to dictate how the technology is implemented. This was contradicted by the creation of the American AI Initiative, announced in August, and now the U-turn is complete with the emergence of this draft document.

The presence of such guidelines is not necessarily a bad thing for industry or the US Government; it simply depends on the attitudes of the agencies. Some technophobes could use the principles to erect such high barriers to entry that the whole exercise becomes redundant, while on the other side of the coin, they could accelerate the introduction of AI in public services.

“The deployment of AI holds the promise to improve safety, fairness, welfare, transparency, and other social goals, and America’s maintenance of its status as a global leader in AI development is vital to preserving our economic and national security,” the document states.

“The importance of developing and deploying AI requires a regulatory approach that fosters innovation, growth, and engenders trust, while protecting core American values, through both regulatory and nonregulatory actions and reducing unnecessary barriers to the development and deployment of AI.”

The objective for the White House Office of Science and Technology Policy (OSTP) is to ensure engagement with and education of the general public, prevent overreach or overregulation and promote AI which is safe and of benefit to all.

Ultimately, the White House is attempting to guide the agencies towards creating a framework so that some element of control is established. As with many of these memorandums, the wording is precise enough to keep the various agencies in line, but there is enough wiggle room for the nuances of different industries.

The ten principles are as follows:

  1. Public trust in AI
  2. Public participation
  3. Scientific integrity and information quality
  4. Risk assessment and management
  5. Benefits and costs
  6. Flexibility
  7. Fairness and non-discrimination
  8. Disclosure and transparency
  9. Safety and security
  10. Interagency coordination

Although these principles are perfectly sensible for the pursuit of AI which benefits business and society, it is another example of the world becoming increasingly regionalised.

At CES, LG discussed the standardisation framework which it has in place for the development of AI within its own ecosystem, while numerous other players have either launched their own approaches or backed another. Governments and bureaucracies are fuelling their own programmes as another layer, paving the way for fragmentation.

Although this sounds negative, it is encouraging to see governments engage industry during the early years of development. It does appear lessons have been learned.

Traditionally, governments and regulators stay at arm’s length from an embryonic technology. The industry is often given the freedom of self-regulation to accelerate development, though this often results in government intervention down the line to limit the negative impact of industry’s flamboyance. You only have to look at recent privacy scandals for evidence of what can happen when the government gives too much slack on the leash.

An increase in bureaucracy might well slow the introduction of AI in the public sector slightly, but it is also much more likely to create a segment which is sustainable, beneficial, healthy and transparent.

LG joins the virtue-signalling crowd with AI standardisation plug

LG is the latest technology company to humbly join the ranks of technology disciples preaching standardisation and, of course, its idea is better than everyone else’s.

The Consumer Electronics Show (CES) is in full swing in Las Vegas, and in the midst of a swathe of technology announcements, LG found some spare time to lecture the room on the importance of a standardised approach to artificial intelligence. That is, of course, before being joined on stage by a partner to talk about how it has developed its own framework, adding to the growing wave of fragmentation.

The technology industry is one which elects to stretch the definition of certain words and phrases to such a degree that many will wonder whether dictionaries are now regarded as ancient artefacts to be revered but never consulted. In the ‘C’ section, LG President and CTO I.P. Park might find the word ‘contradiction’; reading the definition might offer some insight.

Looking around the world, the European Commission has put together a group to create an ethics framework to guide the development of AI, Facebook has backed a German initiative called the ‘Institute for Ethics in Artificial Intelligence’, the UK Government has formed its own AI Council, the US has launched the ‘American AI Initiative’, Google created ‘DeepMind Ethics & Society’ and there are countless others.

Each of these parties is aiming to develop a standardised approach to the development of AI, weighing up the commercial ambitions of industry alongside privacy issues, the risk of bias and the preservation of fairness in a currently lop-sided digital economy. Each is attempting to ‘own’ the space and dictate the conditions of the playing field for the benefit of its own interests.

This is where self-righteous executives preaching the benefits of standardisation have to be taken with a pinch of salt. Every additional framework simply heightens the risk of fragmentation. In this case, LG is pursuing its own agenda, implementing a framework to achieve its own aims under the guise of enhancing co-operation and standardisation.

These statements reflect badly on LG, but everyone in the industry is doing exactly the same. The European Commission, the White House, Downing Street, Google, Facebook or whoever. These standardisation frameworks are all slightly different because they serve the aims of the puppet masters.

From LG’s perspective, AI is the future. This is a company whose heritage is in consumer electronics but which is positioning itself to capitalise on the growing interest in ‘intelligence’ and embedded connectivity in everything and anything. LG’s robot vacuum cleaner will not only recognise patterns, but also collect data to learn from previous mistakes, such as getting stuck in gaps and corners.

This of course is not a new idea. Embedded ‘intelligence’ and the ability for products to learn and adapt have been discussed at length for years. LG is perhaps behind the trend, though as the industry is yet to achieve mass market adoption, there is still time for it to catch up. However, whenever someone talks about standardisation, be wary.

There is a reason this party is not joining an existing group: an existing group probably would not serve its own ambitions most effectively. Instead, we are likely to see the creation of more groups, alliances, councils, think-tanks and boards. Standardisation is the aim, but fragmentation is looking much more likely.

Xiaomi makes big noises with $7bn 5G, AI and IoT plan

In an open letter from its CEO, Xiaomi has promised to increase its R&D investments in 5G, AI and IoT to $7.18 billion.

In years gone by, Xiaomi was a backwater Chinese brand which hoovered up the scraps of mid- and low-tier smartphone shipments. But such is the momentum the Chinese technology industry is generating that Xiaomi is now a major force across the world, and this investment is further evidence of that success.

“2019 was a significant year for our global expansion, our overseas revenue now accounts for almost half of our total group revenue,” CEO Lei Jun said in the letter.

“Xiaomi is now a truly global technology leader. Our internet business also became more diversified and our AIoT business retained its global leadership. Xiaomi is now widely known as a ‘true AIoT leader’ in the industry.”

The Xiaomi strategy has been focused acutely on the convergence of 5G, AI and IoT. Each component means something important to somebody on its own, but with Xiaomi’s broad portfolio of consumer products, the company is in an interesting position. From smartphones to home appliances, security products and scooters, if Xiaomi can nail the ‘AIoT’ proposition it can enter an entirely new world, moving into the ‘software and services’ segments.

For many, AI and IoT are two technologies which work hand-in-hand. They can of course work separately, but the greatest value is achieved together. The consumer world is one Xiaomi can slip into naturally, but the emerging segment of Industry 4.0 is also open to the ambitious Chinese OEM.

It is worth noting that this is not a new investment but the supercharging of an existing one. Xiaomi had already committed $1.43 billion over the next five years, though this has now been aggressively pushed up to $7.18 billion over the same period. Throwing cash at an opportunity is no guarantee of success, but it certainly does shift the odds.

Happy New Year, Europe! And let’s learn AI

Finland is offering a free AI primer to all EU countries, aiming to provide basic AI literacy to 1% of the Union’s total population by 2021.

As its EU Presidency comes to an end, Finland is offering the 513 million people living in the European Union a parting / Christmas / New Year gift: the Nordic country is making “Elements of AI”, an online introductory course for non-professionals, freely accessible to everyone in the Union’s 28 member states. The target is for 1% of the total population, or about 5 million people, to take the course by 2021.

The course was jointly developed by the University of Helsinki and the technology consultancy Reaktor. It started as a private initiative by its creators, offered for free to anyone interested, and was soon integrated into the country’s “national AI strategy”. The initial target of training 1% of the Finnish population was achieved just over six months after the programme started.

Created in English, the course content was later made available in Finnish, which helped accelerate uptake. It was subsequently also made available in Swedish (Finland’s second official language) and Estonian (the native language of Finland’s largest immigrant community from the EU). So far more than 220,000 people in over 110 countries have taken the course. “Over 40% of our course takers are women (more than doubling the global computer science average) and over 25% are over the age of 45,” according to Reaktor.

“As our Presidency ends, we want to offer something concrete. It’s about one of the most pressing challenges facing Europe and Finland today: how to develop our digital literacy,” said Timo Harakka, Minister of Employment. As announced in Brussels recently, the content creators, working with EU partners, are going to translate the course into the EU’s other 20 official languages. The budget of the project, estimated to run up to EUR 1.7 million ($2 million), will be paid for by the Ministry of Economic Affairs and Employment.

“Our investment has three goals: we want to equip EU citizens with digital skills for the future; we wish to increase practical understanding of what artificial intelligence is; and by doing so, we want to give a boost to the digital leadership of Europe,” Harakka said. “The significance of AI is growing. To make use of it, we need digital skills. Changing labour markets, the transformation of work, digitalisation and intensifying global competition all mean one thing for the EU: we must invest in people. Every EU citizen should have the opportunity to pursue continuous lifelong learning, regardless of age and educational background.”

Teemu Roos, Associate Professor in Computer Science at the University of Helsinki, said: “Our University has a policy of making its research and expertise benefit society at large. As research into artificial intelligence is highly advanced in Finland, it came naturally to us to make AI teaching more widely accessible.” Roos came up with the idea of teaching AI to everyone in 2017.

Reaktor shares this view. Megan Schaible, COO of Reaktor Education, wrote in a post that new technologies like AI “can feel like an insider’s club that has left the majority of us behind.” The AI-for-all initiative was therefore developed to “prove that AI should not be left in the hands of only a few elite coders.”

Other EU leaders may see technologies differently. Ina Schieferdecker, a junior minister in Germany’s Federal Ministry of Education and Research who has a PhD in computer science, recently expressed the more elitist view that Europeans do not need to understand AI to trust it.

Among the University of Helsinki’s alumni is Linus Torvalds, creator of the Linux operating system.