Silicon Valley doesn’t know where to look in the 2020 Presidential race

Silicon Valley has traditionally supported Democratic presidential candidates but, with the resident internet giants increasingly becoming a political punching bag, this might change very quickly.

More specifically, Silicon Valley tends to lean towards ‘progressive’ Democrats. Many of the candidates who would want to be included in that group have been running events in California recently to woo voters and potential donors alike, but these are candidates who have not been friendly to the internet giants in recent months.

Some of those who would call themselves ‘progressive’ Democrats include California Senator Kamala Harris, Massachusetts Senator Elizabeth Warren and New Jersey Senator Cory Booker, all of whom have made moves against the technology giants for varying reasons. Harris and Booker have sponsored or supported bills which would place greater scrutiny on acquisitions, while Warren made the outlandish promise to break up big tech and reverse certain acquisitions.

While Warren’s promise might end up meaning very little (we suspect there is too much focus on popularity instead of practicality), she has been the focal point of some criticism. Texas Representative Beto O’Rourke, another confirmed candidate, poked fun at Warren’s approach, instead suggesting the digital economy should be more tightly regulated, avoiding the difficulties of breaking up incredibly complex, private organizations.

The prospect of new regulations is certainly a better option for the internet giants than Warren’s alternative; however, O’Rourke is a bit of a difficult horse to back right now. His website offers little (in fact, zero) insight into potential policies, although if you want to buy a t-shirt it is the place to go.

Of course, regulatory reform is top of the agenda for many of the potential candidates, and the technology industry is a hot topic here as well. Let’s start with the positives.

The majority of the candidates on show were supporters of net neutrality, battling against FCC Chairman Ajit Pai’s mission to undo the protections. Of the potential candidates, Washington Governor Jay Inslee might steal the crown here.

California might have grabbed the headlines for introducing localised net neutrality rules, potentially paving the way for a constitutional crisis, but it was Inslee who was the first to put pen to paper. Washington’s localised net neutrality rules were introduced in March 2018, six months ahead of California’s.

More positive news concerns the Lifeline Program, an initiative which helps poorer families access broadband options. This is another area which felt the fury of Pai’s administration, though several of the candidates opposed the cutting of funds. Warren, Vermont Senator Bernie Sanders and New York Senator Kirsten Gillibrand are three candidates who would support the Lifeline Program.

Former Maryland Congressman John Delaney is another who wants to shake up the infrastructure game. Sticking with the rural digital divide, Delaney is proposing the formation of an Infrastructure Bank, with funds of $50 billion, to help close the virtual chasm. This might sound attractive, but Delaney shares the same anti-China rhetoric as President Donald Trump. And that has been working out really well.

Should one of these individuals win the keys to the White House, the FCC could be in line for yet another shake-up.

Now onto the negative side of regulatory reform. The privacy and data-handling activities of the internet giants have come under a lot of scrutiny and criticism over the last few months. This is unlikely to change, and perhaps will become a lot more aggressive as politicians search for PR points. This is a popularity contest after all.

Almost every candidate is calling for more regulatory reform, pulling down the curtain which hides the data machine fuelling the sharing economy. No-one who is involved in the data sharing economy, internet giants and telcos alike, wants too many of these practices exposed, as it would lead to public backlash. The industry has allowed the education of the general public to fall too far behind technological developments; any bold revelations will be scary.

Two candidates are setting themselves apart from the pack with bold regulatory proposals: Minnesota Senator Amy Klobuchar and tech entrepreneur Andrew Yang.

Klobuchar’s idea is to introduce a digital dividend on participants in the sharing economy. A levy would be placed on any company which transfers personal data to a third party, penalising those who monetize data. Those who collect data and use it internally, for current or new product development for example, would not be included in the tax.

Yang, on the other hand, is perhaps proposing the most revolutionary idea: Universal Basic Income (UBI). Effectively, every person over the age of 18 in the US would be entitled to apply to receive $1,000 per month. Yang claims one in three jobs is at risk from automation and AI, and the money would help people compensate for this.

The UBI would be funded by consolidating all welfare payments for efficiencies, a new value added tax (VAT), new revenues from increased consumer disposable income and improvements to other areas such as healthcare. However, we suspect this would not cover the outgoings, so it would not be unfair to assume a tax would be placed on those companies benefiting from automation.
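
To put the scale of that suspicion into context, here is a rough back-of-envelope calculation. The roughly 250 million adult population figure is our own assumption for illustration, not a number from the article or the Yang campaign.

```python
# Back-of-envelope only: the adult population figure is an assumption for
# illustration, not a campaign number.
adults = 250_000_000        # approximate number of US residents aged 18+
monthly_payment = 1_000     # USD per adult per month under the proposal

gross_annual_cost = adults * monthly_payment * 12
print(f"Gross annual cost: ${gross_annual_cost / 1e12:.1f} trillion")
# -> Gross annual cost: $3.0 trillion
```

Against a gross bill in the region of $3 trillion a year, it is easy to see why the listed funding sources look optimistic.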

Another development mid-way through last year was an attack on the state sales tax regime which the eCommerce giants have enjoyed for so long. These rules would effectively end the tax avoidance benefits so many national players have enjoyed by locating headquarters in states like Delaware. Gillibrand, Sanders, Warren and Klobuchar were among the Senators who voted in favour of the state-led digital sales tax.

What is worth noting is that policies are still in their early days, and the genuine lobbying from industry will not have started yet. Who knows what the headline policies will be in the run-up to the 2020 Presidential Election, but the Democrats aren’t looking as Silicon Valley-friendly as in previous years.

Nick Clegg defends Facebook’s business model from EU’s privacy regulation

Facebook’s head of PR reportedly had a series of meetings with EU and UK officials aiming to safeguard the social network’s business model, which relies heavily on targeted advertising.

Sir Nick Clegg, the former UK Deputy Prime Minister, now Facebook’s VP for Global Affairs and Communications, met three EU commissioners during the World Economic Forum in Davos and shortly after the event in Brussels, according to a report by the Telegraph. These commissioners’ portfolios include Digital Single Market (Andrus Ansip), Justice, Consumers and Gender Equality (Věra Jourová), and Research, Science and Innovation (Carlos Moedas). Clegg’s mission, according to the Telegraph report, was to present Facebook’s case to defend its ads-based business model in the face of new EU legislation related to consumer privacy.

According to minutes of the Ansip meeting, seen by the Telegraph, “Nick Clegg stated as main Facebook’s concern the fact that the said rules are considered to call into question the Facebook business model, which should not be ‘outlawed’ (e.g. Facebook would like to measure the effectiveness of its ads, which requires data processing). He stated that the General Data Protection Regulation is more flexible (by providing more grounds for processing).”

In response, Ansip defended the proposed ePrivacy Regulation as a complement to GDPR that is primarily about protecting the confidentiality of consumers’ communications. In addition, the ePrivacy Regulation will be more up to date and will provide more clarity and certainty than the current ePrivacy Directive, which originated in 2002 and was last updated in 2009. Member states could interpret and implement the current Directive more restrictively, Ansip warned.

Facebook’s current security setup makes it possible for the company to access users’ communications and to target them with advertisements based on those communications. Under the proposed Regulation, platforms like Facebook would need to get explicit consent from account holders to access the content of their communications, whether for serving advertisements or measuring their effectiveness.

There are two issues with Facebook’s case. The first is that, as Ansip put it, companies like Facebook would still be able to monetise data after obtaining the consent of users. They just need to do it in a way more respectful of users’ privacy, which 92% of EU consumers think is important, according to the findings of Eurobarometer, a bi-annual EU-wide survey.

The second is Facebook’s own strategy, announced by Zuckerberg recently. The new plan, with its WhatsApp-like end-to-end encryption, will make it impossible for Facebook to read users’ private communications. This means that, even if consumers are asked and do grant consent, Facebook will not be able to access the content for targeted advertising in the future. Zuckerberg repeatedly talked about trade-offs in his message. This would be one of them.

On the other hand, last November the EU member states’ telecom ministers agreed to delay the vote on the ePrivacy Regulation, which makes it highly unlikely that the bill will be passed and come into effect before the next European Parliament election in May.

The office of Jeremy Wright, the UK’s Secretary of State for Digital, Culture, Media and Sport, did not release much detail related to the meeting with Clegg, other than claiming “We are at a crucial stage in the formulation of our internet safety strategy and as a result we are engaging with many stakeholders to discuss issues pertinent to the policy. This includes discussions with social media companies such as Facebook. It is in these crucial times that ministers, officials and external parties need space in which to develop their thinking and explore different options in a free and frank manner.”

The Telegraph believed Clegg’s objective was to minimise Facebook’s exposure to risks from the impending government proposals that could “place social media firms under a statutory duty of care, which could see them fined or prosecuted” if they fail to protect users, especially children, from online harms.

It is also highly conceivable that the meeting with the UK officials was intended to influence the post-Brexit regulatory setup in the country, when it will no longer be governed by EU laws. Facebook may want to have its voice heard before the UK starts to make its own privacy and online regulations.

Zuckerberg’s vision for Facebook: as privacy-focused as WhatsApp

The Facebook founder laid out his plan for how Facebook will evolve with a focus on privacy and data security, and promised more openness and transparency during the transition.

In a long post published on Facebook, Mark Zuckerberg first recognised that, going forward, users may prefer private communication over socialising publicly. He used the analogy of town squares vs. living rooms. To facilitate this, he aims to use the technologies of WhatsApp as the foundation on which to build the Facebook ecosystem.

Zuckerberg laid out principles for the next steps, including:

  • Private interactions: this is largely about users’ control over who they communicate with, safeguarded by measures like group size control and limits on public stories being shared;
  • End-to-end encryption: this is about encrypting messages going through Facebook’s platforms. An interesting point here is that Zuckerberg admitted Facebook’s security systems can read the content of users’ messages sent over Messenger. WhatsApp already implements end-to-end encryption and does not store encryption keys, which makes it impossible for it to share the content of communication between individuals with any third parties, including the authorities (see the illustrative sketch after this list). Zuckerberg recalled the case of Facebook’s VP for Latin America being jailed in Brazil to illustrate his point;
  • Reducing permanence: this is mainly about giving users the choice to decide how long they would like their content (messages, photos, videos, etc.) to be stored, ensuring what they said many years ago does not come back to haunt them;
  • Safety: Facebook will keep data safe against malicious attacks;
  • Interoperability: Facebook aims to make its platforms interoperable with each other and may extend that interoperability to SMS too;
  • Secure data storage: one of the most important points here is that Zuckerberg vowed not to store user data in countries which “have a track record of violating human rights like privacy or freedom of expression”.
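
As a minimal illustration of why end-to-end encryption takes the platform out of the loop, the sketch below uses the PyNaCl library: only the two endpoints ever hold private keys, so a server that merely relays the ciphertext cannot read it. This is a generic demonstration of the technique, not Facebook’s or WhatsApp’s actual implementation.

```python
# Generic end-to-end encryption sketch using PyNaCl (pip install pynacl);
# not Facebook's or WhatsApp's actual code.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # generated and kept on Alice's device
bob_key = PrivateKey.generate()     # generated and kept on Bob's device

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at 8?")

# A relay server only ever sees `ciphertext` and holds no private key,
# so it cannot decrypt the message. Only Bob can:
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at 8?"
```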

To do all these right, Zuckerberg promised, Facebook is committed to “consulting with experts, advocates, industry partners, and governments — including law enforcement and regulators”.

None of these principles is new or surprising, and they are an understandable reaction to recent history, in which Facebook has been battered by scandals over both data leaks and the misuse of private data for monetisation purposes. However, there are a couple of questions that remain unanswered:

  1. What changes does Facebook need to make to its business model? In other words, when Facebook limits its own ability to mine user data, it weakens its value to targeted advertisers. How will it convince investors this is the right step to take, and how will it compensate for the loss?
  2. Is Facebook finally giving up its plan to re-enter markets like China? Zuckerberg has huffed and puffed over recent years without bringing down the Great Wall. While his peers at Apple have happily handed over the keys to iCloud and Google has been working hard, secretly or not so secretly, to re-enter China, how will the capital market react to Facebook’s public statement that “there’s an important difference between providing a service in a country and storing people’s data there”?

Reports of Google China’s death are greatly exaggerated

Google engineers have found that the search giant has continued with its work on the controversial search engine customised for China.

It looks like our conclusion that Google has “terminated” its China project may have been premature. After the management bowed to pressure from both inside and outside the company to stop the customised search engine for China, codenamed “Dragonfly”, some engineers have told The Intercept that they have seen new code being added to products meant for this project.

Although the engineers on Dragonfly were promised reassignment to other tasks, and many of them have been, Google engineers said they noticed around 100 engineers are still assigned to the cost centre created for the Dragonfly project. Moreover, about 500 changes were made to the code repositories in December, and over 400 changes between January and February of this year. The code has been developed for the mobile search apps that would be launched for Android and iOS users in China.

There is the possibility that these may be residuals from the suspended project. One source told The Intercept that the code changes could possibly be attributed to employees who have continued this year to wrap up aspects of the work they were doing to develop the Chinese search platform. But it is also worth noting that the Google leadership never formally sounded the death knell for Dragonfly.

The project, which first surfaced last November, angered quite a few Google employees, who voiced their concerns to the management. It was also a focal point of Sundar Pichai’s Congressional testimony in December. At that time, multiple members of Congress questioned Pichai on this point, including Sheila Jackson Lee (D-TX), Tom Marino (R-PA), David Cicilline (D-RI), Andy Biggs (R-AZ), and Keith Rothfus (R-PA), according to the transcript. Pichai’s answers were carefully worded: he repeatedly stated that “right now there are no plans for us to launch a search product in China”. When challenged by Marino on the company’s future plans for China, Pichai dodged the question by saying “I’m happy to consult back and be transparent should we plan something there.”

On learning that Google has not entirely killed off Dragonfly, Anna Bacciarelli of Amnesty International told The Intercept, “it’s not only failing on its human rights responsibilities but ignoring the hundreds of Google employees, more than 70 human rights organizations, and hundreds of thousands of campaign supporters around the world who have all called on the company to respect human rights and drop Dragonfly.”

While Sergey Brin, who was behind Google’s decision to pull out of China in 2010, was ready to stand up to censorship and dictatorship, which he had known only too well from his childhood in the former Soviet Union, Pichai has adopted a more mercantile approach towards questionable markets since he took over the helm at Google in 2015. In a more recent case, Google (and Apple) refused to take down the app Absher from their app stores in Saudi Arabia, with Google claiming that the app does not violate its policies. The app allows men to control where women travel and offers alerts if and when they leave the country.

This has clearly irritated lawmakers. Fourteen House members wrote to Tim Cook and Sundar Pichai: “Twenty first century innovations should not perpetuate sixteenth century tyranny. Keeping this application in your stores allows your companies and your American employees to be accomplices in the oppression of Saudi Arabian women and migrant workers.”

UK police are using AI to make precrime a reality

UK local councils and police forces are using personal data they own and algorithms they bought to pre-empt crimes against children, but there are many things that could go wrong with such a system.

New research by Cardiff University and Sky News shows that at least 53 UK local councils and 45 of the country’s police forces are relying heavily on computer algorithms to assess the risk of crimes being committed against children, as well as to identify people cheating on benefits. It has raised many eyebrows over both the method’s ethical implications and its effectiveness, with references to Philip K Dick’s concept of precrime inevitable.

The algorithms the authorities sourced from IT companies use the personal data in their possession to train AI systems to predict how likely a child in a certain social environment is to be subjected to crime, giving each child a score between 1 and 100, then classifying the risk level for each child as high, medium, or low. The results are then used to flag cases to social workers for intervention before crimes are committed (a toy sketch of this kind of scoring follows below). This does not read too dissimilarly to the famous Social Credit system that China is building on a national scale, though without the benefits of faster housing loans or good schools for kids as a reward for good behaviour.
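
To make the mechanics concrete, here is a purely illustrative, toy version of the score-to-band mapping described above. The thresholds are invented; neither the councils nor their suppliers have disclosed how the deployed systems actually work.

```python
# Toy illustration only: thresholds and scores are invented, not taken from
# any real council or police system.
def risk_band(score: int) -> str:
    """Map a 1-100 risk score to the high / medium / low bands."""
    if not 1 <= score <= 100:
        raise ValueError("score must be between 1 and 100")
    if score >= 70:
        return "high"
    if score >= 40:
        return "medium"
    return "low"

# Cases landing in the "high" band would be flagged to social workers.
cases = {"case_a": 82, "case_b": 35, "case_c": 55}
flagged = [name for name, score in cases.items() if risk_band(score) == "high"]
print(flagged)  # ['case_a']
```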

The Guardian reported last year that data from more than 377,000 people was used to train the algorithms for similar purposes. This may have been a big underestimate of the scope. The research from Cardiff University disclosed that in Bristol alone, data from 54,000 families, covering benefits, school attendance, crime, homelessness, teenage pregnancy, and mental health, is being used in the computer tools to predict which children are more susceptible to domestic violence, sexual abuse, or going missing.

On the benefit assessment side, the IT system supporting the Universal Credit scheme has failed to win much praise. A few days ago, computer-generated warning letters were sent out to many residents in certain boroughs, warning them their benefits would be taken away because they had been found cheating. Almost all the warnings turned out to be wrong.

There are two issues here. One is administrative: how much human judgement can be used to overrule the algorithms. Local councils insisted that analytics results will not necessarily lead to actions. Privacy activists disagreed. “Whilst it’s advertised as being able to help you make a decision, in reality it replaces the human decision. You have that faith in the computer that it will always be right,” one privacy advocacy group told Sky News. Researchers from Cardiff University also found that “there was hardly any oversight in this area.” Over-enthusiastic intervention, for example taking children away from their families in circumstances that are not absolutely necessary, can be traumatic to the children’s development. Controversies of this kind have been long and hard debated in places like Norway, Sweden, and Finland.

The other is how accurate the output from the algorithms is. The police in Kent believed that, among the cases pursued by their algorithm (which accounted for over a third of all cases on the force’s hands), 98% have been accurate. If this is true, then either Kent Police has a rather relaxed definition of “accuracy”, or it knows something the technology world does not. IBM’s Watson, one of the world’s most advanced AI technologies, has been used by Vodafone to help provide digital customer service. It has won Vodafone prizes and was hailed as a big AI success by IBM during MWC 2019. Watson’s success rate at Vodafone was 68%.

Late last year the Financial Times reported that one of China’s most ambitious financial services firms, Ant Financial, which is affiliated to Alibaba, has never used its credit scoring system to make lending decisions, despite it being four years in the making and having access to billions of data points in the Alibaba ecosystem. “There was a difference between ‘big data’ and ‘strong data’, with big data not always providing the most relevant information for predicting behaviour,” an executive from Ant Financial told the FT. A think-tank analyst put it more succinctly: “Someone evading taxes might always pay back loans, someone who breaks traffic rules might not break other rules. So I don’t think there is a general concept of trustworthiness that is robust. Trustworthiness is very context specific.”

It is understandable that the UK police and local councils are increasingly relying on algorithms and machine learning, as they have been under severe spending cuts. The output of algorithms can be a helpful reference but should not be taken at face value. It is probably safer to admit that AI is simply not yet good enough to drive or guide decisions as important as policing, criminal investigation, or social worker intervention. Getting Vodafone’s customer service more accurate is a more realistic target. Even if the bot still failed to help you set your new phone up properly, you would not end up queuing at the foodbank, or have your children taken away for “crime prevention” purposes.

IBM Vodafone partnership wins its first clients

IBM and Vodafone announced during Mobile World Congress 2019 that their $550 million cloud and AI partnership has signed its first heavyweight clients.

SEAT, a Spanish sub-brand of the Volkswagen group, and KONE, a world-leading lift and escalator supplier from Finland, have become the first customers of the open cloud and AI technologies offered by the IBM and Vodafone Business partnership.

SEAT is going to use the cloud, AI, and 5G technologies to facilitate its transformation into a “mobility services provider”. KONE’s main interest is in the IoT domain. With the new technologies it aims to move its customer service from reactive to proactive and then predictive mode, as well as to improve the efficiency of its monitoring and repair operations.

The partnership between IBM and Vodafone Business was announced last month. Although billed as a “joint venture”, Michael Valocchi, IBM’s General Manager of the new venture, clarified to Telecoms.com that it is not a formal joint venture or a separate organization but an 8-year strategic commercial partnership and $550 million managed services agreement. IBM and Vodafone Business are going to put in equal amounts of investment.

“IBM’s partnerships with global telco companies like Vodafone will help speed up the deployment of 5G and provide easier access to new technologies such as AI, blockchain, edge computing and IoT,” said Valocchi in a statement. “This is because the promise of 5G doesn’t just depend on fiber, spectrum and gadgets, but on advanced levels of integration, automation, optimization and security across the ever more complex IT systems that companies are building in a bid to transform.”

“By providing the open cloud, connectivity and portable AI technologies that companies need to manage data, workloads and processes across the breadth of their IT systems, Vodafone and IBM are helping to drive innovation and transform user experiences across multiple industries – from retail to agriculture,” added Greg Hyttenrauch, Co-leader of the new venture for Vodafone Business.

The partnership will become operational in Q2 this year. IBM told Telecoms.com that by that time Vodafone Business customers will immediately have access to IBM’s entire hybrid cloud portfolio to optimise and enhance their current solutions. These solutions and services are not dependent on 5G. In the future, clients will benefit from new solutions and services that the new venture will develop, combining IBM’s multi-cloud, AI, analytics and blockchain with IoT, 5G, and edge computing from Vodafone.

Considering that Vodafone is going to start with a non-standalone approach to 5G, the use cases for verticals that demand extremely low latency will be hard to realise in the near future. The engineers at IBM’s stand also conceded that, although Watson can be deployed and trained to support many scenarios, the implementation of mission-critical cases will have to wait until an end-to-end 5G network is in place.

RCS is here to stay and doing well

RCS has been touted as a saviour ever since the value of SMS was destroyed by OTT messaging services, but without much success. It may finally have found its moment.

Mavenir, the software company, presented on day one of MWC 2019, promoting its rich communication solutions offered by Rakuten. The key benefits, or the main use cases where RCS can differentiate itself from OTT messaging, actually have less to do with taking consumers back to texting each other (P2P messaging) and more to do with communication between businesses and consumers (A2P messaging).

This view is corroborated by Infobip, a Croatia-based messaging platform that provides aggregated OTT messaging services (e.g. WhatsApp, LINE, Viber, KakaoTalk) for its corporate clients, which they can then use for customer service and CRM. However, the company told Telecoms.com that its dominant business, which has seen annual growth of between 30% and 40%, is SMS- and RCS-based services.

One of the use cases is helping businesses improve customer engagement. Although on feature comparison RCS is mostly playing catch-up with OTT messaging services, SMS and RCS trump OTTs in consumer trust. To quote Guillaume Le Mener, Mavenir’s SVP for Enterprise Business, RCS is a “clean channel”, not tarnished by the privacy scandals committed by Facebook and co, or the over-monetisation by others. Research shared by Mavenir showed 97% of SMS / RCS messages are opened within three minutes.

In one case, Infobip was hired by Twitter to re-engage inactive users, after the social media giant had failed in that mission with its earlier efforts through email. Thanks to its rich features, RCS messaging lets users explore the content directly. For users on phones not compatible with RCS, brands can choose to fall back on SMS with a web link. The results were much improved, owing largely to the capability to produce rich analytics, evaluate campaign effectiveness and make quick decisions on any changes needed.
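
The fallback flow itself is straightforward to illustrate. The sketch below uses hypothetical send_rcs / send_sms stubs rather than any real aggregator API; an actual campaign would go through a messaging platform such as Infobip’s rather than these stand-ins.

```python
# Illustrative RCS-first, SMS-fallback flow. send_rcs / send_sms are
# hypothetical stubs, not a real messaging aggregator API.

def send_rcs(msisdn: str, payload: dict) -> None:
    print(f"RCS to {msisdn}: {payload['card_title']}")  # rich card, buttons, read receipts

def send_sms(msisdn: str, text: str) -> None:
    print(f"SMS to {msisdn}: {text}")                   # plain text plus a web link

def notify(recipient: dict, payload: dict, web_link: str) -> str:
    """Send RCS where the handset supports it, otherwise fall back to SMS."""
    if recipient.get("rcs_capable"):
        send_rcs(recipient["msisdn"], payload)
        return "rcs"
    send_sms(recipient["msisdn"], f"{payload['teaser']} {web_link}")
    return "sms"

notify({"msisdn": "+447700900123", "rcs_capable": False},
       {"card_title": "We miss you", "teaser": "Come back and see what's new:"},
       "https://example.com/campaign")
```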

In addition to A2P messaging, RCS is also being used by brands to engage consumers in P2A, that is, consumers engaging directly with brands through messaging. On the brand side the service can be handled by bots, which will in turn need to be supported by AI and analytics, another business opportunity for RCS solution providers. With OTTs also actively moving into the P2A domain, this is again an area where operators need to establish a stronghold for RCS before it is too late.

For Rakuten, RCS may be particularly meaningful: coming from an internet service and MVNO background, Rakuten has a big range of digital services tied to a user’s Rakuten ID. RCS will be a key instrument for maintaining and strengthening customer engagement as it builds out its 5G network from the ground up.

HERE adds mobile operators to its monetisation map

The location and mapping service company HERE, in partnership with data analytics company Continual, launched two new data services, HERE Cellular Signals and HERE Traffic Analytics, aiming to increase its value for mobile operators in addition to the transport and autonomous car industries.

HERE Cellular Signals is generated by overlaying a radio map crowdsourced from its users on top of HERE’s in-house road map. The resulting mesh provides a snapshot of the network coverage, carrier presence, signal strength and bandwidth on a given road. HERE claims there are 250 million connected devices out there with HERE clients installed, and that the radio data (including cellular and Wi-Fi traces as well as GPS coordinates) is updated 800 million times a day, including 100 million times over cellular networks.
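
To illustrate the kind of aggregation such a product implies, the sketch below rolls crowdsourced signal samples up into a per-road-segment, per-carrier summary. The field names and the use of RSRP as the signal metric are our assumptions, not HERE’s actual schema.

```python
# Illustrative aggregation of crowdsourced signal samples; field names and
# the RSRP metric are assumptions, not HERE's published schema.
from collections import defaultdict
from statistics import mean

samples = [
    {"road_segment": "A10-km42", "carrier": "CarrierX", "rsrp_dbm": -95},
    {"road_segment": "A10-km42", "carrier": "CarrierX", "rsrp_dbm": -101},
    {"road_segment": "A10-km43", "carrier": "CarrierX", "rsrp_dbm": -118},
]

readings = defaultdict(list)
for s in samples:
    readings[(s["road_segment"], s["carrier"])].append(s["rsrp_dbm"])

for (segment, carrier), values in readings.items():
    print(f"{segment} / {carrier}: avg RSRP {mean(values):.0f} dBm over {len(values)} samples")
```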

If the combined solution proves robust enough, it can deliver benefits to mobile operators. Today, in order to gather reliable data from live networks, mobile operators or their suppliers still need to send out engineers to do drive tests with car-mounted or hand-held measurement equipment. Such data is critical for network and RF planning and optimization, quality evaluations, and competitive assessments. HERE Cellular Signals will not completely replace such tests, but it can reduce their frequency and geographical coverage, and in turn reduce mobile operators’ operating costs.

When it comes to HERE’s home territory, i.e. the transport and logistics industry of today and the autonomous, self-driving cars of tomorrow, HERE Cellular Signals can help fleets optimise their communication plans with the control centre based on the cellular network coverage and service plans along their routes. Connected vehicles need always-on connectivity to the cloud, to the road infrastructure and to other vehicles. A radio map like HERE Cellular Signals can therefore help connected car managers plan when to use online services and when to use offline services, or which roads to avoid so as to minimise the risk of dropped connections. This will be particularly critical when fully self-driving cars, which will demand end-to-end low-latency broadband connectivity such as 5G, come to the roads.
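
As a toy illustration of that planning step, the sketch below picks the route with the best average coverage from a hypothetical per-segment radio map (building on the aggregation sketch above); the scores and segment names are invented.

```python
# Toy route selection based on an assumed per-segment coverage score
# (0 = no coverage, 1 = excellent); all values are invented.
routes = {
    "route_a": ["A10-km42", "A10-km43"],
    "route_b": ["B27-km10", "B27-km11"],
}
coverage = {"A10-km42": 0.9, "A10-km43": 0.4, "B27-km10": 0.8, "B27-km11": 0.7}

def expected_coverage(segments: list) -> float:
    return sum(coverage.get(s, 0.0) for s in segments) / len(segments)

best = max(routes, key=lambda name: expected_coverage(routes[name]))
print(best)  # route_b: the lower risk of a dropped connection
```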

“Bandwidth is a limited and expensive resource,” said Aaron Mayfield, Senior Product Manager at HERE Technologies. “As data traffic soars and new demands are placed on cellular networks, bandwidth optimization will increasingly become a delicate balancing act. HERE Cellular Signals is a valuable resource to add to the toolbox of cellular carriers to help manage these challenges.”

HERE Traffic Analytics, on the other hand, uses the data gathered from the roads to provide visibility into road traffic patterns.

Both of HERE’s new products are integrated in the Mobility Experience Analytics solution marketed by Continual, an Israeli user data analytics and AI company.

“As 5G networks and always-online automated vehicles edge closer to reality, we’re seeing growing convergence between the mobile telecom and automotive markets,” said Michiel Verberg, Senior Manager Strategic Partners at HERE Technologies. “We’re excited that Continual’s existing deep relationships with MNOs coupled with our established automotive partnerships will provide us with a unique opportunity to better address this important evolving market.”

HERE also has a legacy of working extensively with mobile operators from its earlier life. It was part of Nokia until it was acquired by a consortium of German carmakers, including Audi, BMW, and Mercedes, back in 2015.

“Continual’s Mobility Experience Analytics solution re-defines the approach that mobile operators and automotive companies can adopt towards monitoring and improving the connected experience of car drivers, passengers and subscribers who are traveling,” said Assaf Aloni, CMO of Continual. “HERE’s impressive portfolio of automotive and network technologies is very synergistic with ours, and the partnership is enabling us to create even stronger solutions for Connected Mobility.”

The two companies will demo the new services at the upcoming Mobile World Congress.

Facial recognition is being used in China’s monitoring network

A publicly accessible database managed by a surveillance contractor showed China has used a full suite of AI tools to monitor its Uyghur population in the far west of the country.

Victor Gevers, a cyber security expert and a researcher at the non-profit GDI Foundation, found that a database managed by SenseNets, a Chinese surveillance company, and hosted on a China Unicom cloud platform, stored large quantities of tracking data on residents of the Xinjiang autonomous region in west China. The majority of those monitored belong to the Uyghur ethnic group. The data covered nearly 2.6 million people (2,565,724 to be precise), including personal information such as ID card details (issue and expiry dates, sex, ethnic group, home address, birthday, photo) as well as employer details, and the locations where they had been tracked (using facial recognition) in the previous 24 hours, during which time a total of 6,680,348 records were registered, according to Gevers.

Neither the scope nor the level of detail of the monitoring should be a surprise, given the measures used by China in that part of the country over the last two years. If there is anything embarrassing for the Chinese authorities and their contractors in this story, it is the total failure of data security: the database was not protected at all. By the time Gevers notified the administrators at SenseNets, it had been accessible to anyone for at least half a year, according to the access log. The database has since been secured, opened, and secured again. Gevers also found that the database was built on a pirated edition of Windows Server 2012. Police stations, hotels, and other service and business establishments were also found to have connected to the database.

This is a classic example of human error defeating security systems. Not too long ago, Jeff Bezos of Amazon sent intimate pictures to his female companion, which ended up in the wrong hands. This led to the BBC’s quip that Bezos was the weak link in cybersecurity for the world’s leading cloud service provider.

Like other technologies, facial recognition can be used by overbearing governments for monitoring purposes, breaking all privacy protections. But it can also do tremendous good. EU citizens travelling between the UK and the Schengen Area have long been used to having their passports read by a machine and their faces matched by a camera. The AI technologies behind the experience have vastly simplified and expedited the immigration process. But sometimes, for some reason, the machine may fail to recognise a face. In that case, there is always an immigration officer at the desk to do a manual check.

Facial recognition, coupled with other technologies, for example blockchain, can also improve efficiency in industries like cross-border logistics. The long border between Sweden and Norway is largely open, despite the fact that a passenger or cargo vehicle travelling from one country to the other is technically moving between the inside of the EU (Sweden) and the outside of it (Norway). According to an article in The Economist, this frictionless transit requires digitalisation of documentation (of goods as well as of people), facial recognition (of drivers), sensors on the border (to read a code on the driver’s mobile phone), and automatic number-plate recognition (of the vehicles).

In cases like these, facial recognition, and AI in general, should be lauded. What the world should remain alert to is how the data is being used and who has access to it.