San Francisco puts the brakes on facial recognition surveillance

The City of San Francisco has passed new rules which will significantly curb the ability of public sector organisations to purchase and utilise facial recognition technologies.

Opinions on newly emerging surveillance technologies have varied drastically, with some pointing to the safety and efficiency benefits for intelligence and police forces, while others have bemoaned the crippling effect they could have on civil liberties and privacy.

The new rules in San Francisco do not ban surveillance technologies outright, but they significantly raise the bar for departments to justify acquiring and using them.

“The success of San Francisco’s #FacialRecognition ban is owed to a vast grassroots coalition that has advocated for similar policies around the Bay Area for years,” said San Francisco Supervisor Aaron Peskin.

The legislation will come into effect in 30 days’ time. From that point, no city department or contracting officer will be able to purchase such equipment unless the Board of Supervisors has appropriated funds for the acquisition. New processes will also be introduced, including a surveillance technology policy for the department which meets the demands of the Board, as well as a surveillance impact report.

The department will also have to produce an in-depth annual report detailing:

  • How the technology was used
  • Details of each instance in which data was shared outside the department
  • Crime statistics

The impact report will have to include a huge range of information, including forward plans on logistics, experiences from other government departments, justification for the expenditure and the potential impact on privacy. The department may also have to consult public opinion, and it will have to create concrete policies on data retention, storage, reporting and analysis.

City officials are making it as difficult as possible to make use of such technologies, and considering the impact or potential for abuse, quite rightly so. As mentioned before, this is not a ban on next-generation surveillance technologies, but an attempt to ensure deployment is absolutely necessary.

The concerns centre on privacy and potential violations of civil liberties, which were largely outlined in the wide-sweeping privacy reforms set forward by California Governor Jerry Brown last year. The rules are intended to spur an ‘informed public debate’ on the potential impacts on the rights guaranteed by the First, Fourth, and Fourteenth Amendments of the US Constitution.

Aside from the potential for abuse, city officials and privacy advocates appear concerned that the technology could entrench prejudices based on race, ethnicity, religion, national origin, income level, sexual orientation or political perspective. Many analytical technologies optimise for the most likely scenario, leaning on stereotypes and encouraging profiling, effectively removing the impartiality of judging each case on its individual merits.

While the intelligence and policing community will most likely view such conditions as a bureaucratic mess, they should absolutely be viewed as necessary. We’ve already seen such technologies implemented without public debate or scrutiny, a drastic step considering the potential consequences.

Although the technology is not necessarily new (think of border control at airports), perhaps the rollout in China has swayed opinion. When an authoritarian state like China, whose political and societal values conflict with those of the US, implements such technologies, some will begin to ask how nefarious the impact of deployment could actually be.

In February, a database emerged demonstrating China has used a full suite of AI tools to monitor its Uyghur population in the far west of the country. This could have been a catalyst for the rules.

That said, the technology is also far from perfect. Police forces across the UK have been trialling facial recognition and data analytics technologies with varied results. At least 53 UK local councils and 45 of the country’s police forces are relying heavily on computer algorithms to assess the risk level of crimes against children, as well as to identify people cheating on benefits.

In May last year, the South Wales Police Force had to defend its decision to trial NEC facial recognition software during the 2017 Champions League Final after it was revealed that only 8% of the identifications proved to be accurate.

It might be viewed by some as bureaucracy for the sake of bureaucracy, but considering the potential for abuse and damage to privacy rights, such administrative barriers are critical. More cities should take the same approach as San Francisco.

Facial recognition is being used in China’s monitoring network

A publicly accessible database managed by a surveillance contractor showed China has used a full suite of AI tools to monitor its Uyghur population in the far west of the country.

Victor Gevers, a cyber security expert and researcher at the non-profit GDI Foundation, found that a database managed by SenseNets, a Chinese surveillance company, and hosted on China Unicom’s cloud platform, had stored large quantities of tracking data on residents of the Xinjiang autonomous region in west China, the majority of those monitored being Uyghurs. The data covered nearly 2.6 million people (2,565,724 to be precise) and included personal information such as ID card details (issue and expiry dates, sex, ethnic group, home address, birthday, photo) as well as employer details. It also recorded the locations where each person had been tracked (using facial recognition) over the previous 24 hours, during which time a total of 6,680,348 records were registered, according to Gevers.
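For a sense of scale, those figures work out to roughly 2.6 facial recognition sightings per tracked person in a single day. A quick back-of-the-envelope calculation (the two input numbers are from the paragraph above; the averaging itself is ours):

```python
# Back-of-the-envelope check on the figures reported by Gevers.
tracked_people = 2_565_724    # individuals in the SenseNets database
location_records = 6_680_348  # facial recognition hits logged in 24 hours

per_person_per_day = location_records / tracked_people
print(f"~{per_person_per_day:.1f} location records per person per day")
# -> ~2.6 location records per person per day
```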

Neither the scope nor the level of detail of the monitoring should be a surprise, given the measures used by China in that part of the country over the last two years. If there is anything embarrassing for the Chinese authorities and their contractors in this story, it is the total failure of data security: the database was not protected at all. By the time Gevers notified the administrators at SenseNets, it had been accessible to anyone for at least half a year, according to the access log. The database has since been secured, reopened, and secured again. Gevers also found that the database was built on a pirated edition of Windows Server 2012, and that police stations, hotels and other service and business establishments had connected to it.

This is a classic example of human error defeating security systems. Not too long ago, Jeff Bezos of Amazon sent intimate pictures to his female companion, which ended up in the wrong hands. This led to the BBC’s quip that Bezos was the weak link in cybersecurity for the world’s leading cloud service provider.

Like other technologies, facial recognition can be used by overbearing governments for monitoring purposes, breaking all privacy protections. But it can also do tremendous good. EU citizens travelling between the UK and the Schengen Area have long been used to having their passports read by a machine and their faces matched by a camera. The AI technologies behind the experience have vastly simplified and expedited the immigration process. But sometimes, for whatever reason, the machine may fail to recognise a face; in that case, there is always an immigration officer at the desk to do a manual check.
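The e-gate flow described above boils down to a match-or-escalate decision: compare the live camera image against the passport photo, and send the traveller to a human officer whenever the match confidence is too low. A minimal sketch of that logic (the threshold value and routing strings are illustrative assumptions, not any real gate’s configuration):

```python
# Illustrative match threshold; real deployments tune this per gate.
MATCH_THRESHOLD = 0.90

def route_traveller(match_score: float) -> str:
    """Decide whether an e-gate admits a traveller or escalates.

    match_score is assumed to come from a face-matching engine comparing
    the live camera image with the passport-chip photo, returning a
    confidence in [0, 1]; the engine itself is out of scope here.
    """
    if match_score >= MATCH_THRESHOLD:
        return "admit via e-gate"
    # A failed recognition is not a rejection: the traveller is simply
    # sent to the immigration officer at the desk for a manual check.
    return "escalate to manual check"

print(route_traveller(0.97))  # admit via e-gate
print(route_traveller(0.42))  # escalate to manual check
```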

Facial recognition, coupled with other technologies such as blockchain, can also improve efficiency in industries like cross-border logistics. The long border between Sweden and Norway is largely open, despite the fact that a passenger or cargo vehicle travelling from one country to the other is technically moving between the inside of the EU (Sweden) and the outside of it (Norway). According to an article in The Economist, frictionless transit requires digitised documentation (of goods as well as people), facial recognition (of drivers), sensors on the border (to read a code on the driver’s mobile phone) and automatic number-plate recognition (of the vehicles).
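Taken together, the components The Economist describes amount to a multi-signal clearance check: transit stays frictionless only while every independent signal agrees. A rough sketch of the idea (the names and the pass/divert logic are our own illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class CrossingSignals:
    documents_digitised: bool  # goods and driver paperwork filed electronically
    driver_face_matched: bool  # facial recognition of the driver
    phone_code_read: bool      # border sensor read the code on the driver's phone
    plate_recognised: bool     # automatic number-plate recognition of the vehicle

def clear_to_cross(s: CrossingSignals) -> bool:
    # Frictionless transit only while every signal checks out; any
    # failure would presumably divert the vehicle to manual inspection.
    return all([s.documents_digitised, s.driver_face_matched,
                s.phone_code_read, s.plate_recognised])

print(clear_to_cross(CrossingSignals(True, True, True, True)))   # True
print(clear_to_cross(CrossingSignals(True, False, True, True)))  # False
```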

In cases like these, facial recognition, and AI in general, should be lauded. What the world should be alert to is how the data is used and who has access to it.


China’s social credit system set to kick off in Beijing in 2020

The Chinese state wants to control its citizens via a system of social scoring that punishes behaviour it doesn’t approve of.

This initiative has been widely reported, including an excellent piece from ABC Australia, but this marks one of the first times a specific timescale has been attributed to it. Bloomberg reports that Beijing, China’s capital city, plans to implement the social credit system by the end of 2020, which will affect 22 million citizens.

The full plan has been published on a Chinese government website, and we currently have our Beijing bureau sifting through it to bring you our own take on the primary material. But for the time being we’re relying on Bloomberg’s account, which highlights just how sinister this sort of thing is.

People who accumulate higher social ‘scores’, the rules and algorithms for which are presumably opaque, subjective and nebulous, get access to special privileges, while those who fall foul of the system will apparently be unable to move even a single step. This is hopefully at least a bit hyperbolic, but it does indicate that a lot of the sanctions attached to a low score focus on the ability to travel.

Mobile technologies, including smartphones, social media and facial recognition, will clearly play a big part in this Orwellian social manipulation strategy. The fact that our every action, or even inaction, now leaves a permanent digital fingerprint makes this sort of thing possible in a way it never was before. If you want a further sense of quite how seamlessly it could metastasise beyond China, watch the Black Mirror episode Nosedive.


South Wales Police facial recognition software boasts 8% success rate

The South Wales Police Force is defending its decision to trial NEC facial recognition software during last year’s Champions League Final after it was revealed that only 8% of the identifications proved to be accurate.

The project was announced last year as the South Wales Police Force outlined plans to use NEC’s NeoFace Watch facial recognition software platform to increase the efficiency of police work during a weekend which saw 170,000 football fans in Cardiff. The promise of the technology was to identify persons of interest on pre-determined watchlists in real-time, with data being collected using CCTV cameras mounted on a number of police vehicles.
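In outline, real-time watchlist matching works by converting each captured face into a numerical embedding and comparing it against the embeddings of people on the watchlist, raising an alert when similarity crosses a threshold. A minimal sketch of that comparison (the cosine similarity measure, the 128-dimensional embeddings and the threshold are illustrative assumptions, not details of NEC’s NeoFace Watch):

```python
import numpy as np

# Illustrative alert threshold; real systems tune this to trade false
# positives against missed matches.
ALERT_THRESHOLD = 0.80

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def scan_face(face_embedding: np.ndarray, watchlist: dict) -> list:
    """Return watchlist identities whose similarity to the captured
    face exceeds the alert threshold."""
    return [name for name, ref in watchlist.items()
            if cosine_similarity(face_embedding, ref) >= ALERT_THRESHOLD]

# Toy example with random 128-dimensional embeddings standing in for
# the output of a real face-embedding model.
rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = watchlist["person_a"] + rng.normal(scale=0.1, size=128)  # noisy re-sighting
print(scan_face(probe, watchlist))  # expected: ['person_a']
```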

While the software was used at a number of different events in the capital, the Champions League Final got the most attention. As you can see from the table below, none of the results are particularly flattering for the South Wales Police or NEC, the firm which provided the technology to power the failed project, with the overall success rate at just 9% (the arithmetic behind the ‘success’ column is sketched just after the table).

| Event | True positive identifications | False positive identifications | Success (%) |
|---|---|---|---|
| Champions League Final | 173 | 2,297 | 8% |
| Elvis Festival | 10 | 7 | 58% |
| Operation Fulcrum | 5 | 10 | 33% |
| Anthony Joshua Fight | 5 | 46 | 9% |
| Wales vs Australia Rugby | 6 | 42 | 12.5% |
| Wales vs Georgia Rugby | 1 | 2 | 33% |
| Wales vs New Zealand Rugby | 3 | 9 | 25% |
| Wales vs South Africa Rugby | 5 | 18 | 21% |
| Kasabian Concert | 4 | 3 | 57% |
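For clarity, the ‘success’ figure is simply true positives as a share of all identifications made, i.e. TP / (TP + FP). Here is a quick check of that arithmetic against the counts above (note that a couple of the published percentages, including the Champions League figure, come out slightly lower under this formula, presumably due to rounding or a slightly different calculation in the source):

```python
# True/false positive counts from the South Wales Police figures above.
events = {
    "Champions League Final":      (173, 2_297),
    "Elvis Festival":              (10, 7),
    "Operation Fulcrum":           (5, 10),
    "Anthony Joshua Fight":        (5, 46),
    "Wales vs Australia Rugby":    (6, 42),
    "Wales vs Georgia Rugby":      (1, 2),
    "Wales vs New Zealand Rugby":  (3, 9),
    "Wales vs South Africa Rugby": (5, 18),
    "Kasabian Concert":            (4, 3),
}

for event, (tp, fp) in events.items():
    print(f"{event}: {100 * tp / (tp + fp):.1f}% of {tp + fp} identifications correct")

total_tp = sum(tp for tp, _ in events.values())
total_fp = sum(fp for _, fp in events.values())
print(f"Overall: {100 * total_tp / (total_tp + total_fp):.1f}%")
# The Champions League Final works out at ~7.0% and the overall rate at
# ~8.0% under this formula, slightly below the quoted figures.
```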

While it is completely understandable that there will be flaws in trials and POCs, this demonstration of outright failure makes you question whether the software should have been released from the lab in the first place.

Governments around the world are seemingly becoming less trusting of their own citizens on a daily basis, meaning more intrusive and secretive means of monitoring individuals are likely to become more common. Politicians and spooks around the world must have been watching these trials with some interest, and such a catastrophic failure of the technology is a very worrying sign.

Technology companies like NEC will be under pressure to produce platforms such as NeoFace Watch as quickly as possible as governments continuously look to step up surveillance activities. This pressure might result in platforms being launched too early, before enough stress tests have been run. That would certainly seem to be the explanation here, though South Wales Police (and presumably NEC) have blamed poor images supplied by UEFA, Interpol and other partner agencies. That said, the following extract from the NEC website seems to contradict this statement:

“NEC NeoFace technology’s strength lies in its tolerance of poor quality. Highly compressed surveillance videos and images, previously considered of little to no value, are now usable evidence and leading to higher rates of positive identification. With its proven ability to match low resolution facial images, including images with resolutions down to just 24 pixels between the eyes, NEC’s NeoFace technology outperforms all other face recognition systems in matching accuracy.”

It does seem the world has accepted its fate when it comes to Big Brother and eyes in the sky, but companies like NEC need to step up their game. Such technologies are likely to play a role in the trials and potential convictions of individuals in the future, so accuracy needs to be as high as possible.

EPIC pokes FTC over Facebook facial recognition techniques

The Electronic Privacy Information Center (EPIC) is urging the FTC to investigate whether Facebook’s use of facial recognition technologies violates a consent order the firm signed in 2011.

EPIC is leading a coalition of consumer groups against the social media giant on the grounds that the facial recognition software violates users’ rights to privacy. While there is a lot of nuanced language from all sides, EPIC argues that Facebook has not sought users’ permission when developing these technologies, and is therefore breaching privacy rules and ethics.

“The scanning of facial images without express, affirmative consent is unlawful and must be enjoined,” the group has said.

Back in 2011, Facebook found itself in hot water over its privacy practices and whether it was living up to the promises made to consumers. EPIC’s complaints at the time were supported by an FTC investigation and the firm was forced to sign a declaration which stated it would take privacy more seriously.

The issue here is the invasiveness of facial recognition software. While other biometric authentication technologies require the consent of the user, as well as proactive engagement (you have to put your finger on the scanner, for example), facial recognition can effectively be used without the user’s knowledge or approval. This opens up quite an argument over the proper and ethical use of the technology, as there will be nefarious actors with reprehensible intentions. And we do not exclude governments from that last statement.

While EPIC will argue the use of this technology is a violation of the consent agreement Facebook signed in 2011, the dreaded wiggle room is present again. The application of facial recognition and its consequences for user privacy have not yet been discussed from a regulatory perspective in any meaningful depth. It is a grey area which the technology companies are excellent at exploiting but, to be fair, it is legal until rules are written explicitly forbidding the practice.

Consumer groups like EPIC do have a useful place in the world, but a lot of the time they seem to be kicking up a fuss over not very much, simply providing friction to progress over an almost non-existent issue which no-one really cares about. That said, with the current headlines, there is no shortage of drama for the groups to point their fingers at.