FBI and London Met land in hot water over facial recognition tech

The FBI and London Metropolitan Police force will be facing some awkward conversations this week over unauthorised and potentially illegal use of facial recognition technologies.

Starting in the US, the Washington Post has been handed records dating back almost five years which suggest the FBI and ICE (Immigration and Customs Enforcement) have been using DMV databases to build a surveillance network without the consent of citizens. The emails were obtained by Georgetown Law researchers through public records requests.

Although law enforcement agencies have normalised biometrics as part of investigations (think fingerprint or DNA evidence left at crime scenes), such traces are only useful for catching repeat offenders, since biometric databases are traditionally built from data on those who have previously been charged. In this case, however, the FBI and ICE have been accessing data on 641 million individuals, the vast majority of whom are innocent and were never consulted about the initiative.

In the Land of the Free, such hypocrisy is becoming almost second nature to national security and intelligence forces, who may well find themselves in some bother from a privacy perspective.

As it stands, there are no legislative or regulatory guidelines authorising the development of such a complex surveillance system, nor has there been any public consultation with the citizens of the US. This act-first, tell-later mentality is increasingly common in countries the US has designated as national enemies, yet there is little evidence authorities in the US have any more respect for the rights of their own citizens.

Heading across the pond to the UK, a report from the Human Rights, Big Data & Technology Project has identified ‘significant flaws’ in the way live facial recognition has been trialled in London by the Metropolitan Police force. The group, based at the University of Essex Human Rights Centre, suggests the deployments could be found illegal should they be challenged in court.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law,” said Dr Daragh Murray, who authored the report alongside Professor Peter Fussey.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR [live facial recognition]. Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset and was not an integral part of the process.”

The main gripe from the duo seems to be how the Met approached the trials. LFR was treated in a manner similar to traditional CCTV, failing to take into account the intrusive nature of facial recognition and its use of biometric processing. The Met did not consider the ‘necessary in a democratic society’ test established by human rights law, and therefore effectively ignored the impact on privacy rights.

There were numerous other issues too, including a lack of public consultation, the accuracy of the technology (only 8 of 42 matches were correct, a rate of roughly 19%), unclear criteria for deploying the technology, and doubts over the accuracy and relevance of the ‘watchlist’ of suspects. The main concern from the University’s research team, however, was that only the technical aspects of the trials were considered, not the impact on privacy.

There is a common theme in both of these instances: the authorities supposedly in place to protect our freedoms pay little attention to the privacy rights granted to us. There seems to be an ‘ends justify the means’ attitude, with little consideration for the human right to privacy. Such attitudes are exactly what the US and UK claim to be eradicating when ‘freeing’ citizens of oppressive regimes abroad.

What is perhaps most concerning about these stories is the speed at which the technologies are being implemented. There has been little public consultation on the appropriateness of these technologies, or on whether the general public is prepared to sacrifice privacy rights in the pursuit of national security. Given the intrusive nature of facial recognition, authorities should not be allowed to make this decision on behalf of the general public, especially when there is so much precedent for abuse and privacy is such a hot topic following scandals in private industry.

Of course, there are examples of the establishment slowing down progress to give time for these considerations. In San Francisco, the city’s Board of Supervisors has made it illegal for forces to implement facial recognition technologies unless approval has been granted. The police force would have to demonstrate stringent justification, accountability systems and safeguards for privacy rights.

In the UK, Dr Murray and Professor Fussey are calling for a pause on the implementation and trialling of facial recognition technologies until the impact on privacy rights, and the trade-offs involved, are fully understood.

Facial recognition technologies are becoming incredibly useful for access and authentication, but there need to be some serious conversations about the privacy implications of using the tech for surveillance and police enforcement. At the moment, it seems to be nothing but an afterthought for police forces and intelligence agencies, an incredibly worrying and dangerous attitude to have.

EFF to testify in support of California facial recognition technology ban

Last month, the City of San Francisco banned law enforcement agencies from using facial recognition software in cameras, and now the issue has been escalated to the State Senate.

While this is still only a minor thorn in the side of those who have complete disregard for privacy principles, it has the potential to swell into a major debate. There have been numerous trials around the world in an attempt to introduce the invasive technology, but no-one has actually stopped to have a public debate as to whether the disembowelling of privacy rights should be so easily facilitated.

After the City of San Francisco passed its rules (officials voted 8-1 in support of the ban), the issue was escalated to State level. SB 1215 is now being considered by State legislators, with the Senate Committee on Public Safety conducting a review of the pros and cons.

Numerous organizations have come out in support of the bill’s progress, while the official organizations representing law enforcement agencies at State level are, of course, attempting to block it. As part of the review process, EFF Grassroots Advocacy Organizer Nathan Sheard will testify in front of the California Senate Public Safety Committee later today [June 11].

The issue being debated here is quite simple: should the police be allowed to use such invasive surveillance technologies, potentially violating citizens’ right to privacy, without their knowledge or consent? Many laws are being passed to give citizens more control over their personal data in the digital economy, but with such surveillance technologies, those same citizens may have no idea their images are being collected, analysed and stored by the State.

In what should be viewed as an absolutely incredible instance of negligence and irresponsible behaviour, numerous police forces around the world have moved forward with implementing these technologies without in-depth public consultation. Conspiracy theorists will have penned various nefarious outcomes for such data, and underhanded government and police actions like this lend support to the first step of their theories.

The City of San Francisco, the State of California and the EFF, as well as the dozens of other agencies challenging deployment of the technology, are quite right to slow progress. The introduction of facial recognition software should be challenged, debated and scrutinised. Free rein should not be given to police forces and intelligence agencies; they have already shown themselves to be untrustworthy, and have lost the right to play around with invasive technologies without public debate.

“This bill declares that facial recognition and other biometric surveillance technology pose unique and significant threats to the civil rights and civil liberties of residents and visitors,” the proposed bill states.

“[the bill] Declares that the use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights. [the bill] States that this technology also allows people to be tracked without consent and would also generate massive databases about law-abiding Californians and may chill the exercise of free speech in public places.”

Under existing laws, there seems to be little resistance to implementing these technologies, aside from the loose definition of ‘best practice’. This would not be considered a particularly difficult hurdle to overcome, such is the nuanced nature of ‘best practice’. Considering the negative implications of the technology, more red tape should be introduced, forcing the police and intelligence agencies to provide suitable levels of justification and accountability.

Most importantly, there are no requirements for police forces or intelligence agencies to seek approval from the relevant legislative body to deploy the technology. Permission is needed to acquire cellular communications interception technology, in order to protect the civil rights and civil liberties of residents and visitors. The same rights are being challenged with facial recognition software in cameras, but no permissions are required.

This is of course not the first sign of resistance to facial recognition technologies. In January, 85 pro-privacy organizations, charities and influencers wrote to Amazon, Google and Microsoft requesting the firms pledge not to sell the technology to police forces or intelligence agencies. The use of such data by enforcement agencies in countries like China appears to have instilled fear in these organizations.

The accuracy of the technology has also been called into question. Although the tech giants claim AI is improving accuracy every day, last year the American Civil Liberties Union produced research which suggested a 5% error rate. The research found that 28 members of Congress had been falsely identified as people who had been arrested; with 535 members in total, that works out at just over 5%.

Interestingly enough, critics also claim the technology violates the Fourth Amendment of the US Constitution. It has already been established that police demanding identification without suspicion violates the amendment, and the American Civil Liberties Union argues such technologies are effectively doing the same thing.

It is worth noting that a total ban is highly unlikely to be passed. Even the City of San Francisco has not gone that far; instead, it has introduced measures to ensure appropriate justification and proper storage of data. The key to the San Francisco rules is that they make it as difficult as possible to use the technologies haphazardly.

What we are most likely to see is bureaucracy. Red tape will be scattered all over the technology to ensure it is used in an appropriate and justified manner.

Accessibility is one of the issues which privacy campaigners are facing right now. Companies like New York-based Vuzix and NNTC in the UAE are making products which are not obviously used for surveillance and are becoming increasingly affordable. Software from companies like NEC is also becoming more available, giving the police more options. A landscape with affordable technology and no regulatory resistance paints a gloomy picture.

The introduction of more red tape might leave under-resourced and under-pressure police forces frustrated, but such is the potential invasion of privacy rights, and the consequences of abuse, that it is absolutely necessary. The quicker this technology is brought into the public domain and understood by the man on the street, the better.

San Francisco puts the brakes on facial recognition surveillance

The City of San Francisco has passed new rules which will significantly curb the abilities of public sector organisations to purchase and utilise facial recognition technologies.

Opinions on newly emerging surveillance technologies have varied drastically, with some pointing to the benefits in safety and efficiency for intelligence and police forces, while others have bemoaned the crippling effect they could have on civil liberties and privacy.

The new rules in San Francisco do not necessarily ban surveillance technologies entirely, but barriers to demonstrate justification have been significantly increased.

“The success of San Francisco’s #FacialRecognition ban is owed to a vast grassroots coalition that has advocated for similar policies around the Bay Area for years,” said San Francisco Supervisor Aaron Peskin.

The legislation will come into effect in 30 days’ time. From that point, no city department or contracting officer will be able to purchase equipment unless the Board of Supervisors has appropriated funds for such an acquisition. New processes will also be introduced, including a surveillance technology policy for the department which meets the demands of the Board, as well as a surveillance impact report.

The department would also have to produce an in-depth annual report which would detail:

  • How the technology was used
  • Details of each instance data was shared outside the department
  • Crime statistics

The impact report will have to include a huge range of information including all the forward plans on logistics, experiences from other government departments, justification for the expenditure and potential impact on privacy. The department may also have to consult public opinion, while it will have to create concrete policies on data retention, storage, reporting and analysis.

City officials are making it as difficult as possible to make use of such technologies, and considering the impact or potential for abuse, quite rightly so. As mentioned before, this is not a ban on next-generation surveillance technologies, but an attempt to ensure deployment is absolutely necessary.

The concerns surround privacy and potential violations of civil liberties, which were largely outlined in the wide-sweeping privacy reforms set forward by California Governor Jerry Brown last year. The rules are intended to spur an ‘informed public debate’ on the potential impacts on the rights guaranteed by the First, Fourth, and Fourteenth Amendments of the US Constitution.

Aside from the potential for abuse, it does appear City Officials and privacy advocates are concerned the technology could reinforce prejudices based on race, ethnicity, religion, national origin, income level, sexual orientation or political perspective. Many analytical technologies are based on the most likely scenario, leaning on stereotypical beliefs and potentially encouraging profiling, effectively removing the impartiality of judging each case on its individual factors.

While the intelligence and policing community will most likely view such conditions as a bureaucratic mess, they should absolutely be viewed as necessary. We’ve already seen the implementation of such technologies without public debate and scrutiny, a drastic step considering the potential consequences.

Although the technology is not necessarily new (think of border control at airports), perhaps the rollout in China has swayed opinion. When an authoritarian state like China, whose political and societal values conflict with those of the US, implements such technologies, some will begin to ask what the nefarious impact of deployment actually is.

In February, a database emerged demonstrating China has used a full suite of AI tools to monitor its Uyghur population in the far west of the country. This could have been a catalyst for the rules.

That said, the technology is also far from perfect. Police forces across the UK have been trialling facial recognition and data analytics technologies with varied results, while at least 53 UK local councils and 45 of the country’s police forces rely heavily on computer algorithms to assess the risk of crimes against children and of people cheating on benefits.

In May last year, the South Wales Police Force had to defend its decision to trial NEC facial recognition software during the 2017 Champions League Final after it was revealed only 8% of the identifications proved to be accurate.

It might be viewed by some as bureaucracy for the sake of bureaucracy but considering the potential for abuse and damage to privacy rights, such administrative barriers are critical. More cities should take the same approach as San Francisco.

Facial recognition is being used in China’s monitoring network

A publicly accessible database managed by a surveillance contractor showed China has used a full suite of AI tools to monitor its Uyghur population in the far west of the country.

Victor Gevers, a cyber security expert and researcher at the non-profit GDI Foundation, found that a database managed by SenseNets, a Chinese surveillance company, and hosted on the China Unicom cloud platform, stored large quantities of tracking data on residents of the Xinjiang autonomous region in western China, the majority of them from the Uyghur ethnic group. The data covered nearly 2.6 million people (2,565,724 to be precise) and included personal information such as ID card details (issue and expiry dates, sex, ethnic group, home address, birthday, photo) and employer details, as well as the locations where individuals had been tracked (using facial recognition) over the previous 24 hours, a period in which a total of 6,680,348 records were registered, according to Gevers.

Neither the scope nor the level of detail of the monitoring should be a surprise, given the measures used by China in that part of the country over the last two years. If there is anything embarrassing for the Chinese authorities and their contractors in this story, it is the total failure of data security: the database was not protected at all. By the time Gevers notified the administrators at SenseNets, it had been accessible to anyone for at least half a year, according to the access log. The database has since been secured, opened, and secured again. Gevers also found that the database was built on a pirated edition of Windows Server 2012, and that police stations, hotels, and other service and business establishments had connected to it.

This is a classic example of human error defeating security systems. Not too long ago, Jeff Bezos of Amazon sent intimate pictures to his female companion, and they ended up in the wrong hands, leading to the BBC’s quip that Bezos was the weak link in cybersecurity for the world’s leading cloud service provider.

Like other technologies, facial recognition can be used by overbearing governments for monitoring purposes, breaking all privacy protections. But it can also do tremendous good. EU citizens travelling between the UK and the Schengen Area have long been used to having their passports read by a machine and their faces matched by a camera. The AI technologies behind the experience have vastly simplified and expedited the immigration process. But sometimes, for whatever reason, the machine may fail to recognise a face, in which case there is always an immigration officer at the desk to do a manual check.

Facial recognition, coupled with other technologies such as blockchain, can also improve efficiency in industries like cross-border logistics. The long border between Sweden and Norway is largely open, despite the fact that a passenger or cargo vehicle travelling from one country to the other is technically moving between the inside of the EU (Sweden) and the outside (Norway). According to an article in The Economist, this frictionless transit requires digitalisation of documentation (covering goods as well as people), facial recognition (of drivers), sensors on the border (to read a code on the driver’s mobile phone), and automatic number-plate recognition (of the vehicles).

In cases like these, facial recognition, and AI in general, should be lauded. What the world should be alert to is how the data is being used and who has access to it.


China’s social credit system set to kick off in Beijing in 2020

The Chinese state wants to control its citizens via a system of social scoring that punishes behaviour it doesn’t approve of.

This initiative has been widely reported, including an excellent piece from ABC Australia, but this marks one of the first times a specific timescale has been attributed to it. Bloomberg reports that Beijing, China’s capital city, plans to implement the social credit system by the end of 2020, which will affect 22 million citizens.

The full plan has been published on a Chinese government website, and we currently have our Beijing bureau sifting through it to bring you our own take on the primary material. But for the time being we’re relying on Bloomberg’s account, which highlights just how sinister this sort of thing is.

People who accumulate higher social ‘scores’, the rules and algorithms for which are presumably opaque, subjective and nebulous, get access to special privileges, while those who fall foul of the system will apparently be unable to move even a single step. This is hopefully at least a bit hyperbolic, but it does indicate that a lot of the sanctions attached to a low score focus on the ability to travel.

Mobile technologies, including smartphones, social media and facial recognition, will clearly play a big part in this Orwellian social manipulation strategy. The fact that our every action, or even inaction, now leaves a permanent digital fingerprint makes this sort of thing possible in a way it never has been before. If you want a further sense of quite how seamlessly it could metastasize beyond China, watch the episode of Black Mirror called Nosedive.


South Wales Police facial recognition software boasts 8% success rate

The South Wales Police Force is defending its decision to trial NEC facial recognition software during last year’s Champions League Final after it was revealed that only 8% of the identifications proved to be accurate.

The project was announced last year as the South Wales Police Force outlined plans to use NEC’s NeoFace Watch facial recognition software platform to increase the efficiency of police work during a weekend which saw 170,000 football fans in Cardiff. The promise of the technology was to identify persons of interest on pre-determined watchlists in real-time, with data being collected using CCTV cameras mounted on a number of police vehicles.

While the software was used at a number of different events in the capital, the Champions League Final got the most attention. As you can see from the table below, none of the results are particularly flattering for the South Wales Police or NEC, the firm which provided the technology to power the failed project, with the overall success rate at just 8%.

Event                        True Positives  False Positives  Success (%)
Champions League Final       173             2,297            8%
Elvis Festival               10              7                58%
Operation Fulcrum            5               10               33%
Anthony Joshua Fight         5               46               9%
Wales vs Australia Rugby     6               42               12.5%
Wales vs Georgia Rugby       1               2                33%
Wales vs New Zealand Rugby   3               9                25%
Wales vs South Africa Rugby  5               18               21%
Kasabian Concert             4               3                57%
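
To make the arithmetic explicit, the success rate here is simply true positive identifications as a share of all identifications made (true plus false positives). The minimal Python sketch below reproduces the percentages from the raw counts; one or two rows come out slightly differently to the published figures (the Champions League rate computes to 7.0%, for example), which presumably reflects rounding in the original reporting.

```python
# Success rate = true positives / (true positives + false positives),
# using the South Wales Police figures reported in the table above.
events = {
    "Champions League Final": (173, 2297),
    "Elvis Festival": (10, 7),
    "Operation Fulcrum": (5, 10),
    "Anthony Joshua Fight": (5, 46),
    "Wales vs Australia Rugby": (6, 42),
    "Wales vs Georgia Rugby": (1, 2),
    "Wales vs New Zealand Rugby": (3, 9),
    "Wales vs South Africa Rugby": (5, 18),
    "Kasabian Concert": (4, 3),
}

for event, (tp, fp) in events.items():
    print(f"{event}: {tp / (tp + fp):.1%}")

# Overall: 212 true positives out of 2,646 identifications, roughly 8%
total_tp = sum(tp for tp, _ in events.values())
total_fp = sum(fp for _, fp in events.values())
print(f"Overall: {total_tp / (total_tp + total_fp):.1%}")
```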

While it is completely understandable that there will be flaws in trials and POCs, this demonstration of outright failure makes you question whether the software should have been released from the lab in the first place.

Governments around the world are seemingly becoming less trusting of their own citizens on a daily basis, meaning more intrusive and secretive means of monitoring individuals are likely to become more common. Politicians and spooks around the world must have been watching these trials with some interest, and such a catastrophic failure of the technology is a very worrying sign.

Technology companies like NEC will be under pressure to produce platforms such as NeoFace Watch as quickly as possible as governments continuously look to step up surveillance activities. This pressure might result in platforms being launched too early, before enough stress tests have been run. That would certainly seem to be the explanation here, though South Wales Police (and presumably NEC) have blamed poor images supplied by UEFA, Interpol and other partner agencies. That said, the following extract from the NEC website seems to contradict this statement:

“NEC NeoFace technology’s strength lies in its tolerance of poor quality. Highly compressed surveillance videos and images, previously considered of little to no value, are now usable evidence and leading to higher rates of positive identification. With its proven ability to match low resolution facial images, including images with resolutions down to just 24 pixels between the eyes, NEC’s NeoFace technology outperforms all other face recognition systems in matching accuracy.”

It does seem the world has accepted its fate when it comes to Big Brother and eyes in the sky, but companies like NEC need to step up their game. Such technologies are likely to play a role in the trials and potential convictions of individuals in the future, so accuracy needs to be as high as possible.

EPIC pokes FTC over Facebook facial recognition techniques

The Electronic Privacy Information Center (EPIC) is urging the FTC to investigate whether Facebook’s use of facial recognition technologies contradicts a consent order the firm signed in 2011.

EPIC is leading a coalition of consumer groups against the social media giant on the grounds that the use of facial recognition software violates users’ rights to privacy. While there is a lot of nuanced language from all sides, EPIC argues that Facebook has not sought the permission of users when developing these technologies, and is therefore breaching privacy rules and ethics.

“The scanning of facial images without express, affirmative consent is unlawful and must be enjoined,” the group has said.

Back in 2011, Facebook found itself in hot water over its privacy practices and whether it was living up to the promises made to consumers. EPIC’s complaints at the time were supported by an FTC investigation and the firm was forced to sign a declaration which stated it would take privacy more seriously.

The issue here is the invasiveness of facial recognition software. While other biometric authentication technologies require the consent of the user, as well as proactive engagement (you have to put your finger on the scanner for example), facial recognition can be effectively used without the knowledge or approval of the user. It opens up quite an argument when it comes to the proper and ethical use of the technology as there will be nefarious actors who will have reprehensible intentions. And we do not exclude governments from this last statement.

While EPIC will argue the use of this technology is a violation of the consent agreement Facebook signed in 2011, the dreaded wiggle room is present again. The application of facial recognition, and its consequences for user privacy, has not yet been discussed from a regulatory perspective in any meaningful depth. It is a grey area which the technology companies are excellent at exploiting, but, to be fair, it is legal until rules are written explicitly forbidding the practice.

Consumer groups like EPIC do have a useful place in the world, though a lot of the time they seem to be kicking up a fuss over not much, simply providing friction to progress over an almost non-existent issue which no-one really cares about. That said, with the current headlines, there is no shortage of drama for the groups to point their fingers at.