FBI and London Met land in hot water over facial recognition tech

The FBI and London Metropolitan Police force will be facing some awkward conversations this week over unauthorised and potentially illegal use of facial recognition technologies.

Starting in the US, the Washington Post has been handed records dating back almost five years which suggest the FBI and ICE (Immigration and Customs Enforcement) have been using DMV databases to build a surveillance network without the consent of citizens. The emails were obtained by Georgetown Law researchers through public records requests.

Although law enforcement agencies have normalised biometrics as part of investigations nowadays (think fingerprint or DNA evidence left at crime scenes), such traces are only useful for catching repeat offenders. Biometric databases are built from data obtained from those who have previously been charged, but in this case the FBI and ICE have been accessing data on 641 million individuals, the vast majority of whom are innocent and were never consulted about the initiative.

In the Land of the Free, such hypocrisy is becoming almost second nature to national security and intelligence forces, who may well find themselves in some bother from a privacy perspective.

As it stands, there are no legislative or regulatory guidelines authorising the development of such a complex surveillance system, nor has there been any public consultation with the citizens of the US. This act-first, tell-later mentality is something increasingly common in countries the US has designated as national enemies, though there is little evidence authorities in the US have any more respect for the rights of their own citizens.

Heading across the pond to the UK, a report from the Human Rights, Big Data & Technology Project has identified ‘significant flaws’ with the way live facial recognition has been trialled in London by the Metropolitan Police force. The group, based out of the University of Essex Human Rights Centre, suggests it could be found to be illegal should it be challenged in court.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law,” said Dr Daragh Murray, who authored the report alongside Professor Peter Fussey.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR [live facial recognition]. Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset and was not an integral part of the process.”

The main gripe from the duo here seems to be how the Met approached the trials. LFR was treated in a manner similar to traditional CCTV, failing to take into account the intrusive nature of facial recognition and its use of biometric processing. The Met did not consider the ‘necessary in a democratic society’ test established by human rights law, and therefore effectively ignored the impact on privacy rights.

There were also numerous other issues, including a lack of public consultation, the poor accuracy of the technology (only 8 of 42 matches were correct), unclearly defined criteria for deploying the technology, and doubts over the accuracy and relevance of the ‘watchlist’ of suspects. The main concern from the University’s research team, however, was that only the technical aspects of the trials were considered, not the impact on privacy.
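To put that accuracy figure in perspective, here is a quick back-of-the-envelope calculation using only the 8-of-42 figure reported above:

```python
# Figures as reported for the Met's LFR trials:
# 42 matches flagged by the system, of which only 8 were correct.
correct_matches = 8
total_matches = 42

precision = correct_matches / total_matches
false_positive_share = 1 - precision

print(f"Correct matches: {precision:.1%}")               # 19.0%
print(f"Incorrect matches: {false_positive_share:.1%}")  # 81.0%
```

In other words, roughly four out of every five matches flagged during the trials pointed at the wrong person.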

There is a common theme in both of these instances: the authorities supposedly in place to protect our freedoms pay little attention to the privacy rights granted to us. There seems to be an ‘ends justify the means’ attitude, with little consideration for the human right to privacy. Such attitudes are exactly what the US and UK claim to be eradicating when ‘freeing’ citizens of oppressive regimes abroad.

What is perhaps most concerning about these stories is the speed at which the technologies are being implemented. There has been little public consultation on their appropriateness, or on whether the general public is prepared to sacrifice privacy rights in the pursuit of national security. Given the intrusive nature of facial recognition, authorities should not be allowed to make this decision on behalf of the general public, especially when there is so much precedent for abuse and privacy is such a hot topic following scandals in private industry.

Of course, there are examples of the establishment slowing down progress to give time for these considerations. In San Francisco, the city’s Board of Supervisors has made it illegal for forces to implement facial recognition technologies unless approval has been granted. The police force would have to demonstrate stringent justification, accountability systems and safeguards to privacy rights.

In the UK, Dr Murray and Professor Fussey are calling for a pause on the implementation or trialling of facial recognition technologies until the impact on privacy rights, and the trade-offs involved, have been fully understood.

Facial recognition technologies are becoming incredibly useful when it comes to access and authentication, though there need to be some serious conversations about the privacy implications of using the tech for surveillance and police enforcement. At the moment it seems to be nothing but an afterthought for police forces and intelligence agencies, an incredibly worrying and dangerous attitude to hold.

EFF to testify in support of California facial recognition technology ban

Last month, the City of San Francisco banned law enforcement agencies from using facial recognition software in cameras, and now the issue has been escalated to the State Senate.

While this is still only a minor thorn in the side of those with complete disregard for privacy principles, it has the potential to swell into a major debate. There have been numerous trials around the world attempting to introduce the invasive technology, but no one has actually stopped to hold a public debate on whether the disembowelling of privacy rights should be so easily facilitated.

After the City of San Francisco passed its rules, with officials voting 8-1 in support of the ban, the issue was escalated to State level. SB 1215 is now being considered by State legislators, with the Senate Committee on Public Safety reviewing the pros and cons.

Numerous organizations have come out to support progress of the bill, and of course the official organizations representing law enforcement agencies at State level are attempting to block it. As part of the review process, EFF Grassroots Advocacy Organizer Nathan Sheard will testify in front of the California Senate Public Safety Committee later today [June 11].

The issue being debated here is quite simple: should the police be allowed to use such invasive surveillance technologies, potentially violating citizens’ right to privacy without their knowledge or consent? Many laws are being passed to give citizens more control of their personal data in the digital economy, but with such surveillance technologies, those citizens may have no idea their images are being collected, analysed and stored by the State.

In what should be viewed as an absolutely incredible instance of negligence and irresponsible behaviour, numerous police forces around the world have moved forward with these technologies without in-depth public consultation. Conspiracy theorists will have penned various nefarious outcomes for such data, and underhanded government and police actions like this do support the first step of their theories.

The City of San Francisco, the State of California and the EFF, as well as the dozens of other agencies challenging deployment of the technology, are quite right to slow progress. The introduction of facial recognition software should be challenged, debated and scrutinised. Free rein should not be given to police forces and intelligence agencies; they have already shown themselves to be untrustworthy. They have lost the right to play around with invasive technologies without public debate.

“This bill declares that facial recognition and other biometric surveillance technology pose unique and significant threats to the civil rights and civil liberties of residents and visitors,” the proposed bill states.

“[the bill] Declares that the use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights. [the bill] States that this technology also allows people to be tracked without consent and would also generate massive databases about law-abiding Californians and may chill the exercise of free speech in public places.”

Under existing laws there seems to be little resistance to implementing these technologies, aside from the loose definition of ‘best practice’. Such is the nuanced nature of ‘best practice’ that it would not be considered a particularly difficult hurdle to overcome. Considering the negative implications of the technology, more red tape should be introduced, forcing the police and intelligence agencies to provide suitable levels of justification and accountability.

Most importantly, there are no requirements for police forces or intelligence agencies to seek approval from the relevant legislative body to deploy the technology. Permission is needed to acquire cellular communications interception technology, in order to protect the civil rights and civil liberties of residents and visitors. The same rights are being challenged with facial recognition software in cameras, but no permissions are required.

This is of course not the first sign of resistance to facial recognition technologies. In January, 85 pro-privacy organizations, charities and influencers wrote to Amazon, Google and Microsoft requesting the firms pledge not to sell the technology to police forces or intelligence agencies. It does appear the use of the data by enforcement agencies in countries like China has put the fear into these organizations.

The accuracy of the technology has also been called into question. Although the tech giants are claiming AI is improving the accuracy every day, last year the American Civil Liberties Union produced research which suggested a 5% error rate. The research claimed 28 members of Congress had been falsely identified as people who had been arrested.
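As a rough sanity check on those numbers, the figures line up if we assume the ACLU test scanned all 535 members of Congress (a denominator the article does not state, so it is an assumption here):

```python
# ACLU figure reported above: 28 members of Congress falsely matched
# against arrest photos.
# Assumed denominator (not stated in the article): all 535 members of Congress.
false_matches = 28
members_scanned = 535

error_rate = false_matches / members_scanned
print(f"False-match rate: {error_rate:.1%}")  # 5.2%, consistent with the ~5% cited
```

Under that assumption, the 28 false identifications work out to just over the 5% error rate the research suggested.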

Interestingly enough, critics also claim the technology violates the Fourth Amendment of the US Constitution. It has already been established that police demanding identification without suspicion violates this amendment, and the American Civil Liberties Union argues such technologies are effectively doing the same thing.

It is worth noting that a total ban is highly unlikely to be passed. Even the San Francisco rules are not an absolute prohibition; the city has introduced measures to ensure appropriate justification and that data is stored properly. The key to the San Francisco rules is that they make it as difficult as possible for the technologies to be used haphazardly.

What we are most likely to see is bureaucracy: red tape scattered all over the technology to ensure it is used in an appropriate and justified manner.

Accessibility is one of the issues which privacy campaigners are facing right now. Companies like New York-based Vuzix and NNTC in the UAE are making products which are not obviously used for surveillance and are becoming increasingly affordable. Software from companies like NEC is also becoming more available, giving the police more options. A landscape with affordable technology and no regulatory resistance paints a gloomy picture.

The introduction of more red tape might frustrate under-resourced and under-pressure police forces, but such is the potential invasion of privacy rights, and the consequence of abuse, that it is absolutely necessary. The quicker this technology is brought into the public domain and understood by the man on the street, the better.

Judge says no to police forcing phone unlocks with face

A judge in the District Court for the Northern District of California has denied the police a warrant which would force suspects to open their phones through biometric authentication.

While it might seem like somewhat of an unusual scenario, we’re sure many of you are imagining a man pinned to the ground with a phone being waved in his face, it is important to set precedent in these matters. Just as law enforcement agencies cannot be granted a warrant forcing an individual to hand over a password, suspects cannot be forced to unlock devices through biometric sensors, according to the ruling.

The case itself focuses on two individuals suspected of attempting to extort money from a third person through Facebook Messenger, threatening to release an embarrassing video of that person should the funds not be transferred.

Northern California Federal District Judge Kandis Westmore ruled the authorities did not have probable cause for the warrant, perhaps because the messages and threats in question could be read through the third person’s account, and that the request was too broad. This is another example of authorities overreaching and failing to be specific, leaving too much room for potential abuse.

While this case might sound odd, the world should be prepared for more such rulings in the future.

“The challenge facing the courts is that technology is far outpacing the law,” the ruling from Judge Westmore states. “In recognition of this reality, the United States Supreme Court recently instructed courts to adopt rules that ‘take account of more sophisticated systems that are already in use or in development’.

“Courts have an obligation to safeguard constitutional rights and cannot permit those rights to be diminished due to the advancement of technology.”

In short, the rules and regulations of the land are not in keeping with today’s technology and society, but this does not mean law enforcement authorities can take advantage of the grey areas. This is perhaps an obvious statement to make, but it does hammer home the need for reform to ensure rules and regulations are contextually relevant.

While progress has been slow, there have been a few breakthroughs for privacy advocates in recent months. Last June, the US Supreme Court ruled in Carpenter v. United States that the collection of mobile location data on individuals without a warrant was a violation of data privacy and the Fourth Amendment of the US Constitution.

The issue which many courts are facing is precedent. Lawyers are arguing for certain cases and warrants using precedent which is from another era. Theoretically, these rules can be applied, but when you consider the drastic and fundamental changes which have occurred in the communications world, you have to wonder whether anything from previous decades is relevant anymore.

As Judge Westmore points out, technology is vastly outpacing the pace of change in public sector institutions. This presents a massive risk of abuse, but slowing innovation is not a reasonable option. A tricky catch-22.