Congress asks Amazon whether it is becoming a police snitch

The Subcommittee on Economic and Consumer Policy has written to Amazon asking the internet giant to explain partnerships between surveillance company Ring and local police departments.

Home security and surveillance products are becoming increasingly popular with consumers, though it appears the subcommittee is asking Amazon to explain the fine print. As with most products and services launched by Silicon Valley residents, Ring seems to be accompanied by legal jargon which few will understand and which may well compromise privacy and data protection principles.

“The Subcommittee on Economic and Consumer Policy is writing to request documents and information about Ring’s partnerships with city governments and local police departments, along with the company’s policies governing the data it collects,” the letter states.

“The Subcommittee is examining traditional constitutional protections against surveilling Americans and the balancing of civil liberties and security interests.”

The question the politicians seem to be asking is how compliant Ring will be in handing over information to law enforcement agencies or local government authorities, as well as what the fundamentals of the partnerships themselves are. Once again, it appears the technology industry is revelling in the grey areas of nuance and half-statements.

With Ring currently holding partnerships with more than 900 law enforcement and local government agencies, it is critically important that everything is above board. This is no longer just a quirky product adopted by a few individuals; it is potentially a scaled surveillance programme. The opportunity for abuse is present once again, giving Congress valid grounds to wade into the situation and start splashing.

Optimists might suggest Ring is being a good corporate citizen, aiding police and security forces where possible. Cynics, on the other hand, would question whether Amazon is attempting to create a private, for-profit surveillance network.

One area on which the Subcommittee would like some clarification is how compliant Ring would be in offering data to government agencies. Ring has said it would not turn over data unless it is “required to do so to comply with a legally valid and binding order”, though the wording of the terms of service seems to undermine this firm stance.

Ring may access, use, preserve and/or disclose your Content to law enforcement authorities, government officials and/or third parties, if legally required to do so or if we have a good faith belief that such access, use, preservation or disclosure is reasonably necessary to: (a) comply with applicable law, regulation, legal process or reasonable governmental request.

The final point of this clause, ‘reasonable governmental request’, is what should be considered worrying. This is unnecessarily vague and flexible language which could be used to justify a wide range of requests, or to explain away wrongdoing.

More often than not, politicians on such subcommittees are chasing a headline, but this seems to be a case where proper investigation is warranted. Law enforcement agencies and the internet giants have shown themselves on numerous occasions to be untrustworthy when given minimal oversight. And when you are talking about a topic as sensitive as data privacy, no blind trust should be afforded at all.

London Police push forward with controversial facial recognition tech

The London Metropolitan Police Service has announced it will begin the operational use of Live Facial Recognition (LFR) technology, despite there still being many critics and concerns.

The technology itself has come under criticism not only for poor performance when identifying individuals; critics have also suggested it should be deemed a violation of the privacy rights afforded to individuals in democratic societies. Despite this ongoing controversy, the London police force seems to think it has all the bases covered.

“This is an important development for the Met and one which is vital in assisting us in bearing down on violence,” said Assistant Commissioner Nick Ephgrave. “As a modern police force, I believe that we have a duty to use new technologies to keep people safe in London.

“We are using a tried-and-tested technology and have taken a considered and transparent approach in order to arrive at this point. Similar technology is already widely used across the UK, in the private sector. Ours has been trialled by our technology teams for use in an operational policing environment.”

The initiative will start in various London locations where the Met believes it will help locate the most serious offenders. The primary focus will be on knife and violent crime. It is unclear whether these deployments will be permanent at a given location, or whether the officers will be free to move around to other parts of the city.

As individuals pass the relevant cameras, facial maps will be compared to ‘watchlists’ created for specific areas. Should a match be confirmed, the officer will be prompted (not ordered) to approach the individual.
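For readers unfamiliar with how such matching works, the step described above is typically a nearest-neighbour comparison of face embeddings against a threshold. The sketch below is purely illustrative and is not the Met’s actual system; the two-dimensional embeddings, the 0.6 threshold and all names are assumptions for demonstration only.

```python
import numpy as np

def match_watchlist(face_embedding, watchlist, threshold=0.6):
    """Compare an observed face embedding against a watchlist and return
    the best-matching identity, or None if nothing clears the threshold."""
    best_name, best_score = None, threshold
    for name, reference in watchlist.items():
        # Cosine similarity between the camera capture and the stored reference
        score = np.dot(face_embedding, reference) / (
            np.linalg.norm(face_embedding) * np.linalg.norm(reference))
        if score > best_score:
            best_name, best_score = name, score
    # A confirmed match only prompts, not orders, an officer to approach
    return best_name

# Hypothetical embeddings for illustration only
watchlist = {"person_a": np.array([1.0, 0.0])}
print(match_watchlist(np.array([0.9, 0.1]), watchlist))  # person_a
print(match_watchlist(np.array([0.0, 1.0]), watchlist))  # None
```

Real deployments use high-dimensional embeddings from a trained network, and the choice of threshold directly trades false positives against missed matches, which is precisely where accuracy criticisms of such systems originate.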

What Ephgrave seems to be conveniently leaving out of the above statements is that private use of facial recognition technology is either (a) largely still in the trial period, or (b) highly controversial as well.

In August, privacy advocacy group Big Brother Watch unveiled a report which suggested shopping centres, casinos and even publicly owned museums had implemented the technology without public consultation, and had even been sharing data with local police forces without consent. This is a worrying disregard for the vitally important privacy principles of the UK.

At the European level, the European Commission has been considering new rules which would extend consumer rights to cover facial recognition technologies. And in the US, lawsuits have been filed against implementations in Illinois, while the City of San Francisco has effectively banned the technology except in the most serious of circumstances.

The London Metropolitan Police Force has said it will delete images which are not matched to individuals on record, though considering police databases hold more than 20 million records, this leaves wiggle room. If an arrest is made, the data will be kept for 31 days. Although this is a concession by the Met, human rights organisations and privacy advocacy groups have continued to argue that such technologies are an intrusion, over-stepping the privileges afforded to the police and eroding the concept of privacy.

Interestingly enough, the same underlying issues are persisting in London; the police force seems to have pushed forward with the introduction of the technology without a comprehensive public consultation. While there is good which can be taken from this technology, there are also grave risks for abuse unless managed very effectively; the general public should be afforded the opportunity to contribute to the debate.

This does seem to be a similar case to the boiling frog. The premise of this fable is that if a frog is put suddenly into boiling water, it will jump out, but if the frog is put in tepid water which is then brought to a boil slowly, it will not perceive the danger and will be cooked to death. The same could be said about facial recognition technology.

Eight trials were conducted by the London Metropolitan Police Force between 2016 and 2018, some with disastrously poor results, though few were widely reported on. In September, the UK High Court ruled facial recognition technologies could be deployed in ‘appropriate and non-arbitrary’ cases. As this is quite a nuanced and subjective standard, authorities must be prevented from steadily creeping beyond it.

Ultimately this does seem like a very brash decision, not only made but also authorised by the political influencers of the UK. This is not to say facial recognition will not benefit society, or have a positive impact on security, but there is an impact on privacy and a risk of abuse. When there are pros and cons to a decision, it should be opened up to public debate; we should be allowed to choose whether to sacrifice privacy in the pursuit of security.

The general public should be allowed to have their voice heard before such impactful decisions are made, but it seems the London Metropolitan Police Force does not agree with this statement.

FBI and London Met land in hot water over facial recognition tech

The FBI and London Metropolitan Police force will be facing some awkward conversations this week over unauthorised and potentially illegal use of facial recognition technologies.

Starting in the US, the Washington Post has been handed records dating back almost five years which suggest the FBI and ICE (Immigration and Customs Enforcement) have been using DMV databases to build a surveillance network without the consent of citizens. The emails were obtained by Georgetown Law researchers through public records requests.

Although law enforcement agencies have normalised biometrics as part of investigations nowadays, think fingerprint or DNA evidence left at crime scenes, such traces are only useful when catching repeat offenders. Biometric databases are built from data on those who have previously been charged, but in this case, the FBI and ICE have been accessing data on 641 million individuals, the vast majority of whom are innocent and were never consulted about the initiative.

In the Land of the Free, such hypocrisy is becoming almost second nature to national security and intelligence forces, who may well find themselves in some bother from a privacy perspective.

As it stands, there are no legislative or regulatory guidelines which authorise the development of such a complex surveillance system, nor has there been any public consultation with the citizens of the US. This act-first, tell-later mentality is something increasingly associated with countries the US has designated as national enemies, yet there is little evidence authorities in the US have any respect for the rights of their own citizens.

Heading across the pond to the UK, a report from the Human Rights, Big Data & Technology Project has identified ‘significant flaws’ with the way live facial recognition has been trialled in London by the Metropolitan Police force. The group, based out of the University of Essex Human Rights Centre, suggests it could be found to be illegal should it be challenged in court.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law,” said Dr Daragh Murray, who authored the report alongside Professor Peter Fussey.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR [live facial recognition]. Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset and was not an integral part of the process.”

The main gripe from the duo here seems to be how the Met approached the trials. LFR was approached in a manner similar to traditional CCTV, failing to take into account the intrusive nature of facial recognition and its use of biometric processing. The Met did not consider the ‘necessary in a democratic society’ test established by human rights law, and therefore effectively ignored the impact on privacy rights.

There were also numerous other issues, including a lack of public consultation, the poor accuracy of the technology (only 8 of 42 matches were correct), criteria for using the technology which were not clearly defined, and doubts over the accuracy and relevance of the ‘watchlist’ of suspects. However, the main concern from the University’s research team was that only the technical aspects of the trial were considered, not the impact on privacy.

There is a common theme in both of these instances: the authorities supposedly in place to protect our freedoms pay little attention to the privacy rights granted to us. There seems to be an ‘ends justify the means’ attitude, with little consideration for the human right to privacy. Such attitudes are exactly what the US and UK aim to eradicate when ‘freeing’ citizens of oppressive regimes abroad.

What is perhaps most concerning about these stories is the speed at which the technologies are being implemented. There has been little public consultation on the appropriateness of these technologies, or on whether the general public is prepared to sacrifice privacy rights in the pursuit of national security. Given the intrusive nature of facial recognition, authorities should not be allowed to make this decision on behalf of the general public, especially when there is so much precedent for abuse and privacy is a hot topic following scandals in private industry.

Of course, there are examples of the establishment slowing down progress to give time for these considerations. In San Francisco, the city’s Board of Supervisors has made it illegal for forces to implement facial recognition technologies unless approval has been granted. The police force would have to demonstrate stringent justification, accountability systems and safeguards to privacy rights.

In the UK, Dr Murray and Professor Fussey are calling for a pause on the implementation or trialling of facial recognition technologies until the impact on and trade-off of privacy rights have been fully understood.

Facial recognition technologies are becoming incredibly useful when it comes to access and authentication, though there need to be some serious conversations about the privacy implications of using the tech in the world of surveillance and police enforcement. At the moment, it seems to be nothing but an afterthought for the police forces and intelligence agencies, an incredibly worrying and dangerous attitude to have.

EFF to testify in support of California facial recognition technology ban

Last month, the City of San Francisco banned law enforcement agencies from using facial recognition software in cameras, and now the issue has been escalated to the State Senate.

While this is still only a minor thorn in the side of those who have complete disregard for privacy principles, it has the potential to swell into a major debate. There have been numerous trials around the world in an attempt to introduce the invasive technology, but no-one has actually stopped to have a public debate as to whether the disembowelling of privacy rights should be so easily facilitated.

After the City of San Francisco passed the rules, with officials voting 8-1 in support of the ban, the issue was escalated to the State level. SB 1215 is now being considered by State legislators, with the Senate Committee on Public Safety conducting a review of the pros and cons.

Numerous organizations have come out in support of the bill’s progress, while, of course, the official organizations representing law enforcement agencies at the State level are attempting to block it. As part of the review process, EFF Grassroots Advocacy Organizer Nathan Sheard will testify in front of the California Senate Public Safety Committee later today [June 11].

The issue being debated here is quite simple: should the police be allowed to use such invasive surveillance technologies, potentially violating citizens’ right to privacy, without knowledge or consent? Many laws are being passed to give citizens more control of their personal data in the digital economy, but with such surveillance technologies, said citizens may have no idea their images are being collected, analysed and stored by the State.

In what should be viewed as an absolutely incredible instance of negligence and irresponsible behaviour, numerous police forces around the world have moved forward with implementing these technologies without in-depth public consultation. Conspiracy theorists will have penned various nefarious outcomes for such data, and underhanded government and police actions like this do lend support to the first step of their theories.

The City of San Francisco, the State of California and the EFF, as well as the dozens of other agencies challenging deployment of the technology, are quite right to slow progress. The introduction of facial recognition software should be challenged, debated and scrutinised. Free rein should not be given to police forces and intelligence agencies; they have already shown themselves to be untrustworthy. They have lost the right to play around with invasive technologies without public debate.

“This bill declares that facial recognition and other biometric surveillance technology pose unique and significant threats to the civil rights and civil liberties of residents and visitors,” the proposed bill states.

“[the bill] Declares that the use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights. [the bill] States that this technology also allows people to be tracked without consent and would also generate massive databases about law-abiding Californians and may chill the exercise of free speech in public places.”

Under existing laws, there seems to be little resistance to implementing these technologies, aside from the loose definition of ‘best practice’. This would not be considered a particularly difficult hurdle to overcome, such is the nuanced nature of ‘best practice’. Considering the negative implications of the technology, more red-tape should be introduced, forcing the police and intelligence agencies to provide suitable levels of justification and accountability.

Most importantly, there are no requirements for police forces or intelligence agencies to seek approval from the relevant legislative body to deploy the technology. Permission is needed to acquire cellular communications interception technology, in order to protect the civil rights and civil liberties of residents and visitors. The same rights are being challenged with facial recognition software in cameras, but no permissions are required.

This is of course not the first sign of resistance to facial recognition technologies. In January, 85 pro-privacy organizations, charities and influencers wrote to Amazon, Google and Microsoft requesting the firms pledge not to sell the technology to police forces or intelligence agencies. It does appear the use of the data by enforcement agencies in countries like China has put the fear into these organizations.

The accuracy of the technology has also been called into question. Although the tech giants are claiming AI is improving the accuracy every day, last year the American Civil Liberties Union produced research which suggested a 5% error rate. The research claimed 28 members of Congress had been falsely identified as people who had been arrested.
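The ACLU’s 5% figure lines up with the raw numbers of its test, which scanned the photos of all 535 sitting members of Congress against a mugshot database (the 535 total comes from the ACLU’s published write-up, not the text above). As a quick back-of-the-envelope check:

```python
# ACLU test of a commercial facial recognition system (2018):
# 535 members of Congress scanned, 28 falsely matched to arrest photos.
members_tested = 535   # assumption: full Congress, per the ACLU's write-up
false_matches = 28

error_rate = false_matches / members_tested
print(f"False match rate: {error_rate:.1%}")  # False match rate: 5.2%
```

Roughly one false match per nineteen people scanned may sound tolerable in a lab, but applied to millions of passers-by it would generate false identifications at enormous scale.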

Interestingly enough, critics also claim the technology violates the Fourth Amendment of the US Constitution. It has already been established that police demanding identification without suspicion violates this amendment, and the American Civil Liberties Union argues such technologies are effectively doing the same thing.

What is worth noting is that a total ban is highly unlikely to be passed. Even the City of San Francisco’s ban is not total; the city has introduced measures to ensure appropriate justification and that data is stored properly. The key to the San Francisco rules is that they make the process as difficult as possible, ensuring the technologies are not used haphazardly.

What we are most likely to see is bureaucracy. Red-tape will be scattered all over the technology to ensure it is used in an appropriate and justified manner.

Accessibility is one of the issues which privacy campaigners are facing right now. Companies like New York-based Vuzix and NNTC in the UAE are making products which are not obviously used for surveillance and are becoming increasingly affordable. Software from companies like NEC is also becoming more available, giving the police more options. A landscape with affordable technology and no regulatory resistance paints a gloomy picture.

The introduction of more red-tape might frustrate under-resourced and under-pressure police forces, but such is the potential invasion of privacy rights, and the consequence of abuse, that it is absolutely necessary. The quicker this technology is brought into the public domain and understood by the man on the street, the better.

Judge says no to police forcing phone unlocks with face

A judge in the District Court for the Northern District of California has denied the police a warrant which would force suspects to open their phones through biometric authentication.

While it might seem like somewhat of an unusual scenario, we’re sure many of you are imagining a man pinned to the ground with a phone being waved in his face, it is important to set precedent in these matters. Just as law enforcement agencies cannot be granted a warrant forcing an individual to hand over his/her password, according to the ruling suspects and criminals cannot be forced to open devices through biometric sensors.

The case itself focuses on two individuals suspected of attempting to extort money from a third person through Facebook Messenger. The pair allegedly threatened to release an embarrassing video of the third person should the funds not be transferred.

Northern California Federal District Judge Kandis Westmore ruled the authorities did not have probable cause for the warrant, perhaps because the messages and threats in question could be read through the third person’s account, and because the request was too broad. This is another example of authorities overreaching and not being specific, leaving too much room for potential abuse.

While this case might sound odd, the world should be prepared for more such rulings in the future.

“The challenge facing the courts is that technology is far outpacing the law,” the ruling from Judge Westmore states. “In recognition of this reality, the United States Supreme Court recently instructed courts to adopt rules that ‘take account of more sophisticated systems that are already in use or in development’.

“Courts have an obligation to safeguard constitutional rights and cannot permit those rights to be diminished due to the advancement of technology.”

In short, the rules and regulations of the land are not in keeping with today’s technology and society, but this does not mean law enforcement authorities can take advantage of the grey areas. This is perhaps an obvious statement to make, but it does hammer home the need for reform to ensure rules and regulations are contextually relevant.

While progress has been slow, there have been a few breakthroughs for privacy advocates in recent months. Last June, the US Supreme Court ruled in the Carpenter v. United States case that the collection of mobile location data on individuals without a warrant was a violation of data privacy and the Fourth Amendment of the US Constitution.

The issue which many courts are facing is precedent. Lawyers are arguing for certain cases and warrants using precedent which is from another era. Theoretically, these rules can be applied, but when you consider the drastic and fundamental changes which have occurred in the communications world, you have to wonder whether anything from previous decades is relevant anymore.

As Judge Westmore points out, technology is vastly outpacing the pace of change in public sector institutions. This presents a massive risk of abuse, but slowing innovation is not a reasonable option. A tricky catch-22.