Europe set to join the facial recognition debate

With more authorities demonstrating they cannot be trusted to act responsibly or transparently, the European Commission is reportedly on the verge of putting the reins on facial recognition.

According to reports in the Financial Times, the European Commission is considering imposing new rules which would extend consumer rights to cover facial recognition technologies. The move is part of a wider effort to address the ethical and responsible use of artificial intelligence in today’s digital society.

Across the world, police forces and intelligence agencies are deploying technologies which pose a significant risk of abuse, without public consultation or processes to establish accountability or justification. There are of course certain nations which do not care about the privacy rights of citizens, but when you see the technology being implemented for surveillance purposes in the likes of the US, UK and Sweden, states where such rights are supposedly sacred, the line starts to blur.

The reasoning behind the implementation of facial recognition in surveillance networks is irrelevant; without public consultation and transparency, these police forces, agencies, public sector authorities and private companies are completely disregarding the citizens’ right to privacy.

These citizens might well support such initiatives, opting for greater security or consumer benefits over the right to privacy, but they have the right to be asked.

What is worth noting is that this technology can be a driver for positive change in the world when implemented and managed correctly. Facial scanners are speeding up the immigration process at airports, while Telia is trialling a payment system using facial recognition in Finland. When deployed with consideration and the right processes, there are many benefits to be realised.

The European Commission has not confirmed or denied the reports to Telecoms.com, though it did reaffirm its ongoing position on artificial intelligence during a press conference yesterday.

“In June, the high-level expert group on artificial intelligence, which was appointed by the Commission, presented the first policy recommendations and ethics guidelines on AI,” spokesperson Natasha Bertaud said during the afternoon briefing. “These are currently being tested and going forward the Commission will decide on any future steps in light of this process, which remains ongoing.”

The Commission does not comment on leaked documents and memos, though reading between the lines, it is on the agenda. One of the points the 52-person expert group will address over the coming months is building trust in artificial intelligence, while one of the seven principles presented for consultation concerns privacy.

On the privacy side, parties implementing these technologies must ensure data ‘will not be used to unlawfully or unfairly discriminate’, as well as putting systems in place to dictate who can access the data. We suspect that in the rush to trial and deploy technology such as facial recognition, few systems and processes to drive accountability and justification have been put in place.

Although these points do not necessarily cover the citizen’s right to decide, tracking and profiling are areas where the group has recommended the European Commission consider adding more regulation to protect against abuse and the irresponsible deployment or management of the technology.

Once again, the grey areas are being exploited.

As there are only so many bodies in the European Commission or working for national regulators, and technology is advancing so quickly, there is often a void in the rules governing newly emerging segments. Artificial intelligence, surveillance and facial recognition certainly fall into this chasm, creating a digital wild-west landscape where those who do not understand the ‘law of unintended consequences’ play around with new toys.

In the UK, it was revealed that several private property owners and museums were using the technology for surveillance without telling consumers. Even more worryingly, some of this data has been shared with police forces. Information Commissioner Elizabeth Denham has already stated her office will be looking into the deployments and will attempt to rectify the situation.

Prior to this revelation, a report from the Human Rights, Big Data & Technology Project attacked a trial by the London Metropolitan Police, suggesting it could be found illegal should it be challenged in court. The South Wales Police has also found itself in hot water after its own trials produced only an 8% success rate.

Over in Sweden, the data protection regulator used powers granted by GDPR to fine a school which had been using facial recognition to monitor the attendance of pupils. The school claimed it had obtained consent from the students, but as they are in a position of dependence, this was not deemed satisfactory. The school was also found to have substandard processes for handling the data.

Finally, in the US, Facebook is going to find itself in court once again, this time over its implementation of facial recognition software in 2010. A class-action lawsuit has been brought against the social media giant, alleging the use of the technology was non-compliant with the Illinois Biometric Information Privacy Act.

This is one example where lawmakers have been very effective in getting ahead of trends. The law in question was enacted in 2008 and demands that companies gain consent before any facial recognition technologies are introduced. It is an Act which should be applauded for its foresight.

The speed at which progress is being made with facial recognition in the surveillance world is incredibly worrying. Private and public parties have an obligation to consider the impact on the human right to privacy, though much distaste has been shown for these principles in recent months. Perhaps it is ignorance, short-sightedness or a lack of competence, but without rules to govern this segment, the unintended consequences could be compounded years down the line.

Another point worth noting is the gathering momentum to stop the wrongful implementation of facial recognition. Aside from Big Brother Watch raising concerns in the UK, the City of San Francisco is attempting to implement an approval function for police forces, while Google is facing an internal rebellion. Last week, it emerged several hundred employees had signed a petition refusing to work on any projects which would aid the government in tracking citizens through facial recognition surveillance.

Although the European Commission has not confirmed or denied the report, we suspect (or at the very least hope) work is underway to address this area. Facial recognition needs rules, or we will find ourselves in a very difficult position, much like the ones we already face elsewhere in the digital world.

A lack of action on fake news, online bullying, cybersecurity, supply chain diversity and resilience, and the consolidation of power in the hands of a few has created some difficult situations around the world, and the Commission and national governments are now finding it difficult to claw back control. This is one area where the European Commission desperately needs to get ahead of the technology industry; the risk and consequence of abuse is far too great.

Telia toys with facial recognition for ice cream payments

In the everlasting search for 5G use cases, Telia has teamed up with Finnish bank OP to trial a facial recognition payment solution.

While facial recognition technologies are taking a bit of a reputational beating at the moment, there are promising use cases in the pipeline. The issue which is not being discussed here, though it certainly warrants more attention in the public domain, is the ethical, responsible and transparent application of the technology.

However, this example, authenticating payments, would appear to be a very logical application of the technology.

Firstly, biometrics are becoming increasingly normalised in payments and financial services authentication through fingerprints or voice recognition; this is just one step further. Secondly, it is theoretically more secure than current identification and authentication techniques. And finally, banks already have trusted relationships with consumers, and are yet to be caught up in a privacy scandal.

“Facial payment is a good example of a service that benefits from the capacity increase and lower latency of 5G,” said Janne Koistinen, Head of Telia Finland’s 5G programme. “5G will also take the security of mobile connections to the next level, which is interesting for example for payment and other financial services.”

The customer first registers a biometric template with their bank, uploaded through a camera prior to any purchase. At the point of sale, a connected device operated by the merchant authenticates the individual, and the customer then authorises the purchase with a simple click once their face has been recognised.
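To make the flow concrete, here is a minimal sketch of those three steps in Python. Every name and data structure below is our own illustration; neither Telia nor OP has published its implementation, and a real system would match templates with a similarity score rather than the exact-hash stand-in used here to keep the example runnable.

```python
import hashlib

# Hypothetical sketch of the face-payment flow described above.
TEMPLATES = {}  # bank side: customer_id -> enrolled face template


def face_template(image_bytes: bytes) -> str:
    """Stand-in for a real feature extractor run on a camera capture."""
    return hashlib.sha256(image_bytes).hexdigest()


def enroll(customer_id: str, selfie: bytes) -> None:
    """Step 1: before any purchase, the customer registers a template with the bank."""
    TEMPLATES[customer_id] = face_template(selfie)


def authenticate(live_capture: bytes):
    """Step 2: the merchant's connected device matches the live face against the bank's templates."""
    live = face_template(live_capture)
    for customer_id, enrolled in TEMPLATES.items():
        if enrolled == live:  # a real matcher would use a similarity score
            return customer_id
    return None


def pay(live_capture: bytes, amount_eur: float) -> bool:
    """Step 3: recognition alone moves no money; the customer still confirms with a click."""
    customer_id = authenticate(live_capture)
    if customer_id is None:
        return False  # fall back to card or cash
    # In the trial the confirmation is a single click on the terminal;
    # here it is simulated with a console prompt.
    answer = input(f"{customer_id}: confirm payment of {amount_eur:.2f} EUR? [y/n] ")
    return answer.strip().lower() == "y"


enroll("alice", b"selfie-pixels")
print(pay(b"selfie-pixels", 3.50))  # matched; asks Alice to confirm the ice cream
```

The notable design point is that recognition only identifies the customer; authorisation still requires an explicit action, which keeps a walk-past face match from ever moving money on its own.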

However, 5G would appear to be key here, largely thanks to its lower latency; a slow service could certainly hinder both the user experience and the commercial benefits promised.

“Besides security, a smooth user experience is important for customers,” said Kristian Luoma, Head of OP Lab. “5G makes the service faster and is therefore the perfect partner for Pivo Face Payment. We believe that the trial with Telia opens a new window to the future.”

Although fingerprints and vocal patterns are theoretically unique to each person, there are environmental factors which might hinder authentication. For example, dirt or grease can stop a fingerprint reader from working, while background noise can impact the performance of voice recognition.

Facial recognition is also cheaper. Most smartphones or tablets already have a camera, so no specialist equipment needs to be built into the devices. The camera does not need to be high-end, just functional, so the expense is mainly on the software side. It is also a lot more accessible, in that everyone has a face and rarely covers it up when in a store.

For the moment, this trial has been limited to an ice-cream van in Vallila, though it is easy to see the wider applications in numerous different settings.

The challenge such initiatives might face is the increasingly negative perception of facial recognition, a reputation largely down to the unethical or secretive application of the technology in surveillance. That is a much larger topic which needs to be discussed in the public domain; this initiative, however, does demonstrate the benefits of facial recognition.

UK’s laissez-faire attitude to privacy and facial recognition tech is worrying

Big Brother Watch has described the implementation of facial recognition tech as an ‘epidemic’ as it emerges police forces have been colluding with private industry on trials.

There are of course significant benefits to be realised through the introduction of facial recognition, but the risks are monstrous. It is a sensitive subject where transparency should be a given, but the general public has little or no understanding of the implications for personal privacy rights.

Thanks to an investigation from the group, it has been uncovered that shopping centres, casinos and even publicly-owned museums have been using the technology. Even more worryingly, in some cases the data has been shared with police forces. Without public consultation, the introduction of such technologies is an insult to the general public and a violation of the trust which has been put in public institutions and private industry.

“There is an epidemic of facial recognition in the UK,” said Director of Big Brother Watch, Silkie Carlo.

“The collusion between police and private companies in building these surveillance nets around popular spaces is deeply disturbing. Facial recognition is the perfect tool of oppression and the widespread use we’ve found indicates we’re facing a privacy emergency.”

It is worth noting that groups such as Big Brother Watch have a tendency to overplay certain developments, adding an element of theatrics to drum up support and dramatise events. In this instance, however, we completely agree.

When introducing new technology to society, there should be some form of public consultation, especially when the risk of abuse could have such a monumental impact on everyday life. Here, the risk is to the human right to privacy, a benefit many in the UK overlook because of the assumption that rights will be honoured by those given responsibility for managing our society.

The general public should be given the right to choose. Increased safety might be a benefit, but there will be a sacrifice to personal privacy. We should have the opportunity to say no.

While the UK Government clip-clops along, sat pleasantly atop its high horse, criticising other administrations for human rights violations, this incident blurs the line. Using facial recognition in a private environment without telling customers is a suspect position; sharing this data with police forces is simply wrong.

Is there any material difference between these programmes and initiatives launched by autocratic and totalitarian governments elsewhere in the world? It smells very similar to the dreary picture painted in George Orwell’s “1984”, with a nanny-state assuming the right to decide what is reasonable and what is not.

And for those who appreciate a bit of irony, one of the examples of unwarranted surveillance Big Brother Watch identified was at Liverpool’s World Museum, during a “China’s First Emperor and the Terracotta Warriors” exhibition.

“The idea of a British museum secretly scanning the faces of children visiting an exhibition on the first emperor of China is chilling,” said Carlo. “There is a dark irony that this authoritarian surveillance tool is rarely seen outside of China.

“Facial recognition surveillance risks making privacy in Britain extinct.”

Aside from this museum, private development companies, including British Land, have been implementing the technology. There is reference to the technology in terms and conditions documents, though it is unlikely many members of the general public were made aware.

As a result of the suspect implementations, including at King’s Cross in London, Information Commissioner Elizabeth Denham has launched an investigation. The investigation will look into an increasingly common theme: whether the implementation of new technology is taking advantage of the slow-moving process of legislation, and the huge number of grey areas currently present in the law.

Moving forward, facial recognition technologies will have a role to play in the digital society. Away from the obvious risk of abuse, there are very material benefits. If a programme can identify fear or stress, for example, emergency services could be alerted to an incident much more quickly. Responses to such incidents today rely, in most cases, on someone calling 999; new technology could help here and save lives.

However, the general public must be informed, and blessings must be given. Transparency is key, and right now, it is missing.

Amazon has managed to bottle fear, but recognition debate remains

While facial recognition technologies are becoming increasingly controversial, it is always worth paying homage to innovation in this field and the real-world applications, when deployed responsibly.

We suspect people aren’t necessarily objecting to the concept of facial recognition technologies, but to their application and the lack of public consultation. You only have to look at some of the world’s less appetising governments to see the negative implications for privacy and human rights, but there are of course significant benefits should the technology be applied in an ethically sound and transparent manner.

Over in the AWS labs, engineers have done something quite remarkable: they have bottled the concept of fear and taught their AI programmes to recognise it.

“Amazon Rekognition provides a comprehensive set of face detection, analysis, and recognition features for image and video analysis,” the company stated on its blog. “Today, we are launching accuracy and functionality improvements to our face analysis features.

“With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: Happy, Sad, Angry, Surprised, Disgusted, Calm and Confused) and added a new emotion: Fear.”
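For a sense of what this looks like in practice, here is a minimal sketch using boto3, AWS’s Python SDK. The call and the response fields are Rekognition’s real API; the image file name and the way we pick out the strongest emotion are our own illustration.

```python
import boto3

# Ask Rekognition to analyse faces in a local image and print each
# face's strongest emotion. Assumes AWS credentials are configured;
# "crowd.jpg" is a placeholder file name.
client = boto3.client("rekognition")

with open("crowd.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # request emotions, not just bounding boxes
    )

for face in response["FaceDetails"]:
    # Emotions is a list of entries like {"Type": "FEAR", "Confidence": 87.3}
    strongest = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(strongest["Type"], round(strongest["Confidence"], 1))
```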

When applied correctly, these technologies have incredible power to help society. You only have to think about some of the atrocities which have plagued major cities, as well as the ongoing problems. Human eyes can only see so much, with police and security forces often relying on reports from the general public. With cameras able to recognise emotions such as fear, crimes could be identified while they are taking place, allowing speedier reactions from the relevant parties.

However, there are of course significant risks with the application of this technology. We have seen in China that such programmes are being used to track certain individuals and ethnic groups, while certain forces and agencies in the US are constantly rumoured to be considering the implementation of AI for facial recognition, profiling and the tracking of individuals. Some of these projects are incredibly worrying, and a violation of the privacy rights granted to the general public.

This is where governments are betraying the promise they have made to the general public. Rules and regulations have not been written for such technologies, so the agencies and forces involved are operating in a monstrously large grey area. There of course need to be rules in place to govern surveillance practices, but a public conversation should be considered imperative.

Any time the right to privacy is compromised, irrespective of whether there are noble goals in mind, the public should be consulted. Voters should choose whether they are happy to sacrifice certain privacy rights and freedoms in the pursuit of safety. This is what transparency means, and this is exactly what has been disregarded to date.

Facebook faces yet another monstrous privacy headache in Illinois

Just as the Cambridge Analytica scandal re-emerged to heighten Facebook’s frustrations, the social media giant is now facing a class-action lawsuit over facial recognition.

It has been a tough couple of weeks for Facebook. With the ink still wet on a $5 billion FTC fine, the UK Government questioning discrepancies in evidence presented to Parliamentary Committees and a Netflix documentary reopening the wounds of the Cambridge Analytica scandal, the last thing needed was another headache. That is exactly what has been handed to Menlo Park from Illinois.

In a 3-0 ruling, the Court of Appeals for the Ninth Circuit has ruled against Facebook, allowing a class-action lawsuit over the implementation of facial recognition technologies without consultation or the creation of public policy to proceed.

“Plaintiffs’ complaint alleges that Facebook subjected them to facial-recognition technology without complying with an Illinois statute intended to safeguard their privacy,” the court opinion states.

“Because a violation of the Illinois statute injures an individual’s concrete right to privacy, we reject Facebook’s claim that the plaintiffs have failed to allege a concrete injury-in-fact for purposes of Article III standing. Additionally, we conclude that the district court did not abuse its discretion in certifying the class.”

After introducing facial recognition technologies to the platform in 2010 to offer tag suggestions on uploaded photos and video content, Facebook became the subject of a lawsuit under the Illinois Biometric Information Privacy Act. This law compels companies to create public policy before implementing facial recognition technologies and analysing biometric data, as a means to protect the privacy rights of consumers.

Facebook appealed against the lawsuit, arguing the plaintiffs had not demonstrated material damage and that the lower courts in California were therefore exceeding their granted responsibilities. The appeals court has dismissed this argument, and the lawsuit will proceed as planned.

The law in question was enacted in 2008, with the intention of protecting consumer privacy. As biometric data can be as unique as a social security number, legislators feared the risk of identity theft, as well as the numerous unknowns as to how the technology could be implemented in the future. It was a protectionary piece of legislation, and looks years ahead of its time when you consider the inability of legislators to create relevant rules today.

As part of this legislation, private companies are compelled to establish a “retention schedule and guidelines for permanently destroying biometric identifiers and biometric information”. The statute also forces companies to obtain permission before applying biometric technologies used to identify individuals or analyse and retain data.

Facebook did not argue it was compliant with these requirements, but suggested that as there had been no material damage to individuals or their right to privacy, the lawsuit should have been dismissed by the lower courts in California. The senior judges clearly disagreed.

But what could this lawsuit actually mean?

Firstly, there is the reputational damage. Facebook’s credibility is dented at best and shattered at worst, depending on who you talk to of course. The emergence of the Netflix documentary ‘The Great Hack’, detailing the Cambridge Analytica scandal, is dragging the brand through the mud once again, while questions are also being asked over whether the management team directly misled the UK Government.

Secondly, there is the financial impact. Facebook is a profit machine, but few will be happy with another fine. Only three weeks ago the FTC issued a $5 billion fine for various privacy inadequacies over the last decade, and this lawsuit could become very expensive, very quickly.

Not only will Facebook have to hire another battalion of lawyers to combat the threat posed by the likes of the American Civil Liberties Union, the Electronic Frontier Foundation, the Center for Democracy & Technology and the Illinois PIRG Education Fund, the pay-out could be significant.

Depending on the severity of the violation, each user could be entitled to a single sum of between $1,000 and $5,000. Should Facebook lose this legal foray, the financial damage could run to hundreds of millions, or even billions, of dollars.
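The arithmetic is sobering. Those statutory figures are per user: at the $1,000 minimum, a purely hypothetical class of one million Illinois users would already put the bill at $1 billion, while at the $5,000 maximum the same class would cost $5 billion, matching the FTC’s entire recent fine.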

From a reputational and financial perspective, this lawsuit could be very damaging to Facebook.

FBI and London Met land in hot water over facial recognition tech

The FBI and London Metropolitan Police force will be facing some awkward conversations this week over unauthorised and potentially illegal use of facial recognition technologies.

Starting in the US, the Washington Post has been handed records dating back almost five years which suggest the FBI and ICE (Immigration and Customs Enforcement) have been using DMV databases to build a surveillance network without the consent of citizens. The emails were obtained by Georgetown Law researchers through public records requests.

Although law enforcement agencies have normalised the use of biometrics in investigations, think fingerprint or DNA evidence left at crime scenes, such traces are only useful for catching repeat offenders, as biometric databases are built from data on those who have previously been charged. In this case, however, the FBI and ICE have been accessing data on 641 million individuals, the vast majority of whom are innocent and were never consulted about the initiative.

In the Land of the Free, such hypocrisy is becoming almost second nature to national security and intelligence forces, who may well find themselves in some bother from a privacy perspective.

As it stands, there are no legislative or regulatory guidelines which authorise the development of such a complex surveillance system, nor has there been any public consultation with the citizens of the US. This act-first, tell-later mentality is becoming increasingly common in countries the US has designated as national enemies, yet there is little evidence the authorities in the US have any more respect for the rights of their own citizens.

Heading across the pond to the UK, a report from the Human Rights, Big Data & Technology Project has identified ‘significant flaws’ with the way live facial recognition has been trialled in London by the Metropolitan Police force. The group, based out of the University of Essex Human Rights Centre, suggests it could be found to be illegal should it be challenged in court.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law,” said Dr Daragh Murray, who authored the report alongside Professor Peter Fussey.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR [live facial recognition]. Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset and was not an integral part of the process.”

The main gripe from the duo seems to be how the Met approached the trials. LFR was treated in a manner similar to traditional CCTV, failing to take into account the intrusive nature of facial recognition and its use of biometric processing. The Met did not consider the ‘necessary in a democratic society’ test established by human rights law, and therefore effectively ignored the impact on privacy rights.

There were also numerous other issues, including a lack of public consultation, the accuracy of the technology (only 8 of 42 matches were actually correct), unclearly defined criteria for using the technology, and questions over the accuracy and relevance of the ‘watchlist’ of suspects. The main concern from the University’s research team, however, was that only the technical aspects of the trial were considered, not the impact on privacy.

There is a common theme in both of these instances: the authorities supposedly in place to protect our freedoms pay little attention to the privacy rights granted to us. There seems to be an ‘ends justify the means’ attitude, with little consideration of the human right to privacy. Such attitudes are exactly what the US and UK claim to eradicate when ‘freeing’ citizens of oppressive regimes abroad.

Perhaps the most concerning aspect of these stories is the speed at which the technologies are being implemented. There has been little public consultation on the appropriateness of these technologies, or on whether the general public is prepared to sacrifice privacy rights in the pursuit of national security. Given the intrusive nature of facial recognition, authorities should not be allowed to make this decision on behalf of the general public, especially when there is so much precedent for abuse and privacy is such a hot topic following scandals in private industry.

Of course, there are examples of the establishment slowing down progress to give time for these considerations. In San Francisco, the city’s Board of Supervisors has made it illegal for forces to implement facial recognition technologies unless approval has been granted. The police force would have to demonstrate stringent justification, accountability systems and safeguards to privacy rights.

In the UK, Dr Murray and Professor Fussey are calling for a pause on the implementation or trialling of facial recognition technologies until the impact on and trade-off of privacy rights have been fully understood.

Facial recognition technologies are becoming incredibly useful for access and authentication, but there need to be some serious conversations about the privacy implications of using the tech in the world of surveillance and policing. At the moment, it seems to be nothing but an afterthought for the police forces and intelligence agencies involved, an incredibly worrying and dangerous attitude to have.

EFF to testify in support of California facial recognition technology ban

Last month, the City of San Francisco banned law enforcement agencies from using facial recognition software in cameras, and now the issue has been escalated to the State Senate.

While this is still only a minor thorn in the side of those who have complete disregard for privacy principles, it has the potential to swell into a major debate. There have been numerous trials around the world attempting to introduce the invasive technology, but no one has actually stopped to hold a public debate on whether the disembowelling of privacy rights should be so easily facilitated.

After the City of San Francisco passed its rules, with officials voting 8-1 in support of the ban, the issue was escalated to State level. AB 1215 is now being considered by State legislators, with the Senate Committee on Public Safety conducting a review of the pros and cons.

Numerous organizations have come out in support of the bill’s progress, while the official organizations representing law enforcement agencies at State level are, of course, attempting to block it. As part of the review process, EFF Grassroots Advocacy Organizer Nathan Sheard will testify in front of the California Senate Public Safety Committee later today [June 11].

The issue being debated here is quite simple: should the police be allowed to use such invasive surveillance technologies, potentially violating citizens’ right to privacy, without their knowledge or consent? Many laws are being passed to give citizens more control over their personal data in the digital economy, but with such surveillance technologies, those same citizens may have no idea their images are being collected, analysed and stored by the State.

In what should be viewed as an absolutely incredible instance of negligence and irresponsible behaviour, numerous police forces around the world have moved forward with these technologies without in-depth public consultation. Conspiracy theorists will have penned various nefarious outcomes for such data, and underhanded government and police actions like this lend support to the first step of their theories.

The City of San Francisco, the State of California and the EFF, as well as the dozens of other agencies challenging deployment of the technology, are quite right to slow progress. The introduction of facial recognition software should be challenged, debated and scrutinised. Free rein should not be given to police forces and intelligence agencies; they have already shown themselves to be untrustworthy, and have lost the right to play around with invasive technologies without public debate.

“This bill declares that facial recognition and other biometric surveillance technology pose unique and significant threats to the civil rights and civil liberties of residents and visitors,” the proposed bill states.

“[the bill] Declares that the use of facial recognition and other biometric surveillance is the functional equivalent of requiring every person to show a personal photo identification card at all times in violation of recognized constitutional rights. [the bill] States that this technology also allows people to be tracked without consent and would also generate massive databases about law-abiding Californians and may chill the exercise of free speech in public places.”

Under existing laws, there seems to be little resistance to implementing these technologies, aside from the loose definition of ‘best practice’. That would not be considered a particularly difficult hurdle to overcome, such is the nuanced nature of ‘best practice’. Considering the negative implications of the technology, more red tape should be introduced, forcing the police and intelligence agencies to provide suitable levels of justification and accountability.

Most importantly, there are no requirements for police forces or intelligence agencies to seek approval from the relevant legislative body to deploy the technology. Permission is needed to acquire cellular communications interception technology, in order to protect the civil rights and civil liberties of residents and visitors. The same rights are being challenged with facial recognition software in cameras, but no permissions are required.

This is of course not the first sign of resistance to facial recognition technologies. In January, 85 pro-privacy organizations, charities and influencers wrote to Amazon, Google and Microsoft requesting the firms pledge not to sell the technology to police forces or intelligence agencies. It does appear the use of the data by enforcement agencies in countries like China has put the fear into these organizations.

The accuracy of the technology has also been called into question. Although the tech giants claim AI is improving accuracy every day, last year the American Civil Liberties Union produced research which suggested a 5% error rate: 28 of the 535 members of Congress were falsely identified as people who had been arrested.

Interestingly enough, critics also claim the technology violates the Fourth Amendment of the US Constitution. It has already been established that police demanding identification without suspicion violates this amendment, and the American Civil Liberties Union argues such technologies are effectively doing the same thing.

It is worth noting that a total ban is highly unlikely to be passed. Even the City of San Francisco has not gone that far; it has instead introduced measures to ensure appropriate justification and proper storage of data. The key to the San Francisco rules is that they make it as difficult as possible to use the technologies haphazardly.

What we are most likely to see is bureaucracy: red tape scattered all over the technology to ensure it is used in an appropriate and justified manner.

Accessibility is one of the issues privacy campaigners are facing right now. Companies like New York-based Vuzix and NNTC in the UAE are making products which are not obviously used for surveillance and are becoming increasingly affordable, while software from companies like NEC is also becoming more available, giving the police more options. A landscape with affordable technology and no regulatory resistance paints a gloomy picture.

The introduction of more red tape might frustrate under-resourced and under-pressure police forces, but such is the potential invasion of privacy rights, and the consequence of abuse, that it is absolutely necessary. The quicker this technology is brought into the public domain and understood by the man on the street, the better.

San Francisco puts the brakes on facial recognition surveillance

The City of San Francisco has passed new rules which will significantly curb the abilities of public sector organisations to purchase and utilise facial recognition technologies.

Opinions on newly emerging surveillance technologies vary drastically, with some pointing to the safety and efficiency benefits for intelligence and police forces, while others bemoan the crippling effect the technologies could have on civil liberties and privacy.

The new rules in San Francisco do not necessarily ban surveillance technologies entirely, but barriers to demonstrate justification have been significantly increased.

“The success of San Francisco’s #FacialRecognition ban is owed to a vast grassroots coalition that has advocated for similar policies around the Bay Area for years,” said San Francisco Supervisor Aaron Peskin.

The legislation will come into effect in 30 days’ time. From that point, no city department or contracting officer will be able to purchase equipment unless the Board of Supervisors has appropriated funds for the acquisition. New processes will also be introduced, including a surveillance technology policy for the department which meets the demands of the Board, as well as a surveillance impact report.

The department would also have to produce an in-depth annual report which would detail:

  • How the technology was used
  • Details of each instance data was shared outside the department
  • Crime statistics

The impact report will have to include a huge range of information, including all forward plans on logistics, experiences from other government departments, justification for the expenditure and the potential impact on privacy. The department may also have to consult public opinion, while it will have to create concrete policies on data retention, storage, reporting and analysis.

City officials are making it as difficult as possible to make use of such technologies, and considering the impact or potential for abuse, quite rightly so. As mentioned before, this is not a ban on next-generation surveillance technologies, but an attempt to ensure deployment is absolutely necessary.

The concerns surround privacy and potential violations of civil liberties, largely outlined in the wide-sweeping privacy reforms set forward by California Governor Jerry Brown last year. The rules are intended to spur an ‘informed public debate’ on the potential impacts on the rights guaranteed by the First, Fourth and Fourteenth Amendments of the US Constitution.

Aside from the potential for abuse, it does appear City Officials and privacy advocates are concerned about the technology entrenching prejudices based on race, ethnicity, religion, national origin, income level, sexual orientation or political perspective. Many analytical technologies work from the most likely scenario, leaning on stereotypical assumptions and potentially amplifying profiling, effectively removing the impartiality of judging each case on its individual factors.

While the intelligence and policing community will most likely view such conditions as a bureaucratic mess, they should absolutely be viewed as necessary. We’ve already seen such technologies implemented without public debate and scrutiny, a drastic step considering the potential consequences.

Although the technology is not necessarily new, think of border control at airports, perhaps the rollout in China has swayed opinion. When an authoritarian state like China, where political and societal values conflict with those of the US, implements such technologies, some will begin to ask what the nefarious impact of deployment actually is.

In February, a database emerged demonstrating China has used a full suite of AI tools to monitor its Uyghur population in the far west of the country. This could have been a catalyst for the rules.

That said, the technology is also far from perfect. Police forces across the UK have been trialling facial recognition and data analytics technologies with varied results, while at least 53 UK local councils and 45 of the country’s police forces are heavily relying on computer algorithms to assess the risk level of crimes against children, as well as people cheating on benefits.

In May last year, the South Wales Police had to defend its decision to trial NEC facial recognition software during the 2017 Champions League Final after it was revealed only 8% of the identifications proved to be accurate.

It might be viewed by some as bureaucracy for the sake of bureaucracy, but considering the potential for abuse and the damage to privacy rights, such administrative barriers are critical. More cities should take the same approach as San Francisco.

Facial recognition is being used in China’s monitoring network

A publicly accessible database managed by a surveillance contractor showed China has used a full suite of AI tools to monitor its Uyghur population in the far west of the country.

Victor Gevers, a cyber security expert and researcher at the non-profit GDI Foundation, found that a database managed by SenseNets, a Chinese surveillance company, and hosted on China Unicom’s cloud platform, stored large quantities of tracking data on residents of the Xinjiang autonomous region in western China, the majority of them from the Uyghur ethnic group. According to Gevers, the data covered nearly 2.6 million people (2,565,724 to be precise) and included personal information such as ID card details (issue and expiry dates, sex, ethnic group, home address, birthday, photo) and employer details, as well as the locations at which each person had been tracked by facial recognition over the previous 24 hours; a total of 6,680,348 location records were registered in that period.
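To make the exposure concrete, each person’s entry reportedly amounted to something like the record sketched below. The field names are our own paraphrase of Gevers’ description, not the actual SenseNets schema.

```python
# Our own paraphrase of the fields Gevers described; the real SenseNets
# schema is not public, and all values here are placeholders.
example_record = {
    "id_card": {
        "issue_date": "...", "expiry_date": "...",
        "sex": "...", "ethnic_group": "...",
        "home_address": "...", "birthday": "...", "photo": "...",
    },
    "employer": "...",
    # Locations where the person was matched by facial recognition
    # during the previous 24 hours.
    "tracked_locations_24h": [
        {"timestamp": "...", "location": "..."},
    ],
}
```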

Neither the scope nor the level of detail of the monitoring should be a surprise, given the measures used by China in that part of the country over the last two years. If there is anything embarrassing for the Chinese authorities and their contractors in this story, it is the total failure of data security: the database was not protected at all. By the time Gevers notified the administrators at SenseNets, it had been accessible to anyone for at least half a year, according to the access log. The database has since been secured, opened, and secured again. Gevers also found that the database was built on a pirated edition of Windows Server 2012, and that police stations, hotels, and other service and business establishments had connected to it.

This is a classic example of human error defeating security systems. Not too long ago, Jeff Bezos of Amazon sent intimate pictures to his female companion, which ended up in the wrong hands, prompting the BBC’s quip that Bezos was the weak link in cybersecurity for the world’s leading cloud service provider.

Like other technologies, facial recognition can be used by overbearing governments for monitoring purposes, breaking all privacy protections. But it can also do tremendous good. EU citizens travelling between the UK and the Schengen Area have long been used to having their passports read by a machine and their faces matched by a camera. The AI technologies behind the experience have vastly simplified and expedited the immigration process. And if, for some reason, the machine fails to recognise a face, there is always an immigration officer at the desk to do a manual check.

Facial recognition, coupled with other technologies such as blockchain, can also improve efficiency in industries like cross-border logistics. The long border between Sweden and Norway is largely open, despite the fact that a passenger or cargo vehicle travelling from one country to the other is technically moving between the inside of the EU (Sweden) and the outside (Norway). According to an article in The Economist, this frictionless transit requires the digitalisation of documentation (covering goods as well as people), facial recognition (of drivers), sensors on the border (to read a code on the driver’s mobile phone) and automatic number-plate recognition (of the vehicles).

In cases like these, facial recognition, and AI in general, should be lauded. What the world should be on alert to is how the data is being used and who has access to it.

China’s social credit system set to kick off in Beijing in 2020

The Chinese state wants to control its citizens via a system of social scoring that punishes behaviour it doesn’t approve of.

This initiative has been widely reported, including in an excellent piece from ABC Australia, but this marks one of the first times a specific timescale has been attributed to it. Bloomberg reports that Beijing, China’s capital city, plans to implement the social credit system by the end of 2020, affecting 22 million citizens.

The full plan has been published on a Chinese government website, and we currently have our Beijing bureau sifting through it to bring you our own take on the primary material. But for the time being we’re relying on Bloomberg’s account, which highlights just how sinister this sort of thing is.

People who accumulate higher social ‘scores’, the rules and algorithms for which are presumably opaque, subjective and nebulous, get access to special privileges, while those who fall foul of the system will apparently be unable to move even a single step. This is hopefully at least a bit hyperbolic, but it does indicate that many of the sanctions attached to a low score focus on the ability to travel.

Mobile technologies, including smartphones, social media and facial recognition, will clearly play a big part in this Orwellian social manipulation strategy. The fact that our every action, or even inaction, now leaves a permanent digital fingerprint makes this sort of thing possible in a way it never was before. If you want a further sense of quite how seamlessly it could metastasize beyond China, watch the episode of Black Mirror called Nosedive.