Europe set to join the facial recognition debate

With more authorities demonstrating they cannot be trusted to act responsibly or transparently, the European Commission is reportedly on the verge of putting the reins on facial recognition.

According to reports in The Financial Times, the European Commission is considering imposing new rules which would extend consumer rights to include facial recognition technologies. The move is part of a broader overhaul to address the ethical and responsible use of artificial intelligence in today’s digital society.

Across the world, police forces and intelligence agencies are deploying technologies which pose a significant risk of abuse, without public consultation or processes to create accountability or justification. There are of course certain nations which do not care about the privacy rights of their citizens, but when you see the technology being implemented for surveillance purposes in the likes of the US, UK and Sweden, states where such rights are supposedly sacred, the line starts to blur.

The reasoning behind the implementation of facial recognition in surveillance networks is irrelevant; without public consultation and transparency, these police forces, agencies, public sector authorities and private companies are completely disregarding the citizens’ right to privacy.

These citizens might well support such initiatives, electing for greater security or consumer benefits over the right to privacy, but they have the right to be asked.

What is worth noting is that this technology can be a driver for positive change in the world when implemented and managed correctly. Facial scanners are speeding up the immigration process in airports, while Telia is trialling a payment system using facial recognition in Finland. When deployed with consideration and the right processes, there are many benefits to be realised.

The European Commission has neither confirmed nor denied the reports to Telecoms.com, though it did reaffirm its ongoing position on artificial intelligence during a press conference yesterday.

“In June, the high-level expert group on artificial intelligence, which was appointed by the Commission, presented the first policy recommendations and ethics guidelines on AI,” spokesperson Natasha Bertaud said during the afternoon briefing. “These are currently being tested and going forward the Commission will decide on any future steps in light of this process, which remains ongoing.”

The Commission does not comment on leaked documents and memos, though reading between the lines, it is on the agenda. One of the points the 52-person expert group will address over the coming months is building trust in artificial intelligence, while one of the seven principles presented for consultation concerns privacy.

On the privacy side, parties implementing these technologies must ensure data ‘will not be used to unlawfully or unfairly discriminate’, as well as putting systems in place to dictate who can access the data. We suspect that in the rush to trial and deploy technology such as facial recognition, few systems and processes to drive accountability and justification have been put in place.

Although these points do not necessarily cover the citizen’s right to decide, tracking and profiling are areas where the group has recommended the European Commission consider adding more regulation to protect against abuses and the irresponsible deployment or management of the technology.

Once again, the grey areas are being exploited.

As there are only so many bodies in the European Commission or working for national regulators, and technology is advancing so quickly, there is often a void in the rules governing newly emerging segments. Artificial intelligence, surveillance and facial recognition certainly fall into this chasm, creating a digital wild-west landscape where those who do not understand the ‘law of unintended consequences’ play around with new toys.

In the UK, it was revealed that several private property owners and museums were using the technology for surveillance without telling consumers. Even more worryingly, some of this data has been shared with police forces. Information Commissioner Elizabeth Denham has already stated her agency will be looking into the deployments and will attempt to rectify the situation.

Prior to this revelation, a report from the Human Rights, Big Data & Technology Project attacked a trial by the London Metropolitan Police Force, suggesting it could be found illegal should it be challenged in court. The South Wales Police Force has also found itself in hot water after its own trials were shown to have only an 8% success rate.

Over in Sweden, the data protection regulator used powers granted by GDPR to fine a school which had been using facial recognition to monitor the attendance of pupils. The school claimed it had received consent from the students, but as they are in a position of dependence, this was not deemed satisfactory. The school was also found to have substandard processes for handling the data.

Finally, in the US, Facebook is going to find itself in court once again, this time over its implementation of facial recognition software in 2010. A class-action lawsuit has been brought against the social media giant, alleging the use of the technology was non-compliant with the Illinois Biometric Information Privacy Act.

This is one example where lawmakers have been very effective in getting ahead of trends. The law in question was enacted in 2008 and demands companies gain consent before any facial recognition technologies are introduced. This is an Act which should be applauded for its foresight.

The speed at which progress is being made with facial recognition in the surveillance world is incredibly worrying. Private and public parties have an obligation to consider the impact on the human right to privacy, though much disdain has been shown for these principles in recent months. Perhaps it is ignorance, short-sightedness or a lack of competence, but without rules to govern this segment, the unintended consequences could be compounded years down the line.

Another point worth noting is the gathering momentum to stop the wrongful implementation of facial recognition. Aside from Big Brother Watch raising concerns in the UK, the City of San Francisco is attempting to implement an approval function for police forces, while Google is facing an internal rebellion. Last week, it emerged several hundred employees had signed a petition refusing to work on any projects which would aid the government in tracking citizens through facial recognition surveillance.

Although the European Commission has neither confirmed nor denied the report, we suspect (or at the very least hope) work is underway to address this area. Facial recognition needs rules, or we will find ourselves in a very difficult position, similar to the ones we face today.

A lack of action surrounding fake news, online bullying, cybersecurity, supply chain diversity and resilience, or the consolidation of power in the hands of a few has created some difficult situations around the world, and the Commission and national governments are now finding it difficult to claw back control from the technology industry. This is one area where the European Commission desperately needs to get ahead of the industry; the risk and consequence of abuse is far too great.

UK’s laissez-faire attitude to privacy and facial recognition tech is worrying

Big Brother Watch has described the implementation of facial recognition tech as an ‘epidemic’ as it emerges police forces have been colluding with private industry on trials.

There are of course significant benefits to be realised through the introduction of facial recognition, but the risks are monstrous. It is a sensitive subject, where transparency should be assumed as a given, but the general public has little or no understanding of the implications for personal privacy rights.

Thanks to an investigation from the group, it has been uncovered that shopping centres, casinos and even publicly-owned museums have been using the technology. Even more worryingly, in some cases the data has been shared with police forces. Without public consultation, the introduction of such technologies is an insult to the general public and a violation of the trust which has been put in public institutions and private industry.

“There is an epidemic of facial recognition in the UK,” said Director of Big Brother Watch, Silkie Carlo.

“The collusion between police and private companies in building these surveillance nets around popular spaces is deeply disturbing. Facial recognition is the perfect tool of oppression and the widespread use we’ve found indicates we’re facing a privacy emergency.”

What is worth noting is that groups such as Big Brother Watch have a tendency to over-dramatise certain developments, adding an element of theatrics to drum up support. However, in this instance, we completely agree.

When introducing new technology to society, there should be some form of public consultation, especially when the risk of abuse can have such a monumental impact on everyday life. Here, the risk is to the human right to privacy, a right many in the UK overlook, owing to the assumption that rights will be honoured by those given the responsibility of managing our society.

The general public should be given the right to choose. Increased safety might be a benefit, but there will be a sacrifice to personal privacy. We should have the opportunity to say no.

While the UK Government is clip-clopping along, sat pleasantly atop its high horse and criticising other administrations for human rights violations, this incident blurs the line. Using facial recognition in a private environment without telling customers is suspect; sharing this data with police forces is simply wrong.

Is there any material difference between these programmes and initiatives launched by autocratic and totalitarian governments elsewhere in the world? It smells very similar to the dreary picture painted in George Orwell’s “1984”, with a nanny state assuming the right to decide what is reasonable and what is not.

And for those who appreciate a bit of irony, one of the examples of unwarranted surveillance Big Brother Watch identified was at Liverpool’s World Museum, during a “China’s First Emperor and the Terracotta Warriors” exhibition.

“The idea of a British museum secretly scanning the faces of children visiting an exhibition on the first emperor of China is chilling,” said Carlo. “There is a dark irony that this authoritarian surveillance tool is rarely seen outside of China.

“Facial recognition surveillance risks making privacy in Britain extinct.”

Aside from this museum, private development companies, including British Land, have been implementing the technology. There is reference to the technology in terms and conditions documents, though it is unlikely many members of the general public have been made aware.

As a result of the suspect implementations, including at Kings Cross in London, Information Commissioner Elizabeth Denham has launched an investigation. It will look into an increasingly common theme: whether the implementation of new technology is taking advantage of the slow-moving process of legislation, and the huge number of grey areas currently present in the law.

Moving forward, facial recognition technologies will have a role to play in the digital society. Away from the obvious risk of abuse, there are very material benefits. If a programme can identify fear or stress, for example, emergency services could potentially be alerted to an incident much more quickly. Responses to such incidents today rely on someone calling 999 in most cases; new technology could help here and save lives.

However, the general public must be informed, and blessings must be given. Transparency is key, and right now, it is missing.

Amazon has managed to bottle fear, but recognition debate remains

While facial recognition technologies are becoming increasingly controversial, it is always worth paying homage to innovation in this field and its real-world applications, when deployed responsibly.

We suspect people aren’t necessarily objecting to the concept of facial recognition technologies, but more to the application and lack of public consultation. You only have to look at some of the world’s less appetizing governments to see the negative implications for privacy and human rights, but there are of course significant benefits should it be applied in an ethically sound and transparent manner.

Over in the AWS labs, engineers have managed to do something quite remarkable: they have bottled the concept of fear and taught their AI programmes to recognise it.

“Amazon Rekognition provides a comprehensive set of face detection, analysis, and recognition features for image and video analysis,” the company stated on its blog. “Today, we are launching accuracy and functionality improvements to our face analysis features.

“With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: Happy, Sad, Angry, Surprised, Disgusted, Calm and Confused) and added a new emotion: Fear.”
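
For those curious how this surfaces to developers, below is a minimal sketch of calling the face analysis feature through the AWS SDK for Python (boto3); the S3 bucket and image names are placeholders rather than details from Amazon’s announcement.

```python
# Minimal sketch: Amazon Rekognition face analysis via boto3.
# The S3 bucket and object names below are placeholders.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "crowd.jpg"}},
    Attributes=["ALL"],  # "ALL" requests emotions, gender and other attributes
)

for face in response["FaceDetails"]:
    # Emotions come back as a list of {Type, Confidence} pairs,
    # now including the newly added FEAR type
    strongest = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(strongest["Type"], round(strongest["Confidence"], 1))
```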

When applied correctly, these technologies have incredible power to help society. You only have to think about some of the atrocities which have plagued major cities, as well as ongoing problems. Human eyes can only see so much, with police and security forces often relying on reports from the general public. With cameras able to recognise emotions such as fear, crimes could be identified while they are taking place, allowing speedier reactions from the relevant parties.

However, there are of course significant risks in the application of this technology. We have seen in China how such programmes are being used to track certain individuals and ethnic groups, while certain forces and agencies in the US are constantly rumoured to be considering the implementation of AI for facial recognition, profiling and the tracking of individuals. Some of these projects are incredibly worrying, and a violation of the privacy rights granted to the general public.

This is where governments are betraying the promise they have made to the general public. Rules and regulations have not been written for such technologies, therefore the agencies and forces involved are acting in a monstrously large grey area. There of course need to be rules in place to govern surveillance practices, but a public conversation should be considered imperative.

Any time the right to privacy is being compromised, irrespective of whether there are noble goals in mind, the public should be consulted. Voters should choose whether they are happy to sacrifice certain privacy rights and freedoms in the pursuit of safety. This is what transparency means, and this is exactly what has been disregarded to date.

IBM and Google reportedly swap morals for cash in Chinese surveillance JV

IBM and Google executives should be bracing for impact as the comet of controversy heads directly towards their offices.

Reports have emerged, via The Intercept, suggesting two of the US’ most influential and powerful technology giants have indirectly been assisting the Chinese Government with its campaign of mass surveillance and censorship. Both will try to distance themselves from the controversy, but this could have a significant impact on both firms.

The drama here is focused on a joint venture, the OpenPower Foundation, founded in 2013 by Google and IBM and featuring members such as Red Hat, Broadcom, Mellanox, Xilinx and Rackspace. The aim of the open-ecosystem organization is to facilitate and share advances in networking, server, data storage and processing technology.

To date, the group has been little more than another relatively uninteresting NPO, serving a niche in the industry, though one initiative is causing a stir. The OpenPower Foundation has been working with Xilinx and Chinese firm Semptian to create a new breed of chips capable of enabling computers to process incredible amounts of data. This might not seem extraordinary, though the application is where the issue lies.

On the surface, Semptian is a relatively ordinary Chinese semiconductor business, but when you look at its most profitable division, iNext, the story becomes a lot more sinister. iNext specialises in selling equipment to the Chinese Government to enable the mass-surveillance and censorship projects which have become so infamous.

It will come as little surprise a Chinese firm is aiding the Government with its nefarious objectives, but a link to IBM and Google, as well as a host of other US firms, will have some twitching with discomfort. We can imagine the only people who are pleased at this news are the politicians who are looking to get their faces on TV by theatrically condemning the whole saga.

Let’s start with what iNext actually does before moving on to the US firms involved in the controversy. iNext works with Chinese Government agencies by providing a product called Aegis. Aegis is an interception and analysis system which has been embedded into various phone and internet networks throughout the country. This is one of the products which enables the Chinese Government to keep such a close eye on the activities of its citizens.

Documentation acquired by The Intercept outlines the proposition in more detail.

“Aegis is not only the standard interception system but also the powerful analysis system with early warning and timely action capabilities. Aegis can work with all kinds of networks and 3rd party systems, from recovering, analysing, exploring, warning, early warning, locating to capturing. Aegis provides LEA with an end to end solution described as Deep Insight, Early Warning and Timely Action.”

Although the majority of this statement is corporate fluff, it does provide some insight into how the technology actually works. This is an incredibly powerful surveillance system, capable of locating individuals through application usernames, IP addresses or phone numbers, as well as accurately tracking the location of said individuals in real time.

Perhaps one of the most worrying aspects of this system is the ‘pre-crime’ element. Although the idea of predictive analytics has been met with controversy and considerable resistance in some societies, we suspect the Chinese Government does not have the same reservations.

iNext promises this feature can help prevent crime through the introduction of an early warning system. This raises all sorts of ethical questions: even if the data estimates were accurate to five nines, can you arrest someone when they haven’t actually committed a crime? This is the sticky position Google and IBM might have found themselves in.

OpenPower has said it was not aware of the commercial applications of the projects it manages, while its charter prevents it from getting involved. The objective of the foundation is to facilitate the progress of technology, not to act as judge and jury over its application. It’s a nice little way to keep controversy at arm’s length; inaction and negligence are presented as an appropriate defence.

For IBM and Google, who are noted as founding members of the OpenPower Foundation, a stance of ignorance might be enough to convince official institutions of their innocence, but the court of public opinion could swing heavily in the other direction. An indirect tie to such nefarious activities is enough for many to pass judgment.

When it comes to IBM, the plea of innocence becomes a little trickier. IBM is directly mentioned on the Semptian website, suggesting Big Blue has been working closely with the Chinese firm for some time, though the details of this relationship are unknown for the moment.

For any of the US firms mentioned here, it is not a comfortable situation to be in. Although they might be able to plead ignorance, it is quite difficult to believe. These are monstrous multinational billion-dollar corporations with hordes of lawyers, some of whom will be tasked with making sure the technology is not being used in situations which would get the firm in trouble.

Of course, this is not the first time US technology firms have found themselves on the wrong side of right. There have been numerous protests from employees of the technology giants as to how the technology is being applied in the real-world. Google is a prime example.

In April 2018, Google employees revolted over an initiative the firm was participating in with the US Government. Under the initiative, known as Project Maven, Google’s AI technology was used to improve the accuracy of drone strikes. As you can imagine, the Googlers were not happy at the thought of helping the US Government blow people up. Project Dragonfly was another which brought internal uproar; this time the Googlers were helping to create a version of the Google news app for China which would filter out certain stories the Government deemed undesirable.

Most of the internet giants will plead their case, suggesting their intentions are only to advance society, but there are numerous examples of contracts and initiatives which contradict this position.

Most developers or engineers, especially those who work for a Silicon Valley giant, work for the highest bidder, but there is a moral line few will cross. As we’ve seen before, employees are not happy to aid governments in the business of death, surveillance or censorship, and we suspect the same storyline will play out here.

Google and IBM should be preparing themselves for significant internal and external backlash.

FBI and London Met land in hot water over facial recognition tech

The FBI and London Metropolitan Police force will be facing some awkward conversations this week over unauthorised and potentially illegal use of facial recognition technologies.

Starting in the US, the Washington Post has been handed records dating back almost five years which suggest the FBI and ICE (Immigration and Customs Enforcement) have been using DMV databases to build a surveillance network without the consent of citizens. The emails were obtained by Georgetown Law researchers through public records requests.

Although law enforcement agencies have normalised biometrics as part of investigations nowadays (think fingerprint or DNA evidence left at crime scenes), such traces are only useful when catching repeat offenders. Biometric databases are built by obtaining data from those who have been previously charged, but in this case, the FBI and ICE have been accessing data on 641 million individuals, the vast majority of whom are innocent and were never consulted about the initiative.

In the Land of the Free, such hypocrisy is becoming almost second nature to national security and intelligence forces, who may well find themselves in some bother from a privacy perspective.

As it stands, there are no legislative or regulatory guidelines which authorise the development of such a complex surveillance system, nor has there been any public consultation with the citizens of the US. This act-first, tell-later mentality is something which is becoming increasingly common in countries the US has designated as national enemies, though there is little evidence authorities in the US have any more respect for the rights of their own citizens.

Heading across the pond to the UK, a report from the Human Rights, Big Data & Technology Project has identified ‘significant flaws’ with the way live facial recognition has been trialled in London by the Metropolitan Police force. The group, based out of the University of Essex Human Rights Centre, suggests it could be found to be illegal should it be challenged in court.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law,” said Dr Daragh Murray, who authored the report alongside Professor Peter Fussey.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR [live facial recognition]. Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset and was not an integral part of the process.”

The main gripe from the duo seems to be how the Met approached the trials. LFR was treated in a manner similar to traditional CCTV, failing to take into account the intrusive nature of facial recognition and its use of biometric processing. The Met did not consider the ‘necessary in a democratic society’ test established by human rights law, and therefore effectively ignored the impact on privacy rights.

There were also numerous other issues, including a lack of public consultation, the accuracy of the technology (only 8 of 42 matches were correct), poorly defined criteria for using the technology, and the accuracy and relevance of the ‘watchlist’ of suspects. However, the main concern from the University’s research team was that only the technical aspects of the trial were considered, not the impact on privacy.

There is a common theme in both of these instances: the authorities supposedly in place to protect our freedoms pay little attention to the privacy rights granted to us. There seems to be an ‘ends justify the means’ attitude, with little consideration for the human right to privacy. Such attitudes are exactly what the US and UK aim to eradicate when ‘freeing’ citizens of oppressive regimes abroad.

What is perhaps most concerning about these stories is the speed at which the technologies are being implemented. There has been little public consultation on the appropriateness of these technologies or whether the general public is prepared to sacrifice privacy rights in the pursuit of national security. Given the intrusive nature of facial recognition, authorities should not be allowed to make this decision on behalf of the general public, especially when there is so much precedent for abuse and privacy is a hot topic following scandals in private industry.

Of course, there are examples of the establishment slowing down progress to allow time for these considerations. In San Francisco, the city’s Board of Supervisors has made it illegal for police forces to implement facial recognition technologies unless approval has been granted. A force would have to demonstrate stringent justification, accountability systems and safeguards for privacy rights.

In the UK, Dr Murray and Professor Fussey are calling for a pause on the implementation or trialling of facial recognition technologies until the impact on and trade-off of privacy rights have been fully understood.

Facial recognition technologies are becoming incredibly useful when it comes to access and authentication, though there need to be some serious conversations about the privacy implications of using the tech in the world of surveillance and policing. At the moment, it seems to be nothing but an afterthought for the police forces and intelligence agencies involved, an incredibly worrying and dangerous attitude to have.

Orange points to privacy benefits through MEC

Mobile Edge Computing (MEC) is back on the buzzword agenda after spending a few years in the wilderness, and Orange has pointed to an interesting privacy benefit of the technology.

After getting a technology tour at Roland Garros this week, one of the quick demos offered some insight into the world of video analytics and edge computing. Using several wireless cameras scattered around the venue and various AI applications, Orange is able to keep track of the number of individuals in one particular area. This could be one of the entertainment areas or the courts themselves; the algorithm is able to give an accurate estimate of how populated these areas are, which can help with crowd control or security.

The idea of using facial recognition through video surveillance has started to create privacy concerns in recent months, as there is little awareness among the general public, who have not consented to being monitored, but this is where it gets interesting. Orange pointed out that the images are not detailed enough to identify specific individuals, only to count the number of individuals in an area; but even if they were, it wouldn’t matter, because of edge computing.

With processing power located at the edge of the network, the data can be processed and the insight captured before the raw footage is deleted. Useless information is sifted out on the edge, with only the relevant data or the insight sent back to the core. By empowering the edge, privacy concerns are negated, as personal information is never actually stored by Orange, only the insight, which would not be considered sensitive.
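
As a rough sketch of that flow, assuming a hypothetical person-detection model and core uplink (neither name reflects Orange’s actual stack), the logic on an edge node might look something like this:

```python
# Rough sketch of the edge-processing flow described above. detect_people()
# and send_to_core() are hypothetical stand-ins, not Orange's actual stack.
from typing import Any, Dict, List

def detect_people(frame: Any) -> List[Dict]:
    """Placeholder for an on-edge person-detection model."""
    return []

def send_to_core(insight: Dict) -> None:
    """Placeholder for the uplink from the edge node to the network core."""
    print("to core:", insight)

def process_frame(frame: Any, zone_id: str) -> None:
    people = detect_people(frame)  # inference happens on the edge node
    send_to_core({"zone": zone_id, "count": len(people)})  # only the count leaves
    del frame  # the raw footage is discarded, never stored centrally

process_frame(frame=b"raw-camera-bytes", zone_id="court-1")
```

The point is structural: the sensitive artefact, the frame itself, never leaves the edge node, so there is nothing personal for the core to store.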

This is not a revelation which is going to change the technology world, but it is an interesting little benefit which addresses a growing concern in the wider society.

UK Government’s mass surveillance deemed unlawful by courts

The UK Government’s mass surveillance and data collection activities have been ruled unlawful by the Court of Appeal after the laws were challenged by Labour MP Tom Watson and human rights group Liberty.

The law challenged by Watson, the Data Retention and Investigatory Powers Act (DRIPA), has now expired, though the government largely replicated and expanded the same powers in the Investigatory Powers Act, more commonly known as the Snoopers’ Charter. This controversial law came into effect in the early stages of 2017, though there is now likely to be a lot of backpedalling from the Government.

One of the more contentious aspects of the Snoopers’ Charter forced telcos to collect and store data on customers’ internet activities and phone records. Critics pointed out this information, which could be accessed by the government with little accountability, made everyone a person of interest. The Court of Appeal has now ruled that collecting information on someone who is not under investigation for a crime is indeed unlawful.

The Court of Appeal found two areas in particular to be unlawful:

  1. Access to the retained data was not restricted to the purpose of fighting serious crime
  2. Police and intelligence agencies acted as their own authority when infringing an individual’s privacy, where access should only be authorised by courts or an independent body

In short, the government gave itself a blank cheque, with no clauses for accountability or justification, to violate UK citizens’ right to privacy. Big Brother was certainly in free-flowing form.

“This legislation was flawed from the start,” said Watson. “It was rushed through Parliament just before recess without proper parliamentary scrutiny.

“The Government must now bring forward changes to the Investigatory Powers Act to ensure that hundreds of thousands of people, many of whom are innocent victims or witnesses to crime, are protected by a system of independent approval for access to communications data. I’m proud to have played my part in safeguarding citizens’ fundamental rights.”

“Yet again a UK court has ruled the Government’s extreme mass surveillance regime unlawful,” said Martha Spurrier of Liberty. “This judgment tells ministers in crystal clear terms that they are breaching the public’s human rights. The latest incarnation of the Snoopers’ Charter, the Investigatory Powers Act, must be changed.

“No politician is above the law. When will the Government stop bartering with judges and start drawing up a surveillance law that upholds our democratic freedoms?”

What is worth noting is that this case is not a direct challenge to the Snoopers’ Charter. Any accusations of wrongdoing are indirect, as the Snoopers’ Charter simply continued the powers granted to the government and intelligence agencies by DRIPA. Liberty is directly challenging the Snoopers’ Charter in court later in the year, though hopefully by that point the rules will have been torn up.

This is a very promising move from the Court of Appeal, as there have been some very worrying trends around the world; the concept of an individual’s privacy was starting to look like a footnote in the historical record. The Canadian Government has been expanding the powers of intelligence agencies, France and Germany have been exploring how encryption could be weakened, the US has been passing surveillance laws without consulting elected representatives and Australia has been trying to introduce its own anti-encryption laws.

Unfortunately, this is one of the few successful cases where worrying rules have been challenged. The UK is not the only example, as politicians feel fear-mongering messages and the prospect of terrorism are a perfectly valid reason to ignore and bastardise the human right to privacy. The concept of privacy is slowly becoming a thing of the past.