Europe set to join the facial recognition debate

With more authorities demonstrating they cannot be trusted to act responsibly or transparently, the European Commission is reportedly on the verge of putting the reins on facial recognition.

According to reports in the Financial Times, the European Commission is considering new rules which would extend consumer rights to cover facial recognition technologies. The move is part of a broader effort to address the ethical and responsible use of artificial intelligence in today's digital society.

Across the world, police forces and intelligence agencies are deploying technologies which pose a significant risk of abuse, without public consultation or processes to ensure accountability or justification. There are of course certain nations which do not care about the privacy rights of their citizens, but when the technology is implemented for surveillance purposes in the likes of the US, UK and Sweden, states where such rights are supposedly sacred, the line starts to blur.

The reasoning behind the implementation of facial recognition in surveillance networks is irrelevant; without public consultation and transparency, these police forces, agencies, public sector authorities and private companies are completely disregarding citizens' right to privacy.

These citizens might well support such initiatives, choosing greater security or consumer benefits over the right to privacy, but they have the right to be asked.

What is worth noting is that this technology can be a driver for positive change when implemented and managed correctly. Facial scanners are speeding up immigration processing at airports, while Telia is trialling a facial recognition payment system in Finland. When deployed with consideration and the right processes, there are many benefits to be realised.

The European Commission has neither confirmed nor denied the reports to Telecoms.com, though it did reaffirm its ongoing position on artificial intelligence during a press conference yesterday.

“In June, the high-level expert group on artificial intelligence, which was appointed by the Commission, presented the first policy recommendations and ethics guidelines on AI,” spokesperson Natasha Bertaud said during the afternoon briefing. “These are currently being tested and going forward the Commission will decide on any future steps in-light of this process which remains on-going.”

The Commission does not comment on leaked documents and memos, though reading between the lines, it is on the agenda. One of the points the 52-person expert group will address over the coming months is building trust in artificial intelligence, while one of the seven principles presented for consultation concerns privacy.

On the privacy side, parties implementing these technologies must ensure data ‘will not be used to unlawfully or unfairly discriminate’, as well as putting systems in place to control who can access the data. We suspect that in the rush to trial and deploy technology such as facial recognition, few systems and processes to drive accountability and justification have been put in place.

Although these points do not necessarily cover the citizen's right to decide, tracking and profiling are areas where the group has recommended the European Commission consider additional regulation to protect against abuse and the irresponsible deployment or management of the technology.

Once again, the grey areas are being exploited.

As there are only so many bodies in the European Commission or working for national regulators, and technology is advancing so quickly, there is often a void in the rules governing newly emerging segments. Artificial intelligence, surveillance and facial recognition certainly fall into this chasm, creating a digital wild-west where those who do not understand the ‘law of unintended consequences’ play around with new toys.

In the UK, it was revealed that several private property owners and museums were using the technology for surveillance without telling consumers. Even more worryingly, some of this data had been shared with police forces. Information Commissioner Elizabeth Denham has already stated her office will be looking into the deployments and will attempt to rectify the situation.

Prior to this revelation, a report from the Human Rights, Big Data & Technology Project attacked a trial by the London Metropolitan Police Force, suggesting it could be found illegal should it be challenged in court. The South Wales Police Force has also found itself in hot water after its own trials were found to have a success rate of only 8%.

Over in Sweden, the data protection regulator used powers granted by the GDPR to fine a school which had been using facial recognition to monitor pupil attendance. The school claimed it had obtained consent from the students, but as students are in a position of dependence, this was not deemed satisfactory. The school was also found to have substandard processes for handling the data.

Finally, in the US, Facebook will find itself in court once again, this time over its implementation of facial recognition software in 2010. A class-action lawsuit has been brought against the social media giant, alleging the use of the technology was non-compliant with the Illinois Biometric Information Privacy Act.

This is one example where lawmakers have been very effective in getting ahead of trends. The law in question was enacted in 2008 and requires companies to gain consent before introducing any facial recognition technologies. It is an Act which should be applauded for its foresight.

The speed at which facial recognition is progressing in the surveillance world is incredibly worrying. Private and public parties have an obligation to consider the impact on the human right to privacy, though little regard has been shown for these principles in recent months. Perhaps it is ignorance, short-sightedness or a lack of competence, but without rules to govern this segment, the unintended consequences could be compounded years down the line.

Another point worth noting is the gathering momentum against the wrongful implementation of facial recognition. Aside from Big Brother Watch raising concerns in the UK, the City of San Francisco is attempting to introduce an approval process for police use of the technology, while Google is facing an internal rebellion. Last week, it emerged that several hundred employees had signed a petition refusing to work on any projects which would aid the government in tracking citizens through facial recognition surveillance.

Although the European Commission has neither confirmed nor denied the report, we suspect (or at the very least hope) work is being undertaken to address this area. Facial recognition needs rules, or we will find ourselves in a very difficult position, much like the one we face today.

A lack of action on fake news, online bullying, cybersecurity, supply chain diversity and resilience, and the consolidation of power in the hands of a few has created some difficult situations around the world. Now the Commission and national governments are finding it difficult to rein in the progress of technology. This is one area where the European Commission desperately needs to get ahead of the industry; the risk and consequence of abuse are far too great.

NASA breach shows there is something wrong with data rules

Two months ago, the US National Aeronautics and Space Administration (NASA) was hacked; yesterday, it told its employees. For just over eight weeks, employees were blissfully unaware they had been the victims of cyber-crime.

In an internal memo sent to staff on December 18, the NASA management team informed employees that servers containing personal information on current and former employees had been hacked on October 23. The agency still does not know the full extent of the breach, though personal information, including Social Security numbers, has been compromised.

For those involved in the breach, nothing might happen. Or the stolen personal information could be used for any number of purposes, from ruining credit scores to opening credit card and bank accounts in the individual's name. In this instance, the affected individuals have had no opportunity to protect themselves.

“Upon discovery of the incidents, NASA cybersecurity personnel took immediate action to secure the servers and the data contained within,” the memo states.

“NASA and its Federal cybersecurity partners are continuing to examine the servers to determine the scope of the potential data exfiltration and identify potentially affected individuals. This process will take time. The ongoing investigation is a top agency priority, with senior leadership actively involved. NASA does not believe that any Agency missions were jeopardized by the cyber incidents.”

This is the issue with data protection and privacy laws at the moment. NASA may have wanted to inform employees and the world about the incident at the time, or it may not have. There are also rules in the US under which NASA could have been forced to keep quiet about the incident by law enforcement agencies during an investigation. While this might be the best way to catch the hackers, it will come as no comfort to those potentially impacted by the incident. They were left in the dark.

This is an example of government sacrificing the individual for the greater good. Authorities might be able to justify such actions by catching the hacker, thus making the world a slightly safer place for everyone else, but what compensation is that for the people who get hurt? This rule might fit into the bigger picture of government, but if even one person has been ripped off because of the delayed disclosure, NASA and the law enforcement agencies failed that person. For us, that is not good enough.

Perhaps there is a middle ground: employees are informed but held under some sort of non-disclosure agreement. Individuals could then take action to protect themselves while law enforcement agencies act without fear the hacker will go underground.

More than anything else, this incident shows the inadequacy of today's rules and regulations. The speed at which damage can be done in the digital world is startling, and people need to be as vigilant as possible. That means having all the available information to make informed decisions. Rules like these might have worked in a previous era, but they are outdated in today's digital society. Let's hope no-one feels the sharp end of the stick.