US Government to consider strict data localisation laws

US Senator Josh Hawley has proposed new legislation which would impose data localisation requirements on foreign technology companies.

The legislation, known as the National Security and Personal Data Protection Act, is targeted primarily at China, though it would not be surprising to see the concept extended elsewhere. Under the legislation, Chinese app developers who offer services in the US would not be allowed to store data in China, and US companies would also be banned from storing data on US citizens in China.

“Current law makes it far too easy for hostile foreign governments like China to access Americans’ sensitive data,” said Hawley, one of the more prominent critics of Big Tech in Congress.

“Chinese companies with vast amounts of personal data on Americans are required by Chinese law to provide that data to Chinese intelligence services. If your child uses TikTok, there’s a chance the Chinese Communist Party knows where they are, what they look like, what their voices sound like, and what they’re watching. That’s a feature TikTok doesn’t advertise.”

While such a move should have been expected from Hawley, given the concerns over the aggressive success of TikTok raised by politicians in recent months, it remains to be seen whether the sense of irony will be appreciated.

Back in June, rumours circulated regarding US posturing: US Secretary of State Mike Pompeo reportedly suggested the number of visas afforded to India would be limited unless the government dropped data localisation requirements imposed on US firms. Data protection and privacy issues are perhaps at the heart of such regulation and legislation, however there is also the economic bonus of data centre investments.

This is perhaps an aspect of the legislation which would appeal to US citizens and the White House. President Trump does thoroughly enjoy shouting and screaming about the economic value his administration is bringing to the country.

In the current climate where the US and Chinese governments are not on friendly terms, this is legislation which might be passed quickly. US politicians are fearful of Chinese intelligence agencies, and many would quickly jump on the opportunity to kill any links to China.

US citizens believe data collection risks outweigh the benefits

The technology world is becoming increasingly complicated and inaccessible for the majority, so it is of little surprise citizens are focusing on the negative.

A fair assumption is that the majority of individuals will mistrust something they do not understand. This is not a new concept and has been evident throughout the centuries, but the technology giants have rarely helped themselves, with secretive business models painting an incredibly opaque picture of how data is analysed.

According to the Pew Research Center, a supposedly politically neutral US think tank, the majority of US citizens do not trust the data tsunami which is sweeping through every aspect of our lives.

                                                          Private industry   Government
The citizen has little control over data collected by…          81%             84%
Risks outweigh the benefits for data collected by…              81%             66%
Concerned about how data is collected by…                       79%             64%
The citizen does not know how data is used by…                  59%             78%

What this data indicates is a lack of understanding, and perhaps a condemnation of the competency of those in control of the data to manage it appropriately. This is a significant risk to anyone involved in the newly flourishing data-sharing economy; if the general public starts to push back, success will be difficult to realise.

There are of course numerous elements to consider as to why the US general public is seemingly so set against the data economy. Firstly, perhaps Big Tech has been too mysterious with the way it functions.

Few people genuinely understand the way in which the big data machine works. There might be a basic understanding of its function, purpose and outcome, but Big Tech has been incredibly secretive when it comes to the nitty-gritty details. These are trade secrets after all; the likes of Google would not want to help rivals create better data-churning machines, as this would erode any competitive edge. But the general public is also being left in the dark.

This generally doesn’t matter until things start to go wrong, which leads us on to the second point. There have been too many high-profile data breaches and leaks, such as Equifax, as well as cases where data has been used irresponsibly, Cambridge Analytica being the prime example. When you combine negative outcomes with a lack of understanding of how the machine functions, the general public will start to become uneasy.

In general, more needs to be done to educate on several fronts. Firstly, how the data economy functions. Secondly, what rights individuals have to opt out of data collection; these rights do exist, and the fact the general public is not aware of them is a failure of government. Thirdly, what is being done today: how data is collected, stored, analysed and applied. And finally, the big picture: how this data can lead to benefits for society and the individual.

The issue raised here is very simple to understand. The general public is beginning to mistrust the digital economy because it is being asked to trust in a mechanism without any explanation. This is a significant challenge which needs to be addressed as soon as possible; negative ideas have a way of festering when left alone. More education is needed, or there could be resistance to further progress into the digital world.

Microsoft might be toying with European data protection compliance

The European Data Protection Supervisor has raised ‘serious concerns’ over whether Microsoft is compliant with data protection regulations.

The contracts in question are between the software giant and various European Union institutions which make use of its products and services. The central issue is whether the contractual terms are compliant with data protection laws intended to protect individual rights across the region from foreign bodies which do not hold data protection to the same standards.

“Though the investigation is still ongoing, preliminary results reveal serious concerns over the compliance of the relevant contractual terms with data protection rules and the role of Microsoft as a processor for EU institutions using its products and services,” a statement reads.

“Similar risk assessments carried out by the Dutch Ministry of Justice and Security confirmed that public authorities in the Member States face similar issues.”

The preliminary findings from the European Data Protection Supervisor follow on from investigations in the Netherlands, as well as changes to Microsoft’s privacy policies for its VoIP product Skype and AI assistant Cortana. The changes were seemingly a knee-jerk reaction to reports that contractors were listening to audio clips to improve translations and the accuracy of inferences.

What is worth noting is that Microsoft is not the only company which has been bending the definition of privacy with regard to contractors and audio clips. Amazon and Google have also been dragged into the hazy definition of privacy and consent.

The issue which seems to be at the heart of this investigation is one of arm’s length. While government authorities and agencies might hand over responsibility for data protection and privacy compliance to the cloud companies, the European Data Protection Supervisor is suggesting more scrutiny and oversight should be applied by said government parties.

Once again, the definition and extent of privacy principles are causing problems. Europe takes a much more stringent stance on the depth of privacy, as well as the rights afforded to individuals, than other regions around the world. Ensuring the rights of European citizens are extended elsewhere was one of the primary objectives of the GDPR, though it seems there are still teething problems.

“When using the products and services of IT service providers, EU institutions outsource the processing of large amounts of personal data,” the statement continues.

“Nevertheless, they remain accountable for any processing activities carried out on their behalf. They must assess the risks and have appropriate contractual and technical safeguards in place to mitigate those risks. The same applies to all controllers operating within the EEA.”

One development which could result in additional scrutiny is The Hague Forum, an initiative to create standardised contracts for European member states which meet the baseline data protection and privacy conditions set out. The European Data Protection Supervisor has encouraged all European institutions to join the Forum.

Although GDPR was seen as a headache for many companies around the world, such statements from the European Data Protection Supervisor prove this is not an area which can simply be addressed once and then forgotten. GDPR was supposed to set a baseline, and there will be more regulation to build further protections. Perhaps the fact that Microsoft is seemingly non-compliant with current regulations justifies the introduction of more rules and red tape.

UK starts laying groundwork for another assault on privacy

UK Home Secretary Priti Patel is reportedly set to sign a transatlantic agreement offering the UK Government more clout over the stubborn messaging platforms.

First and foremost, this is not a pact between the UK and US which would compel the messaging platforms to break their encryption protections, but it is a step towards offering the UK Government greater access.

According to The Times, Patel will sign an agreement with the US next month which will offer the UK powers to compel US companies which offer messaging services to hand over data to police forces, intelligence services and prosecutors. After the Clarifying Lawful Overseas Use of Data (CLOUD) Act was signed into law last year, the US Government was afforded the opportunity to share more data with foreign governments, and this would appear to be the first such agreement.

This is of course not the first time the UK Government has set its sights on undermining user privacy. Former Home Secretary Amber Rudd championed the Government’s efforts during yesteryear to break the deadlock, attempting to force these companies to introduce ‘backdoors’ which would enable access to information.

There are of course numerous reasons why this would be seen as an awful idea. Firstly, the introduction of a backdoor is a vulnerability by design. It doesn’t matter how well secured it is; if there is a vulnerability, the nefarious actors in the darker corners of the web will find it.

Secondly, stringent security measures should not be undermined for the sake of it or because the consumer is not driven by security as a reason for using the services. Your correspondent does not buy a car because it has the best airbags, but he would be irked if they didn’t work when called upon.

Finally, governments and public offices have not proven themselves responsible enough to be trusted with such a potentially severe violation of the human right to privacy. And let’s not forget, Article 8 of the European Convention on Human Rights is solely focused on privacy.

What is worth noting is this pact with the US Government is not a measure to introduce backdoors into encryption software, but you should always bear in mind what the UK Government is driving towards with incremental steps. It is easy to forget the bigger picture when small steps are made, but how often have you looked back and wondered how we got to a certain situation?

The CLOUD Act offers US agencies the right to collect limited information from the messaging platform providers. Currently, US authorities can request information such as who the user is messaging, when, and how frequently. The law does not grant access to the content of the messages, though it is a step towards wielding greater control and influence over the social media companies.
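To make the distinction concrete, below is a minimal sketch of the two categories; the interfaces and field names are entirely hypothetical, intended only to illustrate the metadata/content split described above and mirror no real request format.

```typescript
// Hypothetical sketch of the split described above; every name here is
// invented for illustration, not taken from any actual CLOUD Act schema.
interface DisclosableMetadata {
  senderId: string;      // who the user is
  recipientId: string;   // who they are messaging
  sentAt: Date;          // when the messages were exchanged
  messageCount: number;  // how frequently the parties communicate
}

interface MessageContent {
  body: string; // the message itself: not disclosable under the Act as described
}
```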

Should Patel sign this agreement, and it is still an if right now, this power would be extended to the UK Government to collect information on UK citizens.

What is worth noting is this is not official, though it would not surprise us. Rudd attempted to revolutionise the relationship between the UK Government and messaging platforms, and this failed spectacularly. This would be a more reasonable approach, taking baby steps towards the ultimate goal.

France told to stay in its lane over ‘right to be forgotten’

Google has won a landmark case against French regulator Commission nationale de l’informatique et des libertés (CNIL) over the ‘right to be forgotten’ rules.

After being fined €100,000 for refusing to de-reference certain links in markets outside the CNIL’s jurisdiction, Google took the regulator to the Court of Justice of the European Union. And Europe’s top court agreed with the search giant.

“The operator of a search engine is not required to carry out a de-referencing on all versions of its search engine,” the court ruling states.

“It is, however, required to carry out that de-referencing on the versions corresponding to all the Member States and to put in place measures discouraging internet users from gaining access, from one of the Member States, to the links in question which appear on versions of that search engine outside the EU.”

In short, Google must de-reference inside the European Union, while also discouraging internet users inside the bloc from accessing de-referenced content on versions of the search engine outside it. This will be complicated (there is always a workaround if you know what you are doing), however it is a win for Google.

For the CNIL, however, this is a humbling ruling. The regulator has effectively been told to stick to its job and not try to force its will upon companies where it has no right to do so. And we wholeheartedly agree.

The French regulator has no right to impose its own rules on Google when it is operating in other sovereign nation states.

This case dates back to 2015 when the idea of ‘right to be forgotten’ was forced upon Google. In France, and generally across Europe, an individual or company can request Google de-reference search results which are damaging or false. This does not give individuals freedom to remove any reference to them which they don’t like, but it does allow for the removal of false information. These are reasonable rules.

In reaction to the rules, Google geo-fenced internet users in the European Union, but refused to de-reference information on versions of the search engine outside the bloc. This is a reasonable response and course of action.

This is what the French regulator had an issue with, though it has quite rightly been told to stay within its remit. This is a reasonable judgement.

What the French regulator was trying to do was wrong and would have set a damaging precedent. No government or regulator should be allowed to apply its own rules outside its borders. The European Union is a trickier case, as rules can be extended to member states, though there is a hard border at the edge of the bloc.

Thankfully the Court of Justice of the European Union has applied logic to the situation.

Facebook starts taking data guardian role seriously

Facebook needs to get back in the good books of both regulators and the general public sharpish, and it seems it is taking a machete to the developer ecosystem to do so.

As part of the agreement with the Federal Trade Commission, Facebook has promised to create a more comprehensive oversight model for the development and implementation of apps on its platform, and it does seem to be taking its responsibility seriously this time around. Whether this prevents a repeat of the Cambridge Analytica scandal which kicked off the privacy debate remains to be seen, though it is making the right noises.

“Our App Developer Investigation is by no means finished,” said Ime Archibong, VP of Product Partnerships.

“But there is meaningful progress to report so far. To date, this investigation has addressed millions of apps. Of those, tens of thousands have been suspended for a variety of reasons while we continue to investigate.”

Although it is very difficult to figure out how many app developers and applications are actually on the Facebook platform at any single point, Archibong has stated that 400 developers have been deemed to be breaking the rules. These 400 are responsible for the ‘tens of thousands’ of apps which have been suspended.

While this is a promising start from the social media giant, it will have to do a lot more. We struggle to believe the number of suspect app developers is as low as 400. There might be 400 in London, but worldwide it is going to be a number which is monstrously larger.

This is where Facebook will struggle to be the perfect guardian of our digital lives. With an unthinkable number of developers and apps, it will never be able to protect us from every bad actor. Whether best effort is good enough for the critics remains to be seen.

Dating back to March 2018, this is a saga Facebook cannot shake off. The general public, politicians and regulators were all enraged by what can only be described as gross negligence from the social media giant. Rules were in place, though they were not nearly comprehensive enough, and rarely were bad actors put to the sword and held accountable.

This is what Facebook has to prove to its critics: it is a company which is responsible and can act as an effective guardian of users’ personal information. It is currently being judged in the court of public opinion, a very difficult place to make any progress when the masses are baying for blood.

Although the Cambridge Analytica scandal is only part of the problem, it was the incident which turned the tides against the technology industry. Along with other privacy scandals and debatable business practices, Silicon Valley is being placed under the microscope and it is not working out well. Best case scenario for the likes of Facebook and Google is stricter regulation, though the worst outcome could see acquisitions reversed in the pursuit of increased competition and diluted influence at these companies.

This Facebook investigation is looking to identify the developers most likely to break the rules, though stricter guidelines are also being put in place. Archibong suggests many of the quiz apps which plague the platform will be banned, as many will be judged to collect too much information when measured against the value they offer. Moving forward, these developers shouldn’t be able to get away with it.

This in itself is the problem; Facebook was asleep at the wheel. It created a valuable product and then started to count the cash. It didn’t evolve the rules as the platform grew into an entirely different proposition and it didn’t keep an eye on whether app developers were breaking the basic rules which it had in place anyway.

If Facebook’s quest continues on its current trajectory, the developer ecosystem might have to work a bit harder to access personal information. Apps with very limited functionality and value will not be granted access to the same treasure troves, while developer teams will also have to prove that collecting personal information will improve the experience for the user.

Another interesting point raised in the commitment is an annual review. Archibong suggests every app will be assessed on a yearly basis, and those which do not respond effectively to the audits will be temporarily suspended or banned.

It remains to be seen whether Facebook is doing enough to keep critics happy, though there is no such thing as being too heavy-handed here. Facebook will have to take the strictest approach, overcompensating even, to ensure it regains the trust and credibility it threw away through inaction.

US tech fraternity pushes its own version of GDPR

The technology industry might enjoy light-touch regulatory landscapes, but change is on the horizon with what appears to be an attempt to be the master of its own fate.

In an open letter to senior members of US Congress, 51 CEOs from the technology and business community have asked for a federal law governing data protection and privacy. It appears to be a push to gain consistency across the US, removing the ability of aggressive and politically ambitious Attorneys General and Senators to mount their own local crusades against the technology industry.

Certain aspects of the framework proposed to the politicians are remarkably similar to GDPR, such as the right for consumers to control their own personal data, seek corrections and even demand deletion. Breach notifications could also be introduced, though the coalition of CEOs is calling for the FTC to be the tip of the spear.

Interestingly enough, there are also calls to remove ‘private right of action’, meaning only the US Government could take an offending company to court over violations. In a highly litigious society like the US, this would be a significant win for any US corporation.

And while there are some big names attached to the letter, there are some notable omissions. Few will be surprised Facebook’s CEO Mark Zuckerberg has not signed a letter requesting a more comprehensive approach to data privacy, though Alphabet, Microsoft, Uber, Verizon, T-Mobile US, Intel, Cisco and Oracle are also absent.

“There is now widespread agreement among companies across all sectors of the economy, policymakers and consumer groups about the need for a comprehensive federal consumer data privacy law that provides strong, consistent protections for American consumers,” the letter states.

“A federal consumer privacy law should also ensure that American companies continue to lead a globally competitive market.”

CEOs who have signed the letter include Jeff Bezos of Amazon, Alfred Kelly of Visa, Salesforce’s Keith Block, Steve Mollenkopf of Qualcomm, Randall Stephenson of AT&T and Brian Roberts of Comcast.

Although it might seem unusual for companies to be requesting a more comprehensive approach to regulation, the over-arching ambition seems to be one of consistency. Ultimately, these executives want one, consolidated approach to data protection and privacy, managed at a Federal level, as opposed to a potentially fragmented environment with the States applying their own nuances.

It does appear the technology and business community is attempting to take some control over its own fate. As much as these companies would want a light-touch regulatory environment to continue, this is not an outcome which is on the table. The world is changing, but by consolidating this evolution into a single agency, lobbying can be much more effective, and cheaper.

The statement has been made through Business Roundtable, a lobby group for larger US corporations, requesting a national consumer privacy law which would pre-empt any equivalent from the states or local government. Definitions and ownership rules should be modernised, and a risk-orientated approach to data management, storage and analysis is also being requested.

Ultimately, this looks like a case of damage control. There seems to be an acceptance of regulation overhaul, however the CEOs are attempting to control exposure. In consolidating the regulations through the FTC, punishments and investigations can theoretically only be brought forward through a limited number of routes, with the companies only having to worry about a single set of rules.

Consistency is a very important word in the business world, especially when it comes to regulation.

What we are currently seeing across the US is aggression towards the technology industry from almost every legal avenue. Investigations have been launched by federal agencies and state-level Attorneys General, while lawsuits have also been filed by non-profits and law firms representing citizens. It’s a mess.

Looking at the Attorneys General, there do seem to be a couple attempting to make a name for themselves in the public domain, perhaps as a first step towards higher political office. For example, it would surprise few if New York Attorney General Letitia James harbours larger political ambitions, and striking a blow against Facebook on behalf of the consumer would certainly win positive PR points.

Another interesting element is the fragmentation of the regulations governing data protection and privacy. There are more aggressive rules in place in New York and California than in North Carolina and Alaska, for example. Within California it becomes even more fragmented; just look at the work the City of San Francisco is undertaking to limit the power of facial recognition and data analytics. These rules will effectively make it impossible to implement the technology, whereas in the State of Illinois, technology companies only have to seek explicit consent from the consumer.

Inconsistency creates confusion and non-compliance. Confusion and non-compliance cost a lot of money through legal fees, restructuring, product customisation and fines.

Finally, from a PR perspective, this is an excellent move. The perception of Big Business at the moment is that it does not care about the privacy rights of citizens. There have been too many scandals and data breaches for anyone to take claims of caring about consumer privacy seriously. By suggesting a more comprehensive and consistent approach to privacy, Big Business can more legitimately claim to be the consumer champion.

A more consistent approach to regulation helps the Government, consumers and business, however this is also a move by the US technology and business community to control its own fate: to decrease the power and influence of disruptive Attorneys General and make the regulatory evolution more manageable.

Momentum is gathering pace towards a more comprehensive and contextually relevant privacy regulatory landscape, and it might not be too long before a US version of Europe’s GDPR is introduced.

Is $170 million a big enough fine to stop Google privacy violations?

Another week has passed, and we have another story focusing on privacy violations at Google. This time it has cost the search giant $170 million, but is that anywhere near enough?

The Federal Trade Commission (FTC) has announced yet another fine for Google; this time its YouTube video platform has been caught breaking privacy rules. An investigation found YouTube had been collecting and processing the personal data of children without seeking permission from the individuals or their parents.

“YouTube touted its popularity with children to prospective corporate clients,” said FTC Chairman Joe Simons. “Yet when it came to complying with COPPA [the Children’s Online Privacy Protection Act], the company refused to acknowledge that portions of its platform were clearly directed to kids. There’s no excuse for YouTube’s violations of the law.”

Once again, a prominent member of Silicon Valley society has been caught flouting privacy laws. The ‘act now, seek permission later’ attitude of the internet giants is on show, and there doesn’t seem to be any evidence of these incredibly powerful and monstrously influential companies respecting laws or the privacy rights of users.

At some point, authorities are going to have to ask whether these companies will ever respect these rules on their own, or whether they have to be forced. If there is a carrot and stick approach, the stick has to be sharp, and we wonder whether it is anywhere near sharp enough. The question which we would like to pose here is whether $170 million is a large enough deterrent to ensure Google does something to respect the rules.

Privacy violations are nothing new when it comes to the internet. This is partly down to the flagrant attitude of those left in positions of responsibility, but also the inability of rule makers to keep pace with the eye-wateringly fast progress Silicon Valley is making.

In this example, rules have been introduced to hold Google accountable, however we do not believe the fine is anywhere near large enough to ensure action.

Taking Google’s 2018 revenues, the $170 million fine represents 0.124% of the total revenue made across the year. Google made, on average, $370 million per day, roughly $15 million per hour. It would take Google just over 11 hours and 20 minutes to pay off this fine.

Of course, what is worth taking into account is that these numbers are 12 months old. Looking at the most recent financial results, revenues increased 19% year-on-year for Q2 2019. Over the 91-day period ending June 30, Google made $38.9 billion, or $427 million a day, $17.8 million an hour. It would now take less than 10 hours to pay off the fine.
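For those who want to check the maths, a quick sketch is below. The only assumed input is the reported full-year 2018 revenue of roughly $136.8 billion (which is what the 0.124% figure implies); the small gap with the ‘11 hours and 20 minutes’ above comes down to rounding the hourly rate.

```typescript
// Rough sketch of the arithmetic above. Inputs are the reported revenue
// figures; everything else is simple division.
const fineUsd = 170e6; // the FTC fine

function hoursOfRevenueToPayFine(revenueUsd: number, periodDays: number): number {
  const perHour = revenueUsd / (periodDays * 24); // average hourly revenue
  return fineUsd / perHour;                       // hours of revenue the fine represents
}

// 2018 full year: ~$136.8bn over 365 days -> ~$375m/day, ~$15.6m/hour
console.log(hoursOfRevenueToPayFine(136.8e9, 365).toFixed(1)); // ~10.9 hours
// Q2 2019: $38.9bn over 91 days -> ~$427m/day, ~$17.8m/hour
console.log(hoursOfRevenueToPayFine(38.9e9, 91).toFixed(1));   // ~9.5 hours
```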

Fines are supposed to act as a deterrent, a call to action to avoid receiving another one. We question whether these numbers are relevant to Google and if the US should consider its own version of Europe’s General Data Protection Regulation (GDPR).

This is a course which would strike fear into the hearts of Silicon Valley’s leadership, as well as pretty much every other company with any form of digital presence. It was hard work to become GDPR compliant, though it was necessary. Those who break the rules are now potentially exposed to a fine of €20 million or 4% of annual global turnover, whichever is higher. British Airways was recently fined £183 million for GDPR violations, a figure representing just 1.5% of total revenues due to co-operation from BA during the investigation and the fact it owned up.

More importantly, European companies are now taking privacy, security and data protection very seriously, though the persistent presence of privacy violations in the US suggests a severe overhaul of the rules and punishments is required.

Of course, Google and YouTube have reacted to the news in the way you would imagine. The team has come, cap in hand, to explain the situation.

“We will also stop serving personalized ads on this content entirely, and some features will no longer be available on this type of content, like comments and notifications,” YouTube CEO Susan Wojcicki said in a statement following the fine.

“In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and we’ll also use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.”

The appropriate changes have been made to privacy policies and the way in which ads are served to children, though amazingly, the blog post does not feature the words ‘sorry’, ‘apology’, ‘wrong’ or ‘inappropriate’. There is no admission of fault, simply a statement that suggests they will be compliant with the rules.

We wonder how long it will be before Google is caught breaking privacy rules again. Of course, Google is not alone here; if you cast the net wider to include everyone from Silicon Valley, we suspect there will be another incident, investigation or fine to report on next week.

Privacy rules are simply not acting as a deterrent. These companies have grown too large for the fines imposed by agencies to have a material impact, and we suspect Google made much more than $170 million through the adverts served to children over this period. If the fine does not exceed the benefit, will the guilty party stop? Of course not; Google is designed to make money, not serve the world.

Europe set to join the facial recognition debate

With more authorities demonstrating they cannot be trusted to act responsibly or transparently, the European Commission is reportedly on the verge of putting the reins on facial recognition.

According to reports in The Financial Times, the European Commission is considering imposing new rules which would extend consumer rights to include facial recognition technologies. The move is part of a greater upheaval to address the ethical and responsible use of artificial intelligence in today’s digital society.

Across the world, police forces and intelligence agencies are deploying technologies which pose a significant risk of abuse, without public consultation or processes to create accountability and justification. There are of course certain nations which do not care about the privacy rights of citizens, though when you see the technology being implemented for surveillance purposes in the likes of the US, UK and Sweden, states where such rights are supposedly sacred, the lines start to blur.

The reasoning behind the implementation of facial recognition in surveillance networks is irrelevant; without public consultation and transparency, these police forces, agencies, public sector authorities and private companies are completely disregarding the citizens’ right to privacy.

These citizens might well support such initiatives, electing for greater security or consumer benefits over the right to privacy, but they have the right to be asked.

What is worth noting is that this technology can be a driver for positive change in the world when implemented and managed correctly. Facial scanners are speeding up the immigration process in airports, while Telia is trialling a payment system using facial recognition in Finland. When deployed with consideration and the right processes, there are many benefits to be realised.

The European Commission has not confirmed or denied the reports to Telecoms.com, though it did reaffirm its on-going position on artificial intelligence during a press conference yesterday.

“In June, the high-level expert group on artificial intelligence, which was appointed by the Commission, presented the first policy recommendations and ethics guidelines on AI,” spokesperson Natasha Bertaud said during the afternoon briefing. “These are currently being tested and going forward the Commission will decide on any future steps in-light of this process which remains on-going.”

The Commission does not comment on leaked documents and memos, though reading between the lines, it is on the agenda. One of the points the 52-person expert group will address over the coming months is building trust in artificial intelligence, while one of the seven principles presented for consultation concerns privacy.

On the privacy side, parties implementing these technologies must ensure data ‘will not be used to unlawfully or unfairly discriminate’, as well as setting systems in place to dictate who can access the data. We suspect that in the rush to trial and deploy technology such as facial recognition, few systems and processes to drive accountability and justification have been put in place.

Although these points do not necessarily cover the right for the citizen to decide, tracking and profiling are areas where the group has recommended the European Commission consider adding more regulation to protect against abuses and irresponsible deployment or management of the technology.

Once again, the grey areas are being exploited.

As there are only so many bodies in the European Commission or working for national regulators, and technology is advancing so quickly, there is often a void in the rules governing newly emerging segments. Artificial intelligence, surveillance and facial recognition certainly fall into this chasm, creating a digital wild-west landscape where those who do not understand the ‘law of unintended consequences’ play around with new toys.

In the UK, it was revealed that several private property owners and museums were using the technology for surveillance without telling consumers. Even more worryingly, some of this data has been shared with police forces. Information Commissioner Elizabeth Denham has already stated her agency will be looking into the deployments and will attempt to rectify the situation.

Prior to this revelation, a report from the Human Rights, Big Data & Technology Project attacked a trial by the London Metropolitan Police, suggesting it could be found illegal should it be challenged in court. The South Wales Police has also found itself in hot water after its own trials were found to have only an 8% success rate.

Over in Sweden, the data protection regulator used powers granted by GDPR to fine a school which had been using facial recognition to monitor pupil attendance. The school claimed it had received consent from the students, but as students are in a dependent position, this was not deemed satisfactory. The school was also found to have substandard processes for handling the data.

Finally, in the US, Facebook is going to find itself in court once again, this time over the implementation of facial recognition software in 2010. A class-action lawsuit has been brought against the social media giant, suggesting the use of the technology was non-compliant under the Illinois Biometric Information Privacy Act.

This is one example where law makers have been very effective in getting ahead of trends. The law in question was enacted in 2008 and demanded companies gain consent before any facial recognition technologies are introduced. This is an Act which should be applauded for its foresight.

The speed at which progress is being made with facial recognition in the surveillance world is incredibly worrying. Private and public parties have an obligation to consider the impact on the human right to privacy, though much distaste has been shown for these principles in recent months. Perhaps it is ignorance, short-sightedness or a lack of competence, but without rules to govern this segment, the unintended consequences could be compounded years down the line.

Another point worth noting is the gathering momentum to stop the wrongful implementation of facial recognition. Aside from Big Brother Watch raising concerns in the UK, the City of San Francisco is attempting to implement an approval function for police forces, while Google is facing an internal rebellion. Last week, it emerged several hundred employees had signed a petition refusing to work on any projects which would aid the government in tracking citizens through facial recognition surveillance.

Although the European Commission has not confirmed or denied the report, we suspect (or at the very least hope) work is underway to address this area. Facial recognition needs rules, or we will find ourselves in a very difficult position a few years down the line, much like the one we are in today.

A lack of action surrounding fake news, online bullying, cybersecurity, supply chain diversity and resilience, or the consolidation of power in the hands of a few has created some difficult situations around the world. Now the Commission and national governments are finding it difficult to claw back the progress of technology. This is one area where the European Commission desperately needs to get ahead of the technology industry; the risk and consequence of abuse is far too great.

European court rules websites are equally responsible for some shared data

If you’ve got Facebook ‘like’ functionality on your website then you could be held responsible for any misuse of user data by the social media giant.

The Court of Justice of the European Union made this judgment as part of an ongoing action brought by a German consumer rights group, Verbraucherzentrale NRW, against German clothing etailer Fashion ID. It turns out that merely having the ‘like’ button embedded on your site results in personal data being automatically transferred to Facebook for it to use in whatever way it chooses, without the consent or even knowledge of the exploited punter.

Sifting through the legalese it looks like the court has concluded that Fashion ID is responsible for the user data it passes on to Facebook since the only reason it embedded the button in the first place is the commercial benefit it gets from people sharing its stuff on social media. This, in turn, means it must be subject to certain data protection obligations such as at least telling visitors to its site what they’re letting themselves in for.
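For website owners wondering what ‘telling visitors what they’re letting themselves in for’ might look like in practice, one common pattern is to load the button only after explicit consent, rather than embedding it directly in the page. A minimal sketch is below; the element IDs and the consent wiring are assumptions for illustration, not Facebook’s prescribed integration.

```typescript
// Minimal consent-gating sketch: nothing is requested from Facebook's
// servers until the visitor opts in, because the SDK script tag and the
// button markup are only created after the click. Element IDs and the
// consent-banner wiring are hypothetical.
function loadLikeButton(slot: HTMLElement, pageUrl: string): void {
  slot.innerHTML = `<div class="fb-like" data-href="${pageUrl}"></div>`;

  const sdk = document.createElement("script");
  sdk.async = true;
  sdk.src = "https://connect.facebook.net/en_GB/sdk.js#xfbml=1&version=v4.0";
  document.body.appendChild(sdk); // the data transfer starts only here
}

document.getElementById("social-consent")?.addEventListener("click", () => {
  const slot = document.getElementById("like-button-slot");
  if (slot) loadLikeButton(slot, window.location.href);
});
```

The same pattern applies to any third-party embed: defer the network request until consent exists, and the controller obligations the court describes become far easier to meet.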

While the case itself is relatively niche and arcane, it could represent the thin end of the wedge when it comes to data protection and consumer rights online in general. The internet is awash with contraptions, such as cookies, designed to track your every move and feed that data into the cyber hive-mind, all the better to work out how best to entice you into spending cash on stuff you didn’t even know you wanted.

Having said that, it could be the case that, since Cambridge Analytica, the internet has already got the memo, as those ‘like’ buttons seem to be much less common than they were a few years ago. High-profile fines for Facebook and other violators of GDPR rules probably mean that website owners have become wary of embedding any old third-party rubbish onto their sites, and rulings such as this should serve as a warning not to slip back into bad habits.