Amnesty calls out Google and Facebook on privacy abuses

Amnesty International has unveiled a new report heavily criticising Google and Facebook, and the alleged strategies employed to abuse privacy rights of individuals.

The report claims the likes of Google and Facebook force the general public into a Faustian bargain: users are effectively asked to forgo certain human rights in order to access the digital society on which we are now so dependent.

“The internet is vital for people to enjoy many of their rights, yet billions of people have no meaningful choice but to access this public space on terms dictated by Facebook and Google,” said Kumi Naidoo, Secretary General of Amnesty International.

“To make it worse this isn’t the internet people signed up for when these platforms started out. Google and Facebook chipped away at our privacy over time.

“We are now trapped. Either we must submit to this pervasive surveillance machinery – where our data is easily weaponised to manipulate and influence us – or forego the benefits of the digital world. This can never be a legitimate choice.”

The extensive report outlines the business models which allegedly trap the general public into forgoing privacy rights, and calls on governments to create more comprehensive privacy frameworks to prevent the harvesting of data. The key issue is the conditions placed on accessing these prominent services: Amnesty International does not believe Google and Facebook should be able to deny access if a user does not consent to data collection.

What is worth noting is that opting out of certain services is an option. Google's mapping products offer an opt-out, for example, though this only scratches the surface. Not only are these opt-outs limited, we suspect few in the general public realise the alternative exists.

While this is certainly one of the more comprehensive attacks on the global dominance of two of Silicon Valley's most prominent residents, it is of course not the first. Amnesty International is quite late to the party, as various politicians, including presidential hopeful Elizabeth Warren, and non-profits such as the Electronic Frontier Foundation, have been protesting Big Tech for some time.

In fairness to the critics, there are some valid points. Firstly, on the market dominance of these two technology giants, and secondly, on the way we as a society have sleepwalked into a position where the landscape has been artificially manufactured to compound this dominance.

Some of the more radical critics of Big Tech have been pushing for divestments of certain assets. This would be incredibly difficult, if not impossible, to deliver, though the idea does force regulators to think more proactively about approving acquisitions and mergers in the first place.

For example, if regulators knew then what we know now, would Google have been allowed to acquire Android and YouTube? Equally, would Facebook have been allowed to absorb Instagram and WhatsApp? These six different platforms account for such a monstrous amount of internet traffic, opinion, news and debate, it seems irresponsible for such power to be concentrated into two companies.

Whether this can be fixed is still up for debate, though we are sceptical. Those under threat of divestment are working to integrate the assets in question with other areas of the business in such a complex manner that separating them would be an operational and financial nightmare. If these companies can make it look disastrous to pluck apart their operations, politicians will likely back off; no government wants to destroy one of the main drivers of the economy, after all.

The second valid point is the creation of the digital public square. The means by which we share opinion and debate ideas has fundamentally shifted in recent years. People might be afraid of confrontation in real life, but they certainly aren’t online. Some might question whether this is healthy, but it is a reality of today’s society.

However, in accessing the digital public square, Amnesty International argues, too many rights are waived. Privacy is the central cog, and Big Tech has been gradually eroding the concept of privacy for years. In 2010 we would never have dreamed of sharing some of the information we do today, but like the boiling frog, we have allowed the environment to change without protest.

The issue at the heart of this ongoing debate is of course the treasure trove of data being hoarded by Big Tech. These are companies whose very lifeblood is information, hence why services are offered to the consumer for free. These services have become critical to the way in which we communicate, learn and debate; avoiding the platforms is an impossible task for some.

It is always worth pointing out that while Amnesty International is highly critical of the dominance of Facebook and Google, it is also enjoying the benefits. The report has been circulated on the various platforms to draw more eyeballs to the issue, while the organisation runs ads through both companies to attract attention and donations.

Like many of the other critical voices, Amnesty International is calling for greater protections for the consumer. The organisation hasn't gone as far as to call for a break-up of Big Tech, perhaps realising this is an unachievable goal, but it argues further restrictions should be placed on the companies which so easily influence every aspect of our lives.

Facebook and Google are here to stay, primarily because they make incredibly intelligent and forward-looking investment decisions, though how much influence they have on the future is open to debate. Today, these companies have scarily detailed profiles on users, though whether the political rhetoric to limit these powers is anything more than campaign promises remains to be seen.

Indian state says it can intercept any communications and hack any device it wants

In response to a question about WhatsApp hacking in parliament, the Indian home affairs Minister revealed the apparently limitless snooping power at his disposal.

The information comes courtesy of TechCrunch, which also helpfully linked to the source material. The Indian government was asked to comment on the following:

  • Whether the Government does tapping of WhatsApp calls and messages and if so, the details thereof;
  • The protocol being followed in getting permissions before tapping WhatsApp calls and messages;
  • Whether it is similar to that of mobile phones/telephones;
  • Whether the Government uses Pegasus software of Israel for this purpose;
  • Whether the Government does tapping of calls and messages of other platforms like Facebook Messenger, Viber, Google and similar platforms and if so, the details thereof.

While it didn’t address each point individually, the Indian home affairs Minister, Kishan Reddy, answered with the following statement:

Section 69 of the Information Technology Act, 2000 empowers the Central Government or a State Government to intercept, monitor or decrypt or cause to be intercepted or monitored or decrypted, any information generated, transmitted, received or stored in any computer resource in the interest of the sovereignty or integrity of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence relating to above or for investigation of any offence.

There followed some vague stuff about government agencies not having blanket permission to hack electronic communications and devices, and that they would have to ask really nicely before they were allowed to do what they want. But the long and short of it is that anything you say or do in India can be viewed by the government whenever it fancies it.

Pegasus software refers to spyware made by NSO Group, which WhatsApp has openly accused of hacking its service. The government response didn’t address that question at all but it’s beyond question that there is a growing industry around the production of malware designed to help governments spy on their citizens.

Five years ago the India-based Software Freedom Law Centre said the Indian government was issuing over 100,000 telephone interception orders per year. It seems safe to assume that number has grown considerably since then, and when you factor in all the other agencies with a piece of this action, you’re looking at a lot of state spying.

In India, as elsewhere, claimed interference in the electoral process, be that through misinformation or more sinister means, is being used as the justification for state interference in private matters. Any time a government claims it needs to spy on its citizens in the name of safety, the correct response is to ask whose safety it has in mind.

California proposes strictest privacy rules in the US

California Attorney General Xavier Becerra has unveiled new privacy proposals which have the potential to rival the impact of Europe’s GDPR on the digital economy.

When Europe announced its General Data Protection Regulation, the digital economy was thrown into chaos. Businesses around the world had to audit monstrous amounts of data, as well as reconfigure business models, data collection procedures and relationships to ensure compliance. The rules being proposed here are slightly different, but Becerra is enforcing a privacy-first mentality which might not sit comfortably with some in the digital economy.

There are three components of this proposed legislation to keep an eye on. Firstly, the consumer has the right to request details on the data being stored by companies. Secondly, they have the right to demand this information be deleted. And thirdly, companies will have to seek consent from the consumer to monetize the data.

“Knowledge is power, and in the internet age knowledge is derived from data,” said Becerra. “Our personal data is what powers today’s data-driven economy and the wealth it generates. It’s time we had control over the use of our personal data. That includes keeping it private.

“We take a historic step forward today to protect Californians’ inalienable right to privacy. Once again, California leads the way putting people first in the Age of the Internet.”

However, before the privacy enthusiasts get too excited, there are some hurdles to negotiate. The original California Consumer Privacy Act (CCPA) has been passed and will come into effect on January 1, though additional bills have been passed to water down the strength of these rules.

Although this will hit some like a bad smell, this is the reality of politics. Lobbyists in the US are incredibly powerful, and they are being fuelled by a very profitable technology industry with a lot to lose. This is not to say the new rules will not make an impact, though they might not be as revolutionary as some would hope when they come into effect.

That said, this will create the strongest privacy legislative regime in the US, ironically in the home of the companies who play so carelessly with privacy rights.

Looking at the similarities with GDPR, it does seem there has been some inspiration drawn from the rules. The right to request more information, as well as the right to demand deletion, are two elements which seem to be taken from GDPR. The final element mentioned above is very interesting and we suspect will be the focal point of the lobby efforts as these rules gather momentum.

The inclusion of a ‘Do not sell my data’ link is an aspect no-one in the data-sharing economy will want to see. The industry has largely profited to date through inaction. No-one can do anything about the monetization of data short of refusing to download the app. Consumers are effectively being forced into participating in the digital economy as there are no rules to provide an alternative. This element of the legislation would certainly cause a stir.

Some people will not like the fact companies are making money off their personal data without them getting a share of the rewards, irrespective of whether they are getting a service for free. Some will object on ethical grounds. Some will reject the concept as the risk of data breaches or leaks is deemed too great. Some will feel uneasy as there are still so many unknowns regarding the darker corners of the world wide web.

Whatever the reason an individual might dislike the current status quo, as there has been no alternative, it has mattered little. The introduction of an alternative presents a lot of unknown scenarios. More moving parts will have to be factored into risk assessment protocols. It presents uncertainty, which is the enemy of profit.

Interestingly enough, Becerra seems to have learnt that the residents of Silicon Valley have very elusive lawyers. Also included in the rules are definitions of who would be subject to them. A company falls within scope if it meets any one of the following conditions:

  • Has revenues in excess of $25 million
  • Buys, receives, or sells the personal information of 50,000 or more consumers, households, or devices
  • Derives 50% or more of annual revenues from selling data

These are quite crafty conditions and could potentially cover every type of organization out there. The lawyers will have to be on top form to find the grey areas here.
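The scoping test can be sketched as a simple predicate. This is an illustrative sketch, not the statute's text: the function and parameter names are our own, and we assume meeting any one threshold is enough to bring a company in scope, consistent with the observation that the conditions could cover almost any organization.

```python
# Hypothetical sketch of the CCPA applicability thresholds described above.
# Assumption: satisfying any single condition brings a business in scope.
def ccpa_applies(annual_revenue_usd: float,
                 consumers_data_traded: int,
                 share_of_revenue_from_selling_data: float) -> bool:
    return (annual_revenue_usd > 25_000_000                 # revenue test
            or consumers_data_traded >= 50_000              # volume test
            or share_of_revenue_from_selling_data >= 0.5)   # business-model test

# A small data broker with modest revenue is still covered
# by the volume or business-model tests:
print(ccpa_applies(2_000_000, 10_000, 0.6))  # True
```

This "any one of" structure is what makes the conditions so broad: a company need not be large to be caught, only data-hungry.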

The rules still have to negotiate the twists and turns of the political process before the digital economy needs to get too worried, but California is setting the pace when it comes to tackling privacy concerns in the US.

UK, US and Australia demand security delay from Facebook

Politicians from the UK, the US and Australia have penned an open letter to Facebook CEO Mark Zuckerberg requesting the team delay end-to-end encryption plans.

Signed by UK Home Secretary Priti Patel, US Attorney General William Barr, Acting Secretary of Homeland Security Kevin McAleenan, and Australian Minister for Home Affairs Peter Dutton, the letter requests that, before any encryption technologies are applied to messaging services, Facebook include a means for enforcement agencies to access the content transmitted across its platforms.

Once again, politicians are defying logic by requesting the creation of a backdoor to by-pass the security and privacy features which are being implemented on messaging platforms and services.

“We are committed to working with you to focus on reasonable proposals that will allow Facebook and our governments to protect your users and the public, while protecting their privacy,” the letter states. “Our technical experts are confident that we can do so while defending cyber security and supporting technological innovation.”

It is as if the politicians do not live in the real world. We understand governments have a duty to protect society, and part of this will include monitoring the communications and activities of nefarious individuals, but this is not the right way to go about doing it.

Using the argument of security to undermine security and make citizens less secure is a preposterous idea, almost laughable. The ‘technical experts’ might be confident a backdoor can be built, but how do you protect it? This letter is requesting the construction of a vulnerability into security features, and once a vulnerability is there, it is only a matter of time before it is exposed by the suspect individuals in the rotting corners of society.

What is being suggested here is similar to building a high-security facility in the real world, with 15-foot electrified walls, guards and watchdogs, and helicopters patrolling overhead, but then leaving the back door unlocked. It doesn’t matter how good the defences are; eventually someone will find their way to the back door, open it, and let all their friends know how it was done. Chaos would eventually find a way.

This is of course a theoretical situation, and the hackers might never find a way to or through the backdoor, but why tempt fate? No-one leaves their home believing they might be burgled that night, but they lock the door in any case. Why create a situation where the prospect of chaos is a possibility, however faint? This seems like nothing more than simple logic.

As mentioned before, police forces and intelligence agencies are being tasked with keeping society safe. This is a very difficult job, especially with the progress of technology. Facebook, and others in the technology industry, should assist wherever possible (and legal), though this is not the right way to go about the situation.

This does put Facebook in a difficult position. The company is currently attempting to repair the damage to its reputation, as well as regain trust from both governments and wider society. However, it increasingly looks impossible to satisfy both parties.

In March, Facebook CEO Mark Zuckerberg outlined a new focus for the company: it would hold the concept of privacy dear, and all new services would be built with privacy at the forefront. Thanks to the Cambridge Analytica scandal, Facebook’s reputation as a guardian of personal information has been severely damaged, so this new approach is critical to regaining credibility in the eyes of its users.

However, end-to-end encryption is a key element of this privacy strategy. Facebook cannot fulfil its promise to the user and satisfy the demands being laid out in this letter. If it was to build in a vulnerability, it could not tell the user in all honesty it has done everything possible to ensure security and privacy.

As the letter states, Facebook is doing more to clean-up its platform.

“In 2018, Facebook made 16.8 million reports to the US National Center for Missing & Exploited Children (NCMEC) – more than 90% of the 18.4 million total reports that year,” the letter states. “As well as child abuse imagery, these referrals include more than 8,000 reports related to attempts by offenders to meet children online and groom or entice them into sharing indecent imagery or meeting in real life.”

This is the situation Facebook is in. It is never going to be able to remove all the hideous conversations and activity on its platform, but governments will demand it does. Something will always slip through the net, and the sharp stick of the law will be there to punish the company. Facebook will never be able to do enough to satisfy the demands of governments, and will therefore always be in a defensive position.

However, you should not be distracted by the rhetoric which is being put forward in this letter. Yes, there are some horrendous activities which occur on the platform. Yes, Facebook should, and probably could, do more to assist police forces and intelligence services. Yes, the digital economy has largely shirked responsibility in the years leading to today. But no, building vulnerabilities in the system is not the right way forward.

These politicians are saying the right things to gain public support. These actions are in the pursuit of catching child molesters and terrorists; who wouldn’t want to help? But you have to look at the collateral damage. Users would be left open to identity theft, fraud and blackmail. These messaging platforms are used to have private conversations, exchange bank account details and discuss holiday plans. The number of criminals who could be caught is nothing compared to the billions who would be exposed to hackers on the web.

The idea which is presented here does have good intentions, but it pays no consideration to the collateral damage. The negatives of introducing a backdoor vastly outweigh the positives.

Quite frankly, we are still surprised to be having this conversation. Undermining security is no way to improve security. Governments need to understand this is not a viable option.

UK starts laying groundwork for another assault on privacy

UK Home Secretary Priti Patel is reportedly set to sign a transatlantic agreement offering the UK Government more clout over the stubborn messaging platforms.

First and foremost, this is not a pact between the UK and US which would compel the messaging platforms to break their encryption protections, but it is a step towards offering the UK Government more opportunity.

According to The Times, Patel will sign an agreement with the US next month which will offer the UK powers to compel US companies offering messaging services to hand over data to police forces, intelligence services and prosecutors. After the Clarifying Lawful Overseas Use of Data (CLOUD) Act was signed into law last year, the US Government was afforded the opportunity to share more data with foreign governments, and this would appear to be the first of such agreements.

This is of course not the first time the UK Government has set its eyes on undermining user privacy. Former Home Secretary Amber Rudd championed the Government’s efforts in years past, attempting to force these companies to introduce ‘backdoors’ which would enable access to information.

There are of course numerous reasons why this would be seen as an awful idea. Firstly, the introduction of a back-door is a vulnerability by design. It doesn’t matter how well secured it is, if there is a vulnerability the nefarious actors in the darker corners of the web will find it.

Secondly, stringent security measures should not be undermined for the sake of it or because the consumer is not driven by security as a reason for using the services. Your correspondent does not buy a car because it has the best airbags, but he would be irked if they didn’t work when called upon.

Finally, governments and public offices have not proven themselves responsible enough to be trusted with such a potential violation of the human right to privacy. And let’s not forget, Article 8 of the European Convention on Human Rights is solely focused on privacy.

What is worth noting is this pact with the US Government is not a measure to introduce back-doors into encryption software, but you should always bear in mind what the UK Government is driving towards with incremental steps. It is easy to forget the bigger picture when small steps are made, but how often have you looked back and wondered how we got to a certain situation?

The CLOUD Act offers US agencies the right to collect limited information from the messaging platform providers. Currently, US authorities can request information such as who the user is messaging, when, and how frequently. The law does not grant access to the content of the messages, though it is a step towards wielding greater control and influence over the social media companies.

Should Patel sign this agreement, and it is still an if right now, this power would be extended to the UK Government to collect information on UK citizens.

What is worth noting is this is not official, though it would not surprise us. Rudd attempted to revolutionise the relationship between the UK Government and messaging platforms, and this failed spectacularly. This would be a more reasonable approach, taking baby steps towards the ultimate goal.

US tech fraternity pushes its own version of GDPR

The technology industry might enjoy light-touch regulatory landscapes, but change is on the horizon with what appears to be an attempt to be the master of its own fate.

In an open letter to senior members of US Congress, 51 CEOs from the technology and business community have asked for a federal law governing data protection and privacy. It appears to be a push to gain consistency across the US, removing the ability of aggressive and politically ambitious Attorneys General and Senators to mount their own local crusades against the technology industry.

Certain aspects of the framework proposed to the politicians are remarkably similar to GDPR, such as the right for consumers to control their own personal data, seek corrections and even demand deletion. Breach notifications could also be introduced, and the coalition of CEOs is calling for the FTC to be the tip of the spear.

Interestingly enough, there are also calls to remove ‘private right of action’, meaning only the US Government could take an offending company to court over violations. In a highly litigious society like the US, this would be a significant win for any US corporation.

And while there are some big names attached to the letter, there are some notable omissions. Few will be surprised Facebook’s CEO Mark Zuckerberg has not signed a letter requesting a more comprehensive approach to data privacy, though Alphabet, Microsoft, Uber, Verizon, T-Mobile US, Intel, Cisco and Oracle are also absent.

“There is now widespread agreement among companies across all sectors of the economy, policymakers and consumer groups about the need for a comprehensive federal consumer data privacy law that provides strong, consistent protections for American consumers,” the letter states.

“A federal consumer privacy law should also ensure that American companies continue to lead a globally competitive market.”

CEOs who have signed the letter include Jeff Bezos of Amazon, Alfred Kelly of Visa, Salesforce’s Keith Block, Steve Mollenkoph of Qualcomm, Randall Stephenson of AT&T and Brian Roberts of Comcast.

Although it might seem unusual for companies to be requesting a more comprehensive approach to regulation, the over-arching ambition seems to be one of consistency. Ultimately, these executives want one, consolidated approach to data protection and privacy, managed at a Federal level, as opposed to a potentially fragmented environment with the States applying their own nuances.

It does appear the technology and business community is attempting to exert some sort of control over its own fate. As much as these companies would want the light-touch regulatory environment to continue, this is not an outcome which is on the table. The world is changing, but by consolidating this evolution into a single agency, the lobbyists can be much more effective, and cheaper.

The statement has been made through Business Roundtable, a lobby group for larger US corporations, requesting a national consumer privacy law which would pre-empt any equivalent from the states or local government. Definitions and ownership rules should be modernised, and a risk-orientated approach to data management, storage and analysis is also being requested.

Ultimately, this looks like a case of damage control. There seems to be an acceptance of regulation overhaul, however the CEOs are attempting to control exposure. In consolidating the regulations through the FTC, punishments and investigations can theoretically only be brought forward through a limited number of routes, with the companies only having to worry about a single set of rules.

Consistency is a very important word in the business world, especially when it comes to regulation.

What we are currently seeing across the US is aggression towards the technology industry from almost every legal avenue. Investigations have been launched by federal agencies and state-level Attorneys General, while lawsuits have also been filed by non-profits and law firms representing citizens. It’s a mess.

Looking at the Attorneys General, there do seem to be a couple attempting to make a name for themselves by pushing into the public domain. These might well be the first steps towards higher office in the political domain. For example, it would surprise few if New York Attorney General Letitia James harbours larger political ambitions, and striking a blow against Facebook on behalf of the consumer would certainly gain positive PR points.

Another interesting element is the fragmentation of the regulations governing data protection and privacy. For example, there are more aggressive rules in place in New York and California than in North Carolina and Alaska. Within California it becomes even more fragmented: just look at the work the City of San Francisco is undertaking to limit the power of facial recognition and data analytics. These rules will effectively make it impossible to implement the technology, while in the State of Illinois, technology companies only have to seek explicit consent from the consumer.

Inconsistency creates confusion and non-compliance. Confusion and non-compliance cost a lot of money through legal fees, restructuring, product customisation and fines.

Finally, from a PR perspective, this is an excellent move. The perception of Big Business at the moment is that it does not care about the privacy rights of citizens. There have been too many scandals and data breaches for anyone to take claims of caring about consumer privacy seriously. By suggesting a more comprehensive and consistent approach to privacy, Big Business can more legitimately claim it is the consumer champion.

A more consistent approach to regulation helps the Government, consumers and business; however, this is a move by the US technology and business community to control its own fate. It is a move to decrease the power and influence of the disruptive Attorneys General and make the regulatory evolution more manageable.

Momentum is gathering pace towards a more comprehensive and contextually relevant privacy regulatory landscape, and it might not be too long before a US version of Europe’s GDPR is introduced.

Is $170 million a big enough fine to stop Google privacy violations?

Another week has passed, and we have another story focusing on privacy violations at Google. This time it has cost the search giant $170 million, but is that anywhere near enough?

The Federal Trade Commission (FTC) has announced yet another fine for Google, this time the YouTube video platform has been caught breaking privacy rules. An investigation found YouTube had been collecting and processing personal data of children, without seeking permission from the individuals or parents.

“YouTube touted its popularity with children to prospective corporate clients,” said FTC Chairman Joe Simons. “Yet when it came to complying with COPPA [the Children’s Online Privacy Protection Act], the company refused to acknowledge that portions of its platform were clearly directed to kids. There’s no excuse for YouTube’s violations of the law.”

Once again, a prominent member of Silicon Valley society has been caught flouting privacy laws. The ‘act now, seek permission later’ attitude of the internet giants is on show, and there doesn’t seem to be any evidence of these incredibly powerful and monstrously influential companies respecting laws or the privacy rights of users.

At some point, authorities are going to have to ask whether these companies will ever respect these rules on their own, or whether they have to be forced. If there is a carrot and stick approach, the stick has to be sharp, and we wonder whether it is anywhere near sharp enough. The question which we would like to pose here is whether $170 million is a large enough deterrent to ensure Google does something to respect the rules.

Privacy violations are nothing new when it comes to the internet. This is partly down to the flagrant attitude of those left in positions of responsibility, but also the inability of rule makers to keep pace with the eye-wateringly fast progress Silicon Valley is making.

In this example, rules have been introduced to hold Google accountable, however we do not believe the fine is anywhere near large enough to ensure action.

Taking Google’s 2018 revenues, the $170 million fine represents 0.124% of the total made across the year. Google made, on average, $370 million per day, roughly $15 million per hour. It would take Google just over 11 hours and 20 minutes to pay off this fine.

Of course, what is worth taking into account is that these numbers are 12 months old. Looking at the most recent financial results, revenues increased 19% year-on-year for Q2 2019. Over the 91-day period ending June 30, Google made $38.9 billion, or $427 million a day, $17.8 million an hour. It would now take less than 10 hours to pay off the fine.
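The back-of-the-envelope arithmetic above is easy to reproduce. The inputs below are assumptions taken from the article and Google's public filings (a $170m fine, roughly $136.8bn of FY2018 revenue, and $38.9bn over the 91 days of Q2 2019); the sketch simply expresses the fine as a share of annual revenue and as hours of income.

```python
# Deterrence math: how big is a $170m fine relative to Google's revenue?
fine = 170e6

rev_2018 = 136.8e9                           # assumed FY2018 revenue
share = fine / rev_2018 * 100                # fine as % of annual revenue
hours_2018 = fine / (rev_2018 / (365 * 24))  # hours of revenue to cover it

rev_q2_2019 = 38.9e9                         # Q2 2019 revenue (91 days)
hours_2019 = fine / (rev_q2_2019 / (91 * 24))

print(f"{share:.3f}% of 2018 revenue")          # ~0.124%
print(f"{hours_2018:.1f} h at 2018 run-rate")   # ~10.9 h
print(f"{hours_2019:.1f} h at Q2 2019 run-rate")  # ~9.5 h
```

The unrounded figures land slightly under the article's "11 hours and 20 minutes", which used a rounded $15 million per hour; either way, the fine amounts to less than half a day of revenue.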

Fines are supposed to act as a deterrent, a call to action to avoid receiving another one. We question whether numbers of this size even register with Google, and whether the US should consider its own version of Europe’s General Data Protection Regulation (GDPR).

This is a course which would strike fear into the hearts of Silicon Valley’s leadership, as well as pretty much every other company with any form of digital presence. Becoming GDPR compliant was hard work, though it was necessary. Those who break the rules are now potentially exposed to a fine of €20 million or 4% of annual global turnover, whichever is higher. British Airways was recently fined £183 million for GDPR violations, a figure which represented only 1.5% of total revenues thanks to co-operation from BA during the investigation and the fact it owned up.

More importantly, European companies are now taking privacy, security and data protection very seriously, while the persistent stream of privacy violations in the US suggests a severe overhaul of its rules and punishments is required.

Of course, Google and YouTube have reacted to the news in the way you would imagine. The team has come, cap in hand, to explain the situation.

“We will also stop serving personalized ads on this content entirely, and some features will no longer be available on this type of content, like comments and notifications,” YouTube CEO Susan Wojcicki said in a statement following the fine.

“In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and we’ll also use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.”

The appropriate changes have been made to privacy policies and the way ads are served to children, though amazingly, the blog post does not feature the words ‘sorry’, ‘apology’, ‘wrong’ or ‘inappropriate’. There is no admission of fault, simply a statement suggesting the company will now comply with the rules.

We wonder how long it will be before Google is caught breaking privacy rules again. Of course, Google is not alone here; cast the net wider across Silicon Valley and we suspect there will be another incident, investigation or fine to report on next week.

Privacy rules are simply not acting as a deterrent. These companies have grown too large for the fines imposed by agencies to have a material impact. We suspect Google made much more than $170 million through the adverts served to children over this period, and if the fine does not exceed the benefit, why would the guilty party stop? Google is designed to make money, not to serve the world.

Losing face in seconds: the app takes deepfakes to a new depth

Zao, a new mobile app coming out of China, can replace characters in TV or movie clips with the user’s own facial picture within seconds, raising new privacy and fraud concerns.

Developed by Momo, the company behind Tantan, China’s answer to Tinder, Zao went viral shortly after it was made available on the iOS App Store in China, Japan, India, Korea and a handful of other Asian markets. It allows users to swap a character in a video clip for their own face. The user chooses a character in a clip from a library of selections, often iconic Hollywood movies or popular TV programmes, uploads his or her own picture, and lets the app do the swapping in the cloud. In about eight seconds the swap is done, and the user can share the altered clip on social media.

While many are enjoying the quirkiness of the app, others have raised concerns. First there is privacy. Before users can upload pictures for the app to swap, they have to log in with their phone number and email address, literally losing face and handing identification over to the app. More worryingly, under an earlier version of its terms and conditions the app assumed full rights to the altered videos, and therefore to the users’ images.

Another concern is fraud. Facial recognition is used extensively in China, in benign and not-so-benign circumstances alike. Once an altered video with the user’s face in it is shared on social networks, it is out of the user’s control and open to abuse by belligerent parties. One such possible abuse is payment. Alipay, the online and mobile payment system of Alibaba, has enabled check-out by face: the customer only needs to look at the camera when leaving the retailer, and the bill is charged to the user’s Alipay account. By adding a bit of fun to the process, check-out by face not only facilitates retail transactions but also continuously enriches Alibaba’s database. (It would not be a complete surprise if this were one reason behind the euphoria towards AI voiced by Jack Ma, Alibaba’s founder.) The payment platform rushed to reassure its users that the system cannot be tricked by the images on Zao, without sharing details of how.

Though Zao is not the first AI-powered deepfake application, it is one of the most polished, and therefore most unsettling. In another recent case, involving voice simulation and the controversial scholar Jordan Peterson, an AI-powered voice simulator enabled users to type out sentences of up to 280 characters for the tool to read out loud in a distinct, uncannily accurate Jordan Peterson voice. This led Peterson to call for wide-ranging legislation to protect the “sanctity of your voice, and your image.” He called the stealing of other people’s voices a “genuinely criminal act, regardless (perhaps) of intent.”

One can only imagine the impact of seamless image doctoring, coupled with flawless voice simulation, on all aspects of life, not least on the already eroded trust in news.

The good news is that the Zao developer is responding to users’ concerns. The company said on its official Weibo account (China’s answer to Twitter) that it understood the concerns about privacy and is thinking about how to fix the issues, but asked users to “please give us a little time”. The app’s T&Cs have been updated following the outcry: the app will now only use uploaded data for app improvement purposes, and once a user deletes a video from the app, it will also be deleted in the cloud.


UK’s laissez-faire attitude to privacy and facial recognition tech is worrying

Big Brother Watch has described the spread of facial recognition tech as an ‘epidemic’ as it emerges that police forces have been colluding with private industry on trials.

There are of course significant benefits to be realised through the introduction of facial recognition, but the risks are monstrous. It is a sensitive subject where transparency should be a given, yet the general public has little or no understanding of the implications for personal privacy rights.

Thanks to an investigation by the group, it has been uncovered that shopping centres, casinos and even publicly-owned museums have been using the technology. Even more worryingly, in some cases the data has been shared with police forces. Without public consultation, the introduction of such technologies is an insult to the general public and a violation of the trust placed in public institutions and private industry.

“There is an epidemic of facial recognition in the UK,” said Director of Big Brother Watch, Silkie Carlo.

“The collusion between police and private companies in building these surveillance nets around popular spaces is deeply disturbing. Facial recognition is the perfect tool of oppression and the widespread use we’ve found indicates we’re facing a privacy emergency.”

What is worth noting is that groups such as Big Brother Watch have a tendency to overplay certain developments, adding an element of theatrics to drum up support and dramatize events. However, in this instance, we completely agree.

When introducing new technology to society, there should be some form of public consultation, especially when the risk of abuse could have such a monumental impact on everyday life. Here, the risk is to the human right to privacy, a benefit many in the UK overlook due to the assumption that rights will be honoured by those given responsibility for managing our society.

The general public should be given the right to choose. Increased safety might be a benefit, but there will be a sacrifice to personal privacy. We should have the opportunity to say no.

While the UK Government clip-clops along, sat pleasantly atop its high horse criticising other administrations for human rights violations, this incident blurs the line. Using facial recognition in a private environment without telling customers is suspect; sharing that data with police forces is plain wrong.

Is there any material difference between these programmes and initiatives launched by autocratic and totalitarian governments elsewhere in the world? It smells very similar to the dreary picture painted in George Orwell’s “1984”, with a nanny state assuming the right to decide what is reasonable and what is not.

And for those who appreciate a bit of irony, one of the examples of unwarranted surveillance Big Brother Watch identified was at Liverpool’s World Museum, during a “China’s First Emperor and the Terracotta Warriors” exhibition.

“The idea of a British museum secretly scanning the faces of children visiting an exhibition on the first emperor of China is chilling,” said Carlo. “There is a dark irony that this authoritarian surveillance tool is rarely seen outside of China.

“Facial recognition surveillance risks making privacy in Britain extinct.”

Aside from this museum, private development companies, including British Land, have been implementing the technology. There is reference to the technology in terms and conditions documents, though it is unlikely many members of the general public are aware of it.

As a result of these suspect implementations, including at Kings Cross in London, Information Commissioner Elizabeth Denham has launched an investigation. It will look into an increasingly common theme: whether the implementation of new technology is taking advantage of the slow-moving process of legislation, and the huge number of grey areas currently present in the law.

Moving forward, facial recognition technologies will have a role to play in the digital society. Away from the obvious risk of abuse, there are very material benefits. If a programme can identify fear or stress, for example, emergency services could potentially be alerted to an incident much more quickly. Today, responses to such incidents mostly rely on someone calling 999; new technology could help here and save lives.

However, the general public must be informed, and its blessing must be given. Transparency is key, and right now, it is missing.

Facebook faces yet another monstrous privacy headache in Illinois

Just as the Cambridge Analytica scandal re-emerged to heighten Facebook frustrations, the social media giant is now facing a class-action lawsuit over facial recognition.

It has been a tough couple of weeks for Facebook. With the ink still wet on a $5 billion FTC fine, the UK Government questioning discrepancies in evidence presented to Parliamentary Committees and a Netflix documentary reopening the wounds of the Cambridge Analytica scandal, the last thing needed was another headache. This is exactly what has been handed across to Menlo Park from Illinois.

In a 3-0 ruling, the US Court of Appeals for the Ninth Circuit has ruled against Facebook, allowing a class-action lawsuit to proceed over the implementation of facial-recognition technologies without consent or the creation of a public policy.

“Plaintiffs’ complaint alleges that Facebook subjected them to facial-recognition technology without complying with an Illinois statute intended to safeguard their privacy,” the court opinion states.

“Because a violation of the Illinois statute injures an individual’s concrete right to privacy, we reject Facebook’s claim that the plaintiffs have failed to allege a concrete injury-in-fact for purposes of Article III standing. Additionally, we conclude that the district court did not abuse its discretion in certifying the class.”

After introducing facial-recognition technology to the platform in 2010 to offer tag suggestions on uploaded photos and videos, Facebook became the subject of a lawsuit under the Illinois Biometric Information Privacy Act. This law compels companies to create a public policy before implementing facial-recognition technologies and analysing biometric data, as a means to protect the privacy rights of consumers.

Facebook appealed against the lawsuit, suggesting the plaintiffs had not demonstrated material damage and that the lower courts in California had therefore exceeded their authority. However, the appeals court dismissed this argument, and the lawsuit will proceed as planned.

The law in question was enacted in 2008 with the intention of protecting consumer privacy. As biometric data can be seen as being as unique as a social security number, legislators feared the risk of identity theft, as well as the numerous unknowns around how the technology could be implemented in the future. It was a protective piece of legislation, and it looks years ahead of its time when you consider the inability of legislators to create relevant rules today.

As part of this legislation, private companies are compelled to establish a “retention schedule and guidelines for permanently destroying biometric identifiers and biometric information”. The statute also forces companies to obtain permission before applying biometric technologies used to identify individuals or analyse and retain data.

Facebook is not arguing that it complied with the requirements, but suggested that, as there had been no material damage to individuals or their right to privacy, the lawsuit should have been dismissed by the lower courts in California. The senior judges clearly disagree.

But what could this lawsuit actually mean?

Firstly, you have the reputational damage. Facebook’s credibility is dented at best and shattered at worst, depending on who you talk to. The emergence of the Netflix documentary ‘The Great Hack’, detailing the Cambridge Analytica scandal, is dragging the brand through the mud once again, while questions are also being asked over whether the management team directly misled the UK Government.

Secondly, you have to look at the financial impact. Facebook is a profit machine, but few will be happy with another fine. Only three weeks ago the FTC issued a $5 billion fine for various privacy inadequacies over the last decade, and this is a lawsuit which could become very expensive, very quickly.

Not only will Facebook have to hire another battalion of lawyers to combat the threat posed by the likes of the American Civil Liberties Union, the Electronic Frontier Foundation, the Center for Democracy & Technology and the Illinois PIRG Education Fund, the pay-out could be significant.

Depending on the severity of the violation, users could be entitled to statutory damages of between $1,000 and $5,000 each: $1,000 per negligent violation and $5,000 per intentional or reckless one. Should Facebook lose this legal foray, the financial damage could run into the hundreds of millions, or even billions.
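To get a sense of the scale, a rough sketch of the potential exposure: the statutory amounts are those set by the Illinois law, but the class size below is purely our illustrative assumption, not a court figure.

```python
# Hypothetical BIPA exposure sketch. Statutory damages are $1,000 per
# negligent violation and $5,000 per intentional or reckless violation;
# the class size is an illustrative assumption, not a court figure.
negligent_damages = 1_000
intentional_damages = 5_000
assumed_class_size = 7_000_000    # hypothetical number of affected Illinois users

low_end = assumed_class_size * negligent_damages
high_end = assumed_class_size * intentional_damages
print(f"Exposure range: ${low_end / 1e9:.0f}bn to ${high_end / 1e9:.0f}bn")
```

Even at the negligent end of the scale, a class of that size puts the exposure comfortably into the billions.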

From a reputational and financial perspective, this lawsuit could be very damaging to Facebook.