US tech fraternity pushes its own version of GDPR

The technology industry might enjoy a light-touch regulatory landscape today, but change is on the horizon, and this looks like an attempt to be the master of its own fate.

In an open letter to senior members of US Congress, 51 CEOs of the technology and business community have asked for a federal law governing data protection and privacy. It appears to be a push to gain consistency across the US, removing the ability of aggressive and politically ambitious Attorneys General and Senators to mount their own local crusades against the technology industry.

Certain aspects of the framework proposed to the politicians are remarkably similar to GDPR, such as the right for consumers to control their own personal data, seek corrections and even demand deletion. Breach notifications could also be introduced, though the coalition of CEOs is calling for the FTC to be the tip of the spear.

Interestingly enough, there are also calls to remove ‘private right of action’, meaning only the US Government could take an offending company to court over violations. In a highly litigious society like the US, this would be a significant win for any US corporation.

And while there are some big names attached to the letter, there are some notable omissions. Few will be surprised Facebook’s CEO Mark Zuckerberg has not signed a letter requesting a more comprehensive approach to data privacy, though Alphabet, Microsoft, Uber, Verizon, T-Mobile US, Intel, Cisco and Oracle are also absent.

“There is now widespread agreement among companies across all sectors of the economy, policymakers and consumer groups about the need for a comprehensive federal consumer data privacy law that provides strong, consistent protections for American consumers,” the letter states.

“A federal consumer privacy law should also ensure that American companies continue to lead a globally competitive market.”

CEOs who have signed the letter include Jeff Bezos of Amazon, Alfred Kelly of Visa, Salesforce’s Keith Block, Steve Mollenkopf of Qualcomm, Randall Stephenson of AT&T and Brian Roberts of Comcast.

Although it might seem unusual for companies to be requesting a more comprehensive approach to regulation, the over-arching ambition seems to be one of consistency. Ultimately, these executives want one, consolidated approach to data protection and privacy, managed at a Federal level, as opposed to a potentially fragmented environment with the States applying their own nuances.

It does appear the technology and business community is attempting to have some sort of control over its own fate. As much as these companies would want a light-touch regulatory environment to continue, this is not an outcome which is on the table. The world is changing, but by consolidating this evolution under a single agency, lobbying can be made much more effective, and cheaper.

The statement has been made through Business Roundtable, a lobby group for larger US corporations, requesting a national consumer privacy law which would pre-empt any equivalent from the states or local government. Definitions and ownership rules should be modernised, and a risk-orientated approach to data management, storage and analysis is also being requested.

Ultimately, this looks like a case of damage control. There seems to be an acceptance of regulation overhaul, however the CEOs are attempting to control exposure. In consolidating the regulations through the FTC, punishments and investigations can theoretically only be brought forward through a limited number of routes, with the companies only having to worry about a single set of rules.

Consistency is a very important word in the business world, especially when it comes to regulation.

What we are currently seeing across the US is aggression towards the technology industry from almost every legal avenue. Investigations have been launched by Federal agencies and State-level Attorneys General, while lawsuits have also been filed by non-profits and law firms representing citizens. It’s a mess.

Looking at the Attorneys General, a couple do seem to be attempting to make a name for themselves in the public domain, perhaps as a first step towards higher political office. For example, it would surprise few if New York Attorney General Letitia James harbours larger political ambitions, and striking a blow against Facebook on behalf of the consumer would certainly earn positive PR points.

Another interesting element is the fragmentation of the regulations governing data protection and privacy. For example, there are more aggressive rules in place in New York and California than in North Carolina and Alaska. In California it becomes more fragmented still: just look at the work the City of San Francisco is undertaking to limit the power of facial recognition and data analytics. Those rules will effectively make it impossible to implement the technology, whereas in the State of Illinois technology companies only have to seek explicit consent from the consumer.

Inconsistency creates confusion and non-compliance. Confusion and non-compliance cost a lot of money through legal fees, restructuring, product customisation and fines.

Finally, from a PR perspective, this is an excellent move. The perception of Big Business at the moment is that it does not care about the privacy rights of citizens. There have been too many scandals and data breaches for anyone to take claims of caring about consumer privacy seriously. By suggesting a more comprehensive and consistent approach to privacy, Big Business can more legitimately claim it is the consumer champion.

A more consistent approach to regulation helps the Government, consumers and business, however this is ultimately an attempt by the US technology and business community to control its own fate: a move to decrease the power and influence of disruptive Attorneys General and to make the regulatory evolution more manageable.

Momentum is gathering pace towards a more comprehensive and contextually relevant privacy regulatory landscape, and it might not be too long before a US version of Europe’s GDPR is introduced.

Is $170 million a big enough fine to stop Google privacy violations?

Another week has passed, and we have another story focusing on privacy violations at Google. This time it has cost the search giant $170 million, but is that anywhere near enough?

The Federal Trade Commission (FTC) has announced yet another fine for Google, this time because its YouTube video platform has been caught breaking privacy rules. An investigation found YouTube had been collecting and processing children’s personal data without seeking permission from the individuals or their parents.

“YouTube touted its popularity with children to prospective corporate clients,” said FTC Chairman Joe Simons. “Yet when it came to complying with COPPA [the Children’s Online Privacy Protection Act], the company refused to acknowledge that portions of its platform were clearly directed to kids. There’s no excuse for YouTube’s violations of the law.”

Once again, a prominent member of Silicon Valley society has been caught flouting privacy laws. The ‘act now, seek permission later’ attitude of the internet giants is on show, and there is little evidence of these incredibly powerful and monstrously influential companies respecting the law or the privacy rights of users.

At some point, authorities are going to have to ask whether these companies will ever respect these rules on their own, or whether they have to be forced. If there is a carrot and stick approach, the stick has to be sharp, and we wonder whether it is anywhere near sharp enough. The question we would like to pose here is whether $170 million is a large enough deterrent to make Google respect the rules.

Privacy violations are nothing new when it comes to the internet. This is partly down to the flagrant attitude of those in positions of responsibility, but also the inability of rule makers to keep pace with the eye-wateringly fast progress Silicon Valley is making.

In this example, rules have been introduced to hold Google accountable; however, we do not believe the fine is anywhere near large enough to ensure action.

Taking Google’s 2018 revenues, the $170 million fine represents 0.124% of the total made across the year. Google made, on average, $370 million per day, or roughly $15 million per hour. It would take Google just over 11 hours and 20 minutes to pay off this fine.

Of course, what is worth taking into account is that these numbers are 12 months old. Looking at the most recent financial results, revenues increased 19% year-on-year for Q2 2019. Over the 91-day period ending June 30, Google made $38.9 billion, or $427 million a day, $17.8 million an hour. It would now take less than 10 hours to pay off the fine.
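For those who want to check the maths, here is a quick back-of-the-envelope sketch in Python. It uses only the per-day revenue figures quoted above, so it is a rough illustration rather than a precise accounting exercise.

FINE = 170e6                   # FTC fine, in USD

daily_2018 = 370e6             # average daily revenue cited for 2018
daily_q2_2019 = 38.9e9 / 91    # Q2 2019 revenue spread across the 91-day quarter

for label, per_day in [("2018 average", daily_2018), ("Q2 2019", daily_q2_2019)]:
    per_hour = per_day / 24
    print(f"{label}: ${per_hour / 1e6:.1f}m per hour, "
          f"fine cleared in {FINE / per_hour:.1f} hours")

Run as written, this prints roughly 11 hours for the 2018 figures and about nine and a half hours for Q2 2019, which is the whole point: the fine amounts to less than half a day’s takings.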

Fines are supposed to act as a deterrent, a call to action to avoid receiving another one. We question whether these numbers are relevant to Google and if the US should consider its own version of Europe’s General Data Protection Regulation (GDPR).

This is a course which would strike fear into the hearts of Silicon Valley’s leadership, as well as pretty much every other company which has any form of digital presence. It was hard work to become GDPR compliant, though it was necessary. Those who break the rules are now potentially exposed to a fine of €20 million or 4% of annual global turnover, whichever is greater. British Airways was recently fined £183 million for GDPR violations, a figure representing 1.5% of total revenues; the fine was reduced because BA co-operated during the investigation and owned up.

More importantly, European companies are now taking privacy, security and data protection very seriously, though the persistent presence of privacy violations in the US suggests a severe overhaul of the rules and punishments is required.

Of course, Google and YouTube have reacted to the news in the way you would imagine. The team has come, cap in hand, to explain the situation.

“We will also stop serving personalized ads on this content entirely, and some features will no longer be available on this type of content, like comments and notifications,” YouTube CEO Susan Wojcicki said in a statement following the fine.

“In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and we’ll also use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.”

The appropriate changes have been made to privacy policies and the way in which ads are served to children, though amazingly, the blog post does not feature the words ‘sorry’, ‘apology’, ‘wrong’ or ‘inappropriate’. There is no admission of fault, simply a statement that suggests they will be compliant with the rules.

We wonder how long it will be before Google is caught breaking privacy rules again. Of course, Google is not alone here; cast the net wider to include everyone in Silicon Valley and we suspect there will be another incident, investigation or fine to report on next week.

Privacy rules are not acting as a deterrent nowadays. These companies have simply grown too large for the fines imposed by agencies to have a material impact. We suspect Google made much more than $170 million through the adverts served to children over this period. If the fine does not exceed the benefit, will the guilty party stop? Of course not; Google is designed to make money, not to serve the world.

Losing face in seconds: a new app takes deepfakes to a new depth

Zao, a new mobile app coming out of China, can replace characters in TV or movie clips with the user’s own facial picture within seconds, raising new privacy and fraud concerns.

Developed by Momo, the company behind Tantan, China’s answer to Tinder, Zao went viral shortly after it was made available on the iOS App Store in China, Japan, India, Korea, and a couple of other Asian markets. It allows users to swap a character’s face in a video clip for their own. The user chooses a character in a clip from the selection on offer, often from iconic Hollywood movies or popular TV programmes, uploads his or her own picture, and lets the app do the swapping in the cloud. In about eight seconds the swap is done, and the user can share the altered clip on social media.

While many are enjoying the quirkiness of the app, others have raised concerns. First there is privacy. Before users can upload their pictures to have the app do the swapping, they have to log in with their phone number and email address, literally losing face and giving away identification to the app. More worryingly, an earlier version of the app’s terms and conditions assumed full rights to the altered videos, and therefore to the users’ images.

Another concern is fraud. Facial recognition is used extensively in China, in benign and not-so-benign circumstances alike. Once an altered video with the user’s face in it is shared on social networks, it is out of the user’s control and open to abuse by belligerent parties. One such possible abuse is payment. Alipay, Alibaba’s online and mobile payment system, has enabled face-based retail check-out: the customer only needs to look at the camera when leaving the store, and the bill is charged to their Alipay account. By adding a bit of fun to the process, check-out by face not only facilitates retail transactions but also continuously enriches Alibaba’s database. (It would not be a complete surprise if this were one reason behind the euphoria towards AI voiced by Jack Ma, Alibaba’s founder.) The payment platform rushed to reassure its users that the system cannot be tricked by images from Zao, without sharing details of how.

Though Zao is not the first AI-powered deepfake application, it is one of the most polished, and therefore most unsettling. In another recent case, involving voice simulation and the controversial scholar Jordan Peterson, an AI-powered voice simulator let users type sentences of up to 280 characters for the tool to read out loud in a distinct, uncannily accurate Jordan Peterson voice. This led Peterson to call for wide-ranging legislation to protect the “sanctity of your voice, and your image.” He called the stealing of other people’s voices a “genuinely criminal act, regardless (perhaps) of intent.”

One can only imagine the impact of seamless image doctoring coupled with flawless voice simulation on all aspects of life, not least on the already eroded trust in news.

The good news is that the Zao developer is responding to users’ concerns. The developer said on its official Weibo account (China’s answer to Twitter) that it understood the concerns about privacy and is thinking about how to fix the issues, but “please give us a little time”. The app’s T&Cs have been updated following the outcry: the app will now only use uploaded data for app improvement purposes, and once the user deletes a video from the app, it will also be deleted in the cloud.


UK’s laissez-faire attitude to privacy and facial recognition tech is worrying

Big Brother Watch has described the implementation of facial recognition tech as an ‘epidemic’ as it emerges the police have been colluding with private industry on trials.

There are of course significant benefits to be realised through the introduction of facial recognition, but the risks are monstrous. It is a sensitive subject, where transparency should be a given, but the general public has little or no understanding of the implications for personal privacy rights.

Thanks to an investigation from the group, it has been uncovered that shopping centres, casinos and even publicly-owned museums have been using the technology. Even more worryingly, in some cases the data has been shared with police forces. Without public consultation, the introduction of such technologies is an insult to the general public and a violation of the trust which has been put in public institutions and private industry.

“There is an epidemic of facial recognition in the UK,” said Director of Big Brother Watch, Silkie Carlo.

“The collusion between police and private companies in building these surveillance nets around popular spaces is deeply disturbing. Facial recognition is the perfect tool of oppression and the widespread use we’ve found indicates we’re facing a privacy emergency.”

What is worth noting is that groups such as Big Brother Watch have a tendency to overstate certain developments, adding an element of theatrics to drum up support and dramatise events. However, in this instance, we completely agree.

When introducing new technology to society, there should be some form of public consultation, especially when the risk of abuse can have such a monumental impact on everyday life. Here, the risk is to the human right to privacy, a right many in the UK overlook because of the assumption it will be honoured by those given responsibility for managing our society.

The general public should be given the right to choose. Increased safety might be a benefit, but there will be a sacrifice to personal privacy. We should have the opportunity to say no.

While the UK Government clip-clops along, sat pleasantly atop its high horse criticising other administrations for human rights violations, this incident blurs the line. Using facial recognition in a private environment without telling customers is suspect enough; sharing that data with police forces is simply wrong.

Is there any material difference between these programmes and initiatives launched by autocratic and totalitarian governments elsewhere in the world? It smells very similar to the dreary picture painted in George Orwell’s “1984”, with a nanny-state assuming the right to decide what is reasonable and what is not.

And for those who appreciate a bit of irony, one of the examples Big Brother Watch has identified of unwarranted surveillance was at Liverpool’s World Museum, during a “China’s First Emperor and the Terracotta Warriors” exhibition.

“The idea of a British museum secretly scanning the faces of children visiting an exhibition on the first emperor of China is chilling,” said Carlo. “There is a dark irony that this authoritarian surveillance tool is rarely seen outside of China.

“Facial recognition surveillance risks making privacy in Britain extinct.”

Aside from this museum, private development companies, including British Land, have been implementing the technology. There is reference to the technology in terms and conditions documents, though it is unlikely many members of the general public have been made aware.

As a result of the suspect implementations, including at King’s Cross in London, the Information Commissioner, Elizabeth Denham, has launched an investigation. The investigation will look into an increasingly common theme: whether the implementation of new technology is taking advantage of the slow-moving process of legislation and the huge number of grey areas currently present in the law.

Moving forward, facial recognition technologies will have a role to play in the digital society. Away from the clearly obvious risk of abuse, there are very material benefits. If a programme can identify fear or stress, for example, emergency services could potentially be alerted to an incident much more quickly. Responses to such incidents today mostly rely on someone calling 999; new technology could help here and save lives.

However, the general public must be informed, and blessings must be given. Transparency is key, and right now, it is missing.

Facebook faces yet another monstrous privacy headache in Illinois

Just as the Cambridge Analytica scandal re-emerged to heighten Facebook frustrations, the social media giant is now facing a class-action lawsuit regarding facial recognition.

It has been a tough couple of weeks for Facebook. With the ink still wet on a $5 billion FTC fine, the UK Government questioning discrepancies in evidence presented to Parliamentary Committees and a Netflix documentary reopening the wounds of the Cambridge Analytica scandal, the last thing needed was another headache. This is exactly what has been handed across to Menlo Park from Illinois.

In a 3-0 ruling, the Court of Appeals for the Ninth Circuit has ruled against Facebook, allowing a class-action lawsuit over the implementation of facial-recognition technologies without consent or the creation of the public policy required by Illinois law.

“Plaintiffs’ complaint alleges that Facebook subjected them to facial-recognition technology without complying with an Illinois statute intended to safeguard their privacy,” the court opinion states.

“Because a violation of the Illinois statute injures an individual’s concrete right to privacy, we reject Facebook’s claim that the plaintiffs have failed to allege a concrete injury-in-fact for purposes of Article III standing. Additionally, we conclude that the district court did not abuse its discretion in certifying the class.”

After introducing facial recognition technologies to the platform in 2010, to offer tag suggestions on uploaded photos and video content, Facebook became the subject of a lawsuit under the Illinois Biometric Information Privacy Act. This law compels companies to create a public policy before implementing facial-recognition technologies and analysing biometric data, as a means to protect the privacy rights of consumers.

Facebook appealed against the lawsuit, suggesting the plaintiffs had not demonstrated material damage and that the lower courts in California were therefore exceeding their authority. However, the appeals court dismissed this argument, and the lawsuit will proceed as planned.

The law in question was enacted in 2008, with the intention of protecting consumer privacy. As biometric data can be considered as unique as a social security number, legislators feared the risk of identity theft, as well as the numerous unknowns about how this technology could be used in the future. It was a protective piece of legislation, and one which looks years ahead of its time when you consider the inability of legislators to create relevant rules today.

As part of this legislation, private companies are compelled to establish a “retention schedule and guidelines for permanently destroying biometric identifiers and biometric information”. The statute also forces companies to obtain permission before applying biometric technologies used to identify individuals or analyse and retain data.

Facebook is not arguing it was compliant with the requirements, but suggested that, as there had been no material damage to individuals or their right to privacy, the lawsuit should have been dismissed by the lower courts in California. The senior judges clearly disagree.

But what could this lawsuit actually mean?

Firstly, you have the reputational damage. Facebook’s credibility is dented at best and shattered at worst, depending on who you talk to of course. The emergence of the Netflix documentary ‘The Great Hack’, detailing the Cambridge Analytica scandal, is dragging the brand through the mud once again, while questions are also being asked about whether the management team directly misled the UK Government.

Secondly, you have to look at the financial impact. Facebook is a profit-machine, but few will be happy with another fine. It was only three weeks ago the FTC issued a $5 billion fine for various privacy inadequacies over the last decade, while this is a lawsuit which could become very expensive, very quickly.

Not only will Facebook have to hire another battalion of lawyers to combat the threat posed by the likes of the American Civil Liberties Union, the Electronic Frontier Foundation, the Center for Democracy & Technology and the Illinois PIRG Education Fund; the pay-out could also be significant.

Depending on the severity of the violation, users could each be entitled to a sum of between $1,000 and $5,000. Should Facebook lose this legal battle, the financial damage could run into the hundreds of millions, or even billions, of dollars.
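To illustrate how quickly those statutory sums scale, here is a rough, purely hypothetical calculation in Python; the actual number of Illinois class members is not cited here, so the class size below is illustrative only.

# Hypothetical illustration of per-user statutory damages; the class size is invented.
PER_VIOLATION = {"negligent": 1_000, "intentional or reckless": 5_000}  # USD per violation
class_size = 5_000_000  # illustrative only -- not the actual class size

for kind, damages in PER_VIOLATION.items():
    print(f"{kind}: ${class_size * damages / 1e9:.0f}bn potential exposure")

With a class of a few million users, the exposure quickly lands in the billions rather than the millions.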

From a reputational and financial perspective, this lawsuit could be very damaging to Facebook.

Microsoft has also been a member of the eavesdropping gang – report

Microsoft contractors have been listening to Skype and Cortana conversations without the full knowledge and consent of the apps’ users, claims a report.

We were almost immediately proved wrong when we said Microsoft, in comparison with Apple, Google, and Amazon, “fortunately has not suffered high profile embarrassment” over its voice assistant Cortana. Motherboard, part of the media outlet Vice, reported that Microsoft contractors, some of them working from home, have been listening to some Skype calls made using the app’s instant translation feature, as well as to users’ interactions with Cortana.

Motherboard acquired audio clips, screenshots and internal documents showing that Microsoft, just like its peers, has been employing humans to continually improve the software algorithms and the quality and accuracy of the translations and responses. Also like the other leading tech companies that run voice assistants, Microsoft is ambiguous in its consumer communication, lax in its policy implementation, and does not give users a way to opt out.

“The fact that I can even share some of this with you shows how lax things are in terms of protecting user data,” the Microsoft contractor turned whistle-blower, who supplied the evidence and decided to remain anonymous, told Motherboard.

“Microsoft collects voice data to provide and improve voice-enabled services like search, voice commands, dictation or translation services,” Microsoft said in a statement sent to Motherboard. “We strive to be transparent about our collection and use of voice data to ensure customers can make informed choices about when and how their voice data is used. Microsoft gets customers’ permission before collecting and using their voice data.”

The “Skype Translator Privacy FAQ” states that “Voice conversations are only recorded when translation features are selected by a user”, then goes on to guide users on how to turn the translation feature off. There is no way for a customer to use the translation service without having the conversation recorded. Nor does the official document say the recorded conversations may be listened to by another human.

Due to the “gig economy” nature of the job, some contractors work from home when undertaking the tasks to correct translations or improve Cortana’s response quality. This is also made obvious by Microsoft contractors’ job listings. However, the content they deal with can be sensitive, from conversations between people in an intimate relationship, to health status and home addresses, as well as query records on Cortana. “While I don’t know exactly what one could do with this information, it seems odd to me that it isn’t being handled in a more controlled environment,” the whistle-blower contractor told Motherboard.

The report does not specify where the eavesdropping it uncovered took place, but the line in the Microsoft statement that “We … require that vendors meet the high privacy standards set out in European law” can’t help but raise suspicion that the practice could run afoul of GDPR, the European Union’s privacy protection regulation.

At the time of writing, Microsoft has not announced a suspension of the practice.

Apple and Google suspend some of their eavesdropping

Two of the world’s leading voice assistant makers have pulled the plug on their respective analysis programmes for Siri and Google Assistant after private information, including confidential conversations, was leaked.

Apple decided to suspend its outsourced programme to “grade” Siri, by which it assesses the voice assistant’s response accuracy, following reports that private conversations were being listened to by its contractors without the users’ explicit consent. The company committed to adding an opt-out option for users in a future update of Siri. It also promised that the programme would not be restarted until it had completed a thorough review.

“We are committed to delivering a great Siri experience while protecting user privacy. While we conduct a thorough review, we are suspending Siri grading globally,” the Cupertino-based iPhone maker told The Guardian. “Additionally, as part of a future software update, users will have the ability to choose to participate in grading.”

This is in response to the leak that was first reported by the British broadsheet, which received a tip-off from whistle-blowers. The paper learned that contractors regularly hear private conversations ranging from dialogues between patients and doctors to communications between drug dealers and buyers, with everything in between. These include cases where Siri has been triggered unintentionally, without the user’s awareness.

The biggest problem with Apple’s analytics programme is that it does not explicitly disclose to consumers that some Siri recordings are shared with contractors in different parts of the world, who listen to the anonymised content as a means to improve Siri’s accuracy. By not being upfront, Apple does not provide users with the option to opt out either.

Shortly before Apple’s decision to call a halt to Siri grading, Google also pulled the plug on its own human analysis of Google Assistant in the European Union, the Associated Press reported. The company promised the office of Johannes Caspar, Hamburg’s commissioner for data protection and Germany’s lead regulator of Google on privacy issues, that the suspension would last at least three months.

The decision was made after Google admitted that one of the language reviewers it partners with, who are supposed to assess Google Assistant’s response accuracy, “has violated our data security policies by leaking confidential Dutch audio data.” Over 1,000 private conversations in Flemish, some of which included private data, were sent to the Belgian news outlet VRT. Though the messages are supposed to be anonymised, staff at VRT were able to identify the users through private information like home addresses.

At that time Google promised “we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”

These are not the first cases of private conversations being leaked via voice assistants. Last year an Alexa-equipped Amazon Echo recorded a conversation between a couple in Portland, Oregon, and sent it to a friend, another case that rang alarm bells over private data security.

It should not surprise those in the tech world that AI-powered natural language processing software still has a long way to go before it can get all the intricacies right. Until then it needs human input to continuously improve its accuracy. The problems bedevilling Google and Apple today, and Amazon in the past (and Microsoft’s Cortana, which fortunately has not suffered a high-profile embarrassment recently), come down to the lack of stringent oversight of the role humans play, the lack of clear communication to consumers that their interactions with voice assistants may be used for data analysis purposes, and the failure to give consumers the choice to opt out.

There is also the controversy of data sovereignty, as well as the question of whether private data should be allowed to be stored in the cloud or should be kept on the device. Apple’s leak case is not geographically specified, but Google’s case is a clear violation of GDPR. According to the AP report, Germany has already started proceedings against Google.

DCMS calls out Facebook for stretching the truth

Facebook might have thought the worst of the Cambridge Analytica affair was behind it, but the UK Government is questioning whether it was entirely truthful with evidence presented to a parliamentary committee.

In a letter written to Sir Nick Clegg, Facebook’s VP of Global Affairs and Communications, Facebook is being asked to clarify discrepancies between testimonies it gave to the UK’s investigation into the scandal and evidence which was presented to the Securities and Exchange Commission’s own investigation. The letter very politely and appropriately asks for clarification on statements which appear to contradict one another.

“Further to our letter dated 17 July 2019, we would also like to raise several concerns considering recent charges made against Facebook by the US Securities and Exchange Commission on Wednesday 24 July,” the letter reads.

“The SEC Complaint seemingly directly contradicts written and oral evidence we received from Facebook representatives over the course of our enquiry into ‘Disinformation and fake news’ on several points raised below, and we request clarity on these issues.”

The letter itself was penned by Damian Collins, the Conservative MP for Folkestone and Hythe and Chair of the Digital, Culture, Media and Sport Committee. The evidence in question refers to testimonies given to the Committee by CTO Mike Schroepfer, Head of UK Public Policy Rebecca Stimson and VP of Privacy Solutions Lord Richard Allan, during DCMS investigations in 2018.

As Facebook is repairing its reputation across the world, attempting to regain trust and credibility in the eyes of the consumer, the last thing it needs is to be accused of lying to the UK Government.

The letter itself asks for clarity on three areas. Firstly, when Facebook executives were made aware of the abuse from Cambridge Analytica. Secondly, how the misuse of data was handled internally. And finally, communication between senior executives.

On the first point, Schroepfer and Lord Allan insisted the team was only made aware of the abuses through the Guardian article which exposed Cambridge Analytica. However, evidence presented to the SEC suggests internal concerns and complaints were raised in 2015, months before the article exposed the abuses.

On the continued abuse, Facebook executives suggested Cambridge Analytica had confirmed the deletion of the data in 2016, though it wasn’t until 2018 that executives were made aware the data was still being utilised. Evidence presented to the SEC contradicts these testimonies given to Collins and the other members of the DCMS Committee, as employees had ongoing concerns throughout the intervening years thanks to Cambridge Analytica marketing materials.

Finally, evidence submitted to the Committee by Lord Allan and Stimson suggests CEO Mark Zuckerberg was not made aware of the continued abuse until 2018. However, Schroepfer has stated Zuckerberg was the primary decision maker for any privacy issues. If both statements are to be believed, there has been a systematic failure in dealing with privacy issues and policies. Collins questions why Zuckerberg and senior management were not made aware of these issues until the reports emerged in the press.

Although many assumed Facebook executives were not being entirely truthful when giving evidence, perhaps choosing to hold back certain snippets of information, it now appears the social media giant has been caught trying to be too clever for its own good.

This is not a good headline for Facebook. It has shown little respect to the UK Government during the Cambridge Analytica saga, and these revelations just rub salt into the wounds. At a time when it is attempting to justify its existence and prove it can be a trustworthy guardian of users’ personal information, this letter shakes the foundations of credibility once again.

FTC hits Facebook with $5bn privacy fine

The Federal Trade Commission (FTC) has hit Facebook with a fine of $5 billion relating to numerous privacy violations over the last few years.

The fine itself, which is the largest ever imposed on any company for violating consumers’ privacy, will be accompanied by broad changes to Facebook’s consumer privacy practices. The decision will also force the company to overhaul the way decisions about its privacy policies are made.

“Despite repeated promises to its billions of users worldwide that they could control how their personal information is shared, Facebook undermined consumers’ choices,” said FTC Chairman Joe Simons.

“The magnitude of the $5 billion penalty and sweeping conduct relief are unprecedented in the history of the FTC. The relief is designed not only to punish future violations but, more importantly, to change Facebook’s entire privacy culture to decrease the likelihood of continued violations.”

The accusations directed towards Facebook will sound very familiar. Whether it is using deceptive disclosures or secretive settings to disguise features and undermine privacy principles, violating previous privacy commitments made in a 2012 FTC Order, or maintaining dubious data-sharing relationships with third parties, Facebook is facing a massive disruption to the way it manages data and approaches user privacy.

Looking at the changes Facebook will have to make, CEO Mark Zuckerberg is no longer allowed to be the single decision maker for privacy policies, a position which was ridiculous in the first place. Facebook will also be forced to appoint an ‘independent privacy committee’ to ensure a position which is consistent with society’s expectations.

Privacy policies will filter down through the organization, theoretically, through the appointment of Compliance Officers. Another condition set upon Facebook is granting more powers to independent third-party assessors, who will conduct privacy assessments every other year.

There are numerous other orders placed on Facebook as part of the negotiation between the FTC and the social media giant, including:

  • Facebook must exercise greater oversight over third-party apps
  • Phone numbers obtained to enable a security feature cannot be used in advertising mechanisms
  • Facebook must provide clear and conspicuous notice of its use of facial recognition technology
  • Facebook must encrypt user passwords and regularly audit security systems

While many of these demands from the FTC might be considered standard business practice in today’s privacy-conscious world, they are likely to cause disruption for Facebook internally.

“After months of negotiations, we’ve reached an agreement with the Federal Trade Commission that provides a comprehensive new framework for protecting people’s privacy and the information they give us,” said Facebook General Counsel Colin Stretch.

“The agreement will require a fundamental shift in the way we approach our work and it will place additional responsibility on people building our products at every level of the company. It will mark a sharper turn toward privacy, on a different scale than anything we’ve done in the past.”

Although it is an incredibly steep fine for Facebook to stomach, we suspect it won’t bother the bean counters that much. Facebook is a money-making machine, and this will soon enough be nothing more than a minor blip. The disruption to its finely-tuned advertising machine will be more of an issue, but it could work in Facebook’s favour.

Facebook is being forced to be more transparent and treat privacy principles with respect. Left to its own devices, the social media giant probably wouldn’t have taken such drastic measures to disrupt itself. However, being forced into these changes could earn Facebook trust and credibility points in the eyes of the consumer.

If Facebook owns this punishment, while shouting and screaming about the changes it is making to become compliant with the order, it could swing public favour back onto its side. Facebook needs to present itself as a privacy conscious organization and this is a perfect opportunity to do so.

Researchers point to 1,300 apps which circumvent Android’s opt-in

Research from a coalition of professors has suggested Android location permissions mean little, as more than 1,300 apps have developed ways and means around Google’s protections.

A team of researchers from the International Computer Science Institute (ICSI) has been working to identify shortcomings in the data privacy protections offered to users through Android permissions, and the outcome might worry a few. Through the use of side and covert channels, 1,300 popular applications around the world extracted sensitive information about the user, including location, irrespective of the permissions sought by or granted to the app.

The team informed Google of the oversight, which will be addressed in the upcoming Android Q release, and received a ‘bug bounty’ for its efforts.

“In the US, privacy practices are governed by the ‘notice and consent’ framework: companies can give notice to consumers about their privacy practices (often in the form of a privacy policy), and consumers can consent to those practices by using the company’s services,” the research paper states.

This framework is a relatively simple one to understand. Firstly, app providers give ‘notice’ to inform the user and ensure transparency, while ‘consent’ ensures both parties have entered into the digital contract with open eyes.

“That apps can and do circumvent the notice and consent framework is further evidence of the framework’s failure. In practical terms, though, these app behaviours may directly lead to privacy violations because they are likely to defy consumers’ expectations.”

What is worth noting is that, while this sounds incredibly nefarious, it is nowhere near the majority. Most applications and app providers act in accordance with the rules and with consumer expectations, assuming users have read the detailed terms and conditions. This is a small percentage of the apps installed en masse, but it is certainly an oversight worth drawing attention to.

Looking at the depth and breadth of the study, it is pretty comprehensive. Using a Google Play Store scraper, the team downloaded the most popular apps for each category; in total, more than 88,000 apps were downloaded due to the long tail of popularity. To cover all bases, however, the scraper also kept an eye on app updates, meaning 252,864 different versions of 88,113 Android apps were analysed during the study.

The behaviour of each of these apps was measured at the kernel, Android-framework and network-traffic levels, with scale achieved using a tool called the Android Automator Monkey. All of the OS execution logs and network traffic were stored in a database for offline analysis.
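For readers unfamiliar with this kind of automation, the sketch below shows roughly how an installed app can be exercised with Android’s standard ‘monkey’ tool via adb and a little Python. It is purely illustrative: the researchers’ exact tooling, flags and instrumentation are not detailed here, and the package name is a placeholder.

import subprocess

PACKAGE = "com.example.someapp"  # hypothetical app under test


def exercise_app(package: str, events: int = 1000) -> None:
    """Fire a stream of pseudo-random UI events at a single installed app."""
    subprocess.run(
        ["adb", "shell", "monkey",
         "-p", package,         # restrict events to this package
         "--throttle", "100",   # pause 100 ms between events
         "-v", str(events)],    # verbose output, followed by the event count
        check=True,             # raise if adb or monkey reports a failure
    )


if __name__ == "__main__":
    exercise_app(PACKAGE)

Repeat something like that across tens of thousands of packages, capture the resulting system logs and network traffic, and you have the bones of the pipeline described above.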

Now, onto how these app developers can circumvent the protections put in place by Google. With ‘side channels’, the developer has discovered a path to a resource which lies outside the security perimeter, perhaps due to a mistake during the design stages or a flaw in applying the design. ‘Covert channels’ are more nefarious.

“A covert channel is a more deliberate and intentional effort between two cooperating entities so that one with access to some data provides it to the other entity without access to the data in violation of the security mechanism,” the paper states. “As an example, someone could execute an algorithm that alternates between high and low CPU load to pass a binary message to another party observing the CPU load.”
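To make the quoted example concrete, below is a toy sketch of such a CPU-load covert channel in Python. It is purely illustrative, not code from the ICSI study, and it assumes the third-party psutil package is installed and an otherwise largely idle machine.

import multiprocessing
import time

import psutil

BIT_PERIOD = 1.0  # seconds spent signalling each bit


def sender(bits):
    """Encode each bit as one period of high (1) or low (0) CPU load."""
    for bit in bits:
        deadline = time.time() + BIT_PERIOD
        if bit:
            while time.time() < deadline:
                pass                # busy-loop pins one core: signals a 1
        else:
            time.sleep(BIT_PERIOD)  # stay idle: signals a 0


def receiver(num_bits, threshold=60.0):
    """Decode bits by sampling the utilisation of the busiest core each period."""
    decoded = []
    for _ in range(num_bits):
        busiest = max(psutil.cpu_percent(interval=BIT_PERIOD, percpu=True))
        decoded.append(1 if busiest > threshold else 0)
    return decoded


if __name__ == "__main__":
    message = [1, 0, 1, 1, 0, 0, 1, 0]
    tx = multiprocessing.Process(target=sender, args=(message,))
    tx.start()
    print("received:", receiver(len(message)))
    tx.join()

Neither process touches the other’s memory or files, yet a short binary message still passes between them, which is precisely why such channels sidestep the permission model.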

Ultimately, this is further evidence that the light-touch regulatory environment which has governed the technology industry over the last few years can no longer be allowed to persist. The technology industry has protested and quietly lobbied against any material regulatory or legislative changes, though the bad apples are spoiling the harvest for everyone else.

As it stands, under Section 5 of the Federal Trade Commission (FTC) Act, such activities would be deemed non-compliant, and we suspect the European Commission would have something to say with its GDPR stick as well. There are protections in place, though it seems there are elements of the technology industry who consider these more guidelines than rules.

Wholesale changes should be expected in the regulatory environment, and it seems there is little which can be done to prevent them. Politicians might be chasing PR points as various elections loom on the horizon, but the evolution of rules in this segment should be considered a necessity nowadays.

There have simply been too many scandals, too much abuse of grey areas and too many examples of oversight (or negligence, whichever you prefer) to continue on this path. Of course, there are negative consequences to increased regulation, but the right to privacy is too important a principle for rule-makers to ignore; the technology industry has consistently shown it does not respect these values and will therefore have to be forced to do so.

This will be an incredibly difficult equation to balance, however. The technology industry leads the growth statistics for many economies around the world, but changes are needed to protect consumer rights.