Australia sues Google for misleading users over location data

The Australian Competition and Consumer Commission (ACCC) has taken Google to court over allegations that it misled consumers about the collection of their location data.

The ACCC alleges that, since at least 2017, Google broke the law by making on-screen representations to Android users that misled them about the location data Google collected or used when certain Google Account settings were enabled or disabled. In short, the ACCC claims Google gave users insufficient information to ensure their location data wasn’t collected if they didn’t want it to be.

“We are taking court action against Google because we allege that as a result of these on-screen representations, Google has collected, kept and used highly sensitive and valuable personal information about consumers’ location without them making an informed choice,” said ACCC Chair Rod Sims.

The problem is that Android has multiple settings that need to be adjusted if you don’t want your location data collected (in this case ‘Location History’ and ‘Web & App Activity’), and the ACCC alleges that Google didn’t flag up all of them. That will have resulted in some consumers thinking their location data wasn’t being collected when it still was. At the very least, it seems Google was insufficiently clear in communicating with Android users about this.

Underlying much of the current wave of litigation against internet giants is the desire of regulators and governments to retrospectively address the personal data land grab that characterised the first decade or so of the modern mobile era. Free services such as Android and Facebook have always sought payment in kind through the collection of personal data, but have usually been very opaque about how they go about it. Regulators are now trying to shut the stable door after the horse has bolted.

Microsoft might be toying with European data protection compliance

The European Data Protection Supervisor has raised ‘serious concerns’ over whether Microsoft is compliant with data protection regulations.

The contracts in question are between the software giant and various European Union institutions that use its products and services. The central issue is whether the contractual terms comply with data protection laws intended to protect individual rights across the region from foreign bodies which do not hold data protection to the same standards.

“Though the investigation is still ongoing, preliminary results reveal serious concerns over the compliance of the relevant contractual terms with data protection rules and the role of Microsoft as a processor for EU institutions using its products and services,” a statement reads.

“Similar risk assessments carried out by the Dutch Ministry of Justice and Security confirmed that public authorities in the Member States face similar issues.”

The preliminary findings from the European Data Protection Supervisor follow investigations in the Netherlands, as well as changes to Microsoft’s privacy policies for its VoIP product Skype and AI assistant Cortana. Those changes were seemingly a knee-jerk reaction to reports that contractors were listening to audio clips to improve translations and the accuracy of inferences.

It is worth noting that Microsoft is not the only company which has been bending the definition of privacy with regard to contractors and audio clips; Amazon and Google have also been dragged into the same hazy debate over privacy and consent.

The issue at the heart of this investigation seems to be one of arm’s length. While government authorities and agencies might hand over responsibility for data protection and privacy compliance to the cloud companies, the European Data Protection Supervisor is suggesting those government parties should apply more scrutiny and oversight of their own.

Once again, the definition and extent of privacy principles are causing problems. Europe takes a much more stringent stance on the depth of privacy, as well as the rights afforded to individuals, than other regions around the world. Ensuring the rights of European citizens are extended elsewhere was one of the primary objectives of the GDPR, though it seems there are still teething problems.

“When using the products and services of IT service providers, EU institutions outsource the processing of large amounts of personal data,” the statement continues.

“Nevertheless, they remain accountable for any processing activities carried out on their behalf. They must assess the risks and have appropriate contractual and technical safeguards in place to mitigate those risks. The same applies to all controllers operating within the EEA.”

One development which could result in additional scrutiny is The Hague Forum, an initiative to create standardised contracts for European member states which meet the baseline data protection and privacy conditions set out. The European Data Protection Supervisor has encouraged all European institutions to join the Forum.

Although GDPR was seen as a headache for many companies around the world, such statements from the European Data Protection Supervisor prove this is not an area which can simply be addressed once and then forgotten. GDPR was supposed to set a baseline, and there will be more regulation to build further protections. Perhaps the fact that Microsoft is seemingly non-compliant with current regulations justifies the introduction of more rules and red tape.

Facebook starts taking data guardian role seriously

Facebook needs to get back in the good books of both regulators and the general public sharpish, and it seems it is taking a machete to the developer ecosystem to do so.

As part of its agreement with the Federal Trade Commission, Facebook has promised to create a more comprehensive oversight model for the development and implementation of apps on its platform, and it does seem to be taking its responsibility seriously this time around. Whether this prevents a repeat of the Cambridge Analytica scandal, which kicked off the privacy debate, remains to be seen, though it is making the right noises.

“Our App Developer Investigation is by no means finished,” said Ime Archibong, VP of Product Partnerships.

“But there is meaningful progress to report so far. To date, this investigation has addressed millions of apps. Of those, tens of thousands have been suspended for a variety of reasons while we continue to investigate.”

Although it is very difficult to establish how many app developers and applications there actually are on the Facebook platform at any given point, Archibong stated that 400 developers have been deemed to be breaking the rules. These 400 are responsible for the ‘tens of thousands’ of apps which have been suspended.

While this is a promising start from the social media giant, it will have to do a lot more. We struggle to believe the number of suspect app developers is as low as 400. There might be 400 in London alone; worldwide, the number is going to be monstrously larger.

This is where Facebook will struggle to be the perfect guardian of our digital lives. With an unthinkable number of developers and apps, it will never be able to protect us from every bad actor. Whether best effort is good enough for the critics remains to be seen.

Dating back to March 2018, this is a saga which Facebook cannot shake off. The general public, politicians and regulators were all enraged by what can only be described as gross negligence from the social media giant. Rules were in place, but they were not nearly comprehensive enough, and bad actors were rarely put to the sword and held accountable.

This is what Facebook has to prove to its critics: that it is a company which is responsible and can act as an effective guardian of users’ personal information. It is currently being judged in the court of public opinion, a very difficult place to make any progress when the masses are baying for blood.

Although the Cambridge Analytica scandal is only part of the problem, it was the incident which turned the tide against the technology industry. Along with other privacy scandals and debatable business practices, Silicon Valley is being placed under the microscope, and it is not working out well. The best-case scenario for the likes of Facebook and Google is stricter regulation; the worst outcome could see acquisitions reversed in the pursuit of increased competition and diluted influence at these companies.

This Facebook investigation is looking to identify the developers who are most likely to break the rules, and stricter guidelines are being put in place. Archibong suggests many of the quiz apps which plague the platform will be banned moving forward, as many will be judged to collect too much information relative to the value they offer. In future, these developers shouldn’t be able to get away with it.

This in itself is the problem; Facebook was asleep at the wheel. It created a valuable product and then started to count the cash. It didn’t evolve the rules as the platform grew into an entirely different proposition and it didn’t keep an eye on whether app developers were breaking the basic rules which it had in place anyway.

If Facebook’s quest continues on its current trajectory, the developer ecosystem might have to work a bit harder to access personal information. Apps with very limited functionality and value will not be granted access to the same treasure troves, while developers will also have to prove that collecting personal information will improve the experience for the user.

Another interesting point raised in the commitment is an annual review. Archibong suggests every app will be assessed on a yearly basis, and those that do not respond effectively to the audits will be temporarily suspended or banned.

It remains to be seen whether Facebook is doing enough to keep critics happy, though there is no such thing as being heavy-handed here. Facebook will have to take the strictest approach, overcompensating even, to ensure it regains the trust and credibility it threw away through inaction.

US tech fraternity pushes its own version of GDPR

The technology industry might enjoy light-touch regulatory landscapes, but change is on the horizon, and the industry appears to be attempting to become master of its own fate.

In an open letter to senior members of US Congress, 51 CEOs from the technology and business community have asked for a federal law governing data protection and privacy. It appears to be a push for consistency across the US, removing the ability of aggressive and politically ambitious attorneys general and senators to mount their own local crusades against the technology industry.

Certain aspects of the framework proposed to the politicians are remarkably similar to GDPR, such as the right for consumers to control their own personal data, seek corrections and even demand deletion. Breach notifications could also be introduced, though the coalition of CEOs is calling for the FTC to be the tip of the spear.

Interestingly enough, there are also calls to remove ‘private right of action’, meaning only the US Government could take an offending company to court over violations. In a highly litigious society like the US, this would be a significant win for any US corporation.

And while there are some big names attached to the letter, there are some notable omissions. Few will be surprised Facebook’s CEO Mark Zuckerberg has not signed a letter requesting a more comprehensive approach to data privacy, though Alphabet, Microsoft, Uber, Verizon, T-Mobile US, Intel, Cisco and Oracle are also absent.

“There is now widespread agreement among companies across all sectors of the economy, policymakers and consumer groups about the need for a comprehensive federal consumer data privacy law that provides strong, consistent protections for American consumers,” the letter states.

“A federal consumer privacy law should also ensure that American companies continue to lead a globally competitive market.”

CEOs who have signed the letter include Jeff Bezos of Amazon, Alfred Kelly of Visa, Salesforce’s Keith Block, Steve Mollenkopf of Qualcomm, Randall Stephenson of AT&T and Brian Roberts of Comcast.

Although it might seem unusual for companies to be requesting a more comprehensive approach to regulation, the overarching ambition seems to be one of consistency. Ultimately, these executives want one consolidated approach to data protection and privacy, managed at a federal level, as opposed to a potentially fragmented environment with the states applying their own nuances.

It does appear the technology and business community is attempting to take some control over its own fate. As much as these companies would like a light-touch regulatory environment to continue, that is not an outcome which is on the table. The world is changing, but by consolidating this evolution into a single agency, lobbying can be much more effective, and cheaper.

The letter was made through Business Roundtable, a lobby group for larger US corporations, and requests a national consumer privacy law which would pre-empt any equivalent from the states or local government. It also calls for definitions and ownership rules to be modernised, and for a risk-orientated approach to data management, storage and analysis.

Ultimately, this looks like a case of damage control. There seems to be an acceptance that regulation will be overhauled; however, the CEOs are attempting to control their exposure. By consolidating the regulations through the FTC, punishments and investigations can theoretically only be brought through a limited number of routes, with the companies only having to worry about a single set of rules.

Consistency is a very important word in the business world, especially when it comes to regulation.

What we are currently seeing across the US is aggression towards the technology industry from almost every legal avenue. Investigations have been launched by federal agencies and state attorneys general, while lawsuits have also been filed by non-profits and law firms representing citizens. It’s a mess.

Looking at the attorneys general, a couple do seem to be attempting to make a name for themselves in the public eye, perhaps as a first step towards higher political office. For example, it would surprise few if New York Attorney General Letitia James harbours larger political ambitions, and striking a blow against Facebook on behalf of the consumer would certainly gain positive PR points.

Another interesting element is the fragmentation of the regulations governing data protection and privacy. For example, there are more aggressive rules in place in New York and California than in North Carolina and Alaska. In California it becomes even more fragmented; just look at the work the City of San Francisco is undertaking to limit the power of facial recognition and data analytics. Those rules will effectively make it impossible to implement the technology there, whereas in the State of Illinois technology companies only have to seek explicit consent from the consumer.

Inconsistency creates confusion and non-compliance. Confusion and non-compliance cost a lot of money through legal fees, restructuring, product customisation and fines.

Finally, from a PR perspective, this is an excellent move. The perception of Big Business at the moment is that it does not care about the privacy rights of citizens. There have been too many scandals and data breaches for anyone to take claims of caring about consumer privacy seriously. By suggesting a more comprehensive and consistent approach to privacy, Big Business can more legitimately claim to be the consumer champion.

A more consistent approach to regulation helps the government, consumers and business, but this is a move by the US technology and business community to control its own fate: a move to decrease the power and influence of disruptive attorneys general and make the regulatory evolution more manageable.

Momentum is gathering pace towards a more comprehensive and contextually relevant privacy regulatory landscape, and it might not be too long before a US version of Europe’s GDPR is introduced.

Silicon Valley’s ‘ask for forgiveness, not permission’ attitude is wearing thin

Silicon Valley has often pushed the boundaries in pursuit of progress, but it deserves everything it gets if it continues to try the patience of consumers and regulators over privacy.

‘It is easier to ask for forgiveness than to beg for permission’ is a common, if largely unattributable, phrase which seems to apply very well to the ongoing conduct of Silicon Valley. It is certainly easier to act and face the consequences later, but that does not make it right, and it should not be allowed. This is the approach the internet giants are taking on a weekly basis, and someone will have to find the stomach and muscle to stop this abuse of power, influence and trust.

The most recent chapter in this on-going tale of deceit and betrayal concerns the voice assistants which are becoming increasingly popular with consumers around the world.

Apple is the latest company to test the will of the general public, having now officially ended an internal process known as ‘grading’. In short, humans listened to Siri interactions with customers, transcribing the interactions in certain cases, to help improve the accuracy of the digital assistant.

“We know that customers have been concerned by recent reports of people listening to audio Siri recordings as part of our Siri quality evaluation process — which we call grading,” Apple said in a blog entry. “We heard their concerns, immediately suspended human grading of Siri requests and began a thorough review of our practices and policies. We’ve decided to make some changes to Siri as a result.”

Of course, it is perfectly reasonable for Apple to want to improve the performance of Siri, though it must ask for permission. This is the vital step in the process which Apple decided to leave out.

The new process will seek consent from users through an ‘opt-in’ system, making it compliant, while the default position for all Siri interactions will be to not store information. For those consumers who do opt in to help Apple train Siri, the audio will only be transcribed and reviewed by permanent Apple employees.

This process should have been in place before the ‘grading’ system was implemented. It is inconceivable that Apple did not realise this would break privacy regulations or breach the trust it had been offered by the customer. It decided not to tell consumers or the authorities the practice was in place. It muddied the waters to hide the practice. It lied to the user when it said it respects privacy principles and rights.

Apple acted irresponsibly, unethically and underhandedly. And there is almost no plausible explanation that it did so without knowledge and understanding of the potential impact of these actions. If it did not understand how or why this practice violated privacy principles or regulations, there must be an epidemic of incompetence spreading through the ranks at Cupertino.

It is worth noting that Apple is not alone; Google and Facebook are just as bad at misleading or lying to the user, breaking the trust which has been offered to these undeserving companies.

Google is currently under investigation for the same abuse of trust and privacy principles, this time for the Google Assistant.

“We have made it clear to Google’s representatives that essential requirements for the operation of the Google Assistant are currently not fulfilled,” said Johannes Caspar, Hamburg Commissioner for Data Protection and Freedom of Information. “This not only applies to the practice of transcribing, but to the overall processing of audio data generated by the operation of the language assistance system.”

The investigation from the Hamburg data protection authority has pressured Google into changing the way it trains its digital assistant. Earlier this month, Belgian news outlet VRT NWS revealed 0.2% of conversations with Google Assistant were being listened to by external contractors. At least one audio clip leaked to the news outlet included a couple’s address and personal information about their family.

Google has now said it has stopped the practice in the EU, but not necessarily elsewhere, and the Hamburg DPA has said it will have to seek permission from users before beginning anything remotely similar.

The same regulator has also dragged Facebook into the drama.

“In a special way, this also applies to Facebook Inc., where, as part of Facebook Messenger, a planned manual evaluation to improve the transcription function offered there covered not only human-to-machine communication, but also human-to-human communication,” said Caspar. “This is currently the subject of a separate investigation.”

Two weeks ago, reports emerged that Facebook had hired external contractors to transcribe audio from calls made across the Messenger platform. Once again, users were not informed and consent was not obtained, but what makes this incident even worse is that there does not appear to be any logical reason for Facebook to need this data.

The only reason we can see for Facebook wanting this data is to feed the insight into its big-data, hyper-targeted advertising machine. That, however, would be a massive no-no and a significant (and illegal) breach of trust.

All of these examples focus on the transcription of audio data, though there are many other instances of privacy violations, and they demonstrate the ‘easier to ask for forgiveness than permission’ attitude which has engulfed Silicon Valley.

We cannot believe these companies did not understand that these actions and practices were a breach of trust and potentially broke privacy rules. These companies are run by incredibly smart and competent people. Recruitment drives are intense, offices and benefits are luxurious, and salaries are sky-high for a very good reason; Silicon Valley wants to attract the best and brightest talent around.

And it works. The likes of Google, Facebook and Apple have the most innovative engineers, data scientists who can spot the wood for the trees, the savviest businesspeople, accountants who are hide-and-seek champions and the slipperiest lawyers. They consider and contemplate all the potential gains and consequences of any initiative. We cannot believe there is any conceivable explanation as to why these incredibly intelligent people did not recognise these initiatives were misleading, opaque or non-compliant.

The days of appearing before a committee, cap in hand, begging for forgiveness with a promise it will never happen again cannot be allowed to continue. The judges, politicians and consumers who believe these privacy violations are done by accident are either incredibly naïve, absurdly short-sighted, woefully ill-informed or, quite frankly, moronic.

Silicon Valley must be forced to act responsibly and ethically, because it clearly won’t do so on its own.

Google prefers cookies to fingerprints

Internet giant Google has announced some measures designed to better protect the privacy of users of its Chrome browser.

Under the heading of ‘Privacy Sandbox’, Google wants to develop a set of open privacy standards. At the core of this initiative is the use of cookies, which are small files stored by the browser that track people’s online activity and, so the theory goes, allow people to be served more relevant advertising. Google concedes that some use of cookies doesn’t meet acceptable data privacy standards, but argues that blocking them outright isn’t the answer.

A major reason for this is that blocking cookies encourages the use of another tracking technique called fingerprinting. This aggregates a bunch of other user preferences and behaviours to generate a unique identifier that performs a similar function to cookies. The problem with fingerprints, however, is that users have no control over them, and hence they’re bad for data privacy.
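To make the contrast with cookies concrete, here is a minimal sketch of the principle behind fingerprinting; the attribute names and values are hypothetical examples of the sort of signals a real fingerprinting script might read, not any particular vendor’s implementation.

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Derive a stable identifier by hashing a set of browser/device traits."""
    # Sort the keys so the same traits always produce the same hash
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Nothing is stored on the user's device: as long as these traits stay
# the same, the identifier can simply be recomputed on every visit,
# which is why the user has no meaningful way to opt out.
visitor = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen_resolution": "2560x1440",
    "timezone": "Europe/London",
    "language": "en-GB",
    "installed_fonts": "Arial,Calibri,Helvetica",
}
print(fingerprint(visitor))  # same traits in, same identifier out
```

Unlike a cookie, there is nothing here for the user to inspect or delete, which is precisely the control problem described above.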

Since the digital ad market now expects a considerable degree of targeting, but fingerprinting is considered an unacceptable response to the blocking of cookies, Google wants to come up with a better solution that can be implemented across all browsers, hence this initiative. The Privacy Sandbox is a secure environment designed to enable safe experimentation with other personalization technologies.

“We are following the web standards process and seeking industry feedback on our initial ideas for the Privacy Sandbox,” blogged Justin Schuh, Director of Chrome Engineering at Google. “While Chrome can take action quickly in some areas (for instance, restrictions on fingerprinting) developing web standards is a complex process, and we know from experience that ecosystem changes of this scope take time. They require significant thought, debate, and input from many stakeholders, and generally take multiple years.”

While this is all laudable it should be noted that Google has possibly the greatest vested interest in optimising targeted advertising online. While that makes it perfectly understandable that it would want to take the initiative in standardizing the way it’s done, other big advertisers and browser providers may have reservations about surrendering much control of the process to Google.

Europe set to join the facial recognition debate

With more authorities demonstrating they cannot be trusted to act responsibly or transparently, the European Commission is reportedly on the verge of putting the reins on facial recognition.

According to reports in The Financial Times, the European Commission is considering imposing new rules which would extend consumer rights to include facial recognition technologies. The move is part of a greater upheaval to address the ethical and responsible use of artificial intelligence in today’s digital society.

Across the world, police forces and intelligence agencies are deploying technologies which pose a significant risk of abuse, without public consultation or processes to ensure accountability and justification. There are of course certain nations which do not care about the privacy rights of citizens, but when you see the technology being implemented for surveillance purposes in the likes of the US, UK and Sweden, states where such rights are supposedly sacred, the line starts to blur.

The reasoning behind the implementation of facial recognition in surveillance networks is irrelevant; without public consultation and transparency, these police forces, agencies, public sector authorities and private companies are completely disregarding citizens’ right to privacy.

These citizens might well support such initiatives, opting for greater security or consumer benefits over the right to privacy, but they have the right to be asked.

It is worth noting that this technology can be a driver of positive change in the world when implemented and managed correctly. Facial scanners are speeding up the immigration process in airports, while Telia is trialling a payment system using facial recognition in Finland. When deployed with consideration and the right processes, there are many benefits to be realised.

The European Commission has not confirmed or denied the reports to Telecoms.com, though it did reaffirm its on-going position on artificial intelligence during a press conference yesterday.

“In June, the high-level expert group on artificial intelligence, which was appointed by the Commission, presented the first policy recommendations and ethics guidelines on AI,” spokesperson Natasha Bertaud said during the afternoon briefing. “These are currently being tested and going forward the Commission will decide on any future steps in-light of this process which remains on-going.”

The Commission does not comment on leaked documents and memos, though reading between the lines, it is on the agenda. One of the points the 52-person expert group will address over the coming months is building trust in artificial intelligence, while one of the seven principles presented for consultation concerns privacy.

On the privacy side, parties implementing these technologies must ensure data ‘will not be used to unlawfully or unfairly discriminate’, as well as setting systems in place to dictate who can access the data. We suspect that in the rush to trial and deploy technology such as facial recognition, few systems and processes to drive accountability and justification have been put in place.

Although these points do not necessarily cover the citizen’s right to decide, tracking and profiling are areas where the group has recommended the European Commission consider adding more regulation to protect against abuses and the irresponsible deployment or management of the technology.

Once again, the grey areas are being exploited.

As there are only so many bodies in the European Commission or working for national regulators, and technology is advancing so quickly, there is often a void in the rules governing newly emerging segments. Artificial intelligence, surveillance and facial recognition certainly fall into this chasm, creating a digital wild-west where those who do not understand the law of unintended consequences play around with new toys.

In the UK, it was revealed that several private property owners and museums were using the technology for surveillance without telling consumers. Even more worryingly, some of this data has been shared with police forces. Information Commissioner Elizabeth Denham has already stated her office will be looking into the deployments and will attempt to rectify the situation.

Prior to this revelation, a report from the Human Rights, Big Data & Technology Project attacked a trial by the London Metropolitan Police, suggesting it could be found illegal should it be challenged in court. South Wales Police has also found itself in hot water after its own trials were reported to have achieved only an 8% success rate.

Over in Sweden, the data protection regulator used powers granted by GDPR to fine a school which had been using facial recognition to monitor pupils’ attendance. The school claimed it had received consent from the students, but as students are in a position of dependence, this was not deemed valid. The school was also found to have substandard processes for handling the data.

Finally, in the US, Facebook is going to find itself in court once again, this time over its implementation of facial recognition software in 2010. A class-action lawsuit has been brought against the social media giant, alleging the use of the technology was non-compliant with the Illinois Biometric Information Privacy Act.

This is one example where lawmakers have been very effective in getting ahead of trends. The law in question was enacted in 2008 and requires companies to gain consent before any facial recognition technologies are introduced. It is an Act which should be applauded for its foresight.

The speed at which progress is being made with facial recognition in the surveillance world is incredibly worrying. Private and public parties have an obligation to consider the impact on the human right to privacy, though much distaste has been shown for these principles in recent months. Perhaps it is ignorance, short-sightedness or a lack of competence, but without rules to govern this segment, the unintended consequences could be compounded years down the line.

Another point worth noting is the gathering momentum to stop the wrongful implementation of facial recognition. Aside from Big Brother Watch raising concerns in the UK, the City of San Francisco is attempting to implement an approval function for police forces, while Google is facing an internal rebellion. Last week, it emerged several hundred employees had signed a petition refusing to work on any projects which would aid the government in tracking citizens through facial recognition surveillance.

Although the European Commission has not confirmed or denied the report, we suspect (or at the very least hope) work is underway to address this area. Facial recognition needs rules, or we will find ourselves in a very difficult position, similar to the one we are in today.

A lack of action surrounding fake news, online bullying, cybersecurity, supply chain diversity and resilience, or the consolidation of power in the hands of a few has created some difficult situations around the world. Now the Commission and national governments are finding it difficult to claw back the progress of technology. This is one area where the European Commission desperately needs to get ahead of the technology industry; the risk and consequence of abuse is far too great.

Facebook investors brush off leaked $5 billion fine

It has been widely reported that Facebook will receive a record fine for privacy violations, but investors seem strangely pleased about it.

All the usual-suspect business papers seem to have received the leak late last week that the US Federal Trade Commission voted narrowly to fine Facebook $5 billion for data privacy violations related to the Cambridge Analytica affair. The FTC, like the FCC, has five commissioners, three of whom are affiliated with the Republican party and two with the Democrats. As ever, they voted along partisan lines, with the Democrats once more opposing the move.

The FTC has yet to make an official announcement, so we don’t know the stated reasons for the Democrats’ objections. But since that party seems to have decided it would have won the last presidential election if it wasn’t for those meddling targeted political ads, it’s safe to assume they think the fine is too lenient.

Just because the Democrats have a vested interest doesn’t mean they’re wrong, however. Of course Democrat politicians have criticised the decision, but many more independent commentators have noted that the fine amounts to less than a quarter’s profit for the social media giant. Nilay Patel, Editor-in-Chief of influential tech site The Verge, seems to speak for many in this tweet.

That Facebook’s share price actually went up after such a big fine initially seems remarkable, but all it really indicates is that Facebook had done a good job of communicating the risk to its investors, so a five-bil hit was already priced in. The perfectly legitimate point, however, is that as a punishment, roughly one month’s revenue is unlikely to serve as much of a deterrent against future transgressions.
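As a rough sanity check on those comparisons, here is the arithmetic using Facebook’s approximate 2018 results; both figures are ballpark numbers assumed for illustration, not taken from the FTC or Facebook’s filings.

```python
# Approximate 2018 figures: ~$55.8bn revenue, ~$22.1bn net income (assumed)
annual_revenue = 55.8e9
annual_profit = 22.1e9
fine = 5e9

print(f"Months of revenue: {fine / (annual_revenue / 12):.1f}")   # ~1.1
print(f"Quarters of profit: {fine / (annual_profit / 4):.1f}")    # ~0.9
```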

Patel seems very hostile to Facebook, stating in his opinion piece on the matter that “Facebook has done nothing but behave badly from inception.” A lot of this bad behaviour consists of exploiting user data, but what is really under attack seems to be Facebook’s core business model and, to some extent, the whole ad-funded model on which sites like The Verge rely.

Debates need to be had about the way the Internet operates and monetizes itself, but identifying Facebook as a uniquely bad actor when it comes to exploiting user data seems disingenuous. Laws and regulations are struggling to catch up with the business models of internet giants and there are many other questions to be asked about how they operate.

The fact that Facebook’s share price has now largely recovered from the Cambridge Analytica scandal of a year or so ago, as illustrated by the Google Finance screenshot below, indicates that investors consider these issues to be just another business risk, to be weighed up against obscene profits. While we have always considered the scandal to be overblown, it also seems clear that, as a meaningful punishment, even a $5 billion fine is totally inadequate in this case.

[Chart: Facebook share price, July 2019]

ICO gets serious on British Airways over GDPR

The UK’s Information Commissioner’s Office has swung the sharp stick of GDPR at British Airways, and it looks like the damage might be a £183.39 million fine.

With GDPR inked into the rule book in May last year, the first investigations under the new guidelines will be coming to a conclusion in the near future. There have been several judgments passed in the last couple of months, but this is one of the most significant in the UK to date.

It is worth noting that this is not the final decision; it is an intention to fine £183.39 million. We do not imagine the final figure will differ too much, as the ICO will want to show it is serious, but BA will be given the opportunity to have its voice heard with regard to the amount.

“People’s personal data is just that – personal,” said Information Commissioner Elizabeth Denham.

“When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”

The EU’s General Data Protection Regulation (GDPR) gives regulators the power to fine guilty parties up to €20 million or 4% of total worldwide annual turnover, whichever is higher. In this case, BA is being fined roughly 1.5% of its annual turnover, the penalty having been reduced for several reasons.
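As a back-of-the-envelope check, assuming BA’s annual worldwide turnover was roughly £12.2 billion (an approximate figure, not taken from the ICO’s statement), the arithmetic works out as follows.

```python
turnover = 12_200_000_000   # GBP, assumed annual worldwide turnover
fine = 183_390_000          # GBP, the ICO's intended penalty

print(f"Fine as share of turnover: {fine / turnover:.1%}")    # ~1.5%
print(f"Theoretical 4% GDPR cap: £{turnover * 0.04:,.0f}")    # ~£488m
```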

In September 2018, user traffic was directed towards a fake British Airways site, with the nefarious actors harvesting the data of more than 500,000 customers. In this instance, BA informed the authorities of the breach within the defined window, co-operated during the investigation and made improvements to its security systems.

While many might have suggested the UK watchdog, or many regulators around the world for that matter, lacks teeth when it comes to dealing with privacy violations, this ruling should put that preconception to rest. This is a weighty fine, which should force the BA management team to take security and privacy seriously; if there is one way to make executives listen, it’s hitting them in the pocket.

This should also be seen as a lesson for other businesses in the UK. Not only is the ICO brave enough to hand out fines for non-compliance, it is mature enough to reduce the fine should the affected organisation play nice. £183.39 million is well under half of what was theoretically possible and should be seen as a win for BA.

Although this is a good start, we would like to see the ICO, and other regulatory bodies, set their sights on the worst offenders when it comes to data privacy. Companies like BA should be punished when they end up on the wrong side of right, but the likes of Facebook, Google and Amazon have gotten an easy ride so far. These are the companies with the greatest influence over personal information, and the ones which need to be shown the rod.

This is one of the first heavy fines implemented in the era of GDPR, and the difference is clear. Last November, Uber was fined £385,000 for a data breach which impacted 2.7 million customers and drivers in the UK. That incident occurred prior to the introduction of GDPR, which is why the punishment looks so measly compared to the BA fine here.

The next couple of months might be a busy time in the office of the ICO as more investigations conclude. We expect some heavy fines as the watchdog bares its teeth and forces companies back onto the straight and narrow when it comes to privacy and data protection.

UK launches competition probe into digital advertising market

The UK Competition and Markets Authority wants to know if the digital advertising market is being corrupted by internet giants like Google and Facebook.

The investigation is being called the ‘Online platforms and digital advertising market study’ and it will look into the following:

  • To what extent online platforms have market power in user-facing markets, and what impact this has on consumers
  • Whether consumers are able and willing to control how data about them is used and collected by online platforms
  • Whether competition in the digital advertising market may be distorted by any market power held by platforms

So this seems to be a combination of a monopoly investigation and an audit of how digital platforms are handling personal data. The dominance of the Silicon Valley platforms over the digital advertising market seems clear, so the question is whether they abuse that dominance to unfairly crush competition. The matter of data privacy seems secondary, especially since there are already loads of similar investigations happening around the world.

“It is our job to ensure that companies innovate and compete,” explained CMA Chairman Andrew Tyrie. “And every bit as much, it’s our job to ensure that consumers are protected from detriment. Implementation of the Furman Report should help a lot. As part of the work announced today, we will be advising Government on how aspects of Furman can most effectively be implemented.

“Much about these fast-changing markets is a closed book to most people. The work we do will open them up to greater scrutiny, and should give Parliament and the public a better grip on what global online platforms are doing. These are global markets, so we should and will work more closely than before with authorities around the world, as we all consider new approaches to the challenges posed by them.”

“The market study will examine concerns about how online platforms are using people’s personal data, including whether making this data available to advertisers in return for payment is producing good outcomes for consumers,” said CMA Chief Executive Andrea Coscelli. “The CMA will examine whether people have the skills, knowledge and control over how information about them is collected and used, so they can decide whether or not to share it in the first place.”

While they’re at it, why don’t they do an investigation into how many people read the terms and conditions of using a service, let alone understand them? While there can be little doubt that online platforms have been very effective at monetising third party data, anyone who uses them for free and then claims to feel exploited is being disingenuous. Much more interesting will be the measures taken if they’re viewed as a harmful monopoly.