Tinder comes under the scope of Irish GDPR watchdog

Dating apps have forever changed the way millennials find relationships (for however long they last…) but Tinder has found itself under the scrutiny of the Irish regulator.

The dating trailblazer has joined serial privacy offender Google as the focal point of an investigation by Europe’s lead GDPR regulator, the Irish Data Protection Commission. The question is whether MTCH Technology Services Limited, the Match Group company responsible for the Tinder platform in Europe, complies with GDPR in its processing of user data.

“The identified issues pertain to MTCH Technology Services Limited’s ongoing processing of users’ personal data with regard to its processing activities in relation to the Tinder platform, the transparency surrounding the ongoing processing, and the company’s compliance with its obligations with regard to data subject right’s requests,” a statement from the regulator said.

Interestingly enough, a recent investigation from the Norwegian Consumer Council (NCC) suggested several dating apps, including Grindr, OkCupid and Tinder, might be breaking GDPR. The investigation suggested nine out of ten of the most popular dating apps were transmitting data to ‘unexpected third parties’ without seeking consent from users.

As these applications collect sensitive information such as sexual preferences, behavioural data and location, there could be quite the backlash. The Irish Data Protection Commission will investigate how this information is processed, whether it is then transmitted on to third parties, and whether the developers are being transparent enough with their users.

Alongside the Tinder investigation, the Irish watchdog is also looking into a regular target of the privacy enforcement community: Google.

Once again, transparency is the key word here, as it so often is when one of the Silicon Valley residents is placed under the microscope. The authority will seek to understand how Google collects and processes location data, and whether it has been effectively informing users before obtaining their consent.

Google seems to be under the constant scrutiny of one regulator or another, thanks to the complex web that is its operations. No-one outside of Google genuinely understands every aspect of the business, so a new potential privacy scandal emerges every so often as the layers of complexity are pulled back. In this investigation, it is not entirely clear which product or service is the focal point.

What is worth bearing in mind is that any new privacy investigations are most likely to focus on activity which took place after the introduction of GDPR in 2018. Anything prior to this, for example the Equifax leak or the Yahoo hack, would not have been subject to the same financial penalties.

For the Tinder and Google investigations, any wrongdoing could be punished with a fine of up to €20 million or 4% of total annual revenues, whichever is greater. We haven’t seen many of these fines to date because of the timing of the incidents or investigations, but regulators might well be looking for a case to prove there is a bite behind the regulatory bark: a means to scare corporates into action and proactive security measures.

An excellent example of this enforcement concerns Facebook and the Cambridge Analytica scandal. An investigation into potential GDPR violations takes several different factors into account: the incident itself, security procedures and features, transparency with the user and assistance with the investigation, to name a few. Facebook did not cover itself in glory and was not exactly helpful during the investigation; CEO Mark Zuckerberg refused to appear in front of a Parliamentary Committee in the UK when called upon.

As this incident occurred prior to the introduction of GDPR, the Information Commissioner’s Office in the UK was only permitted to fine the social media giant £500,000. Facebook’s annual revenue for 2013, when the incident occurred, was $7.87 billion. The maximum penalty which could have been applied under GDPR would have been $314 million.
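For readers who want to sanity-check that figure, here is a minimal Python sketch of the upper-tier GDPR penalty rule (the greater of €20 million or 4% of annual worldwide revenue); the revenue number is the one quoted above and currency conversion is ignored for simplicity.

```python
# Minimal sketch of the upper-tier GDPR penalty rule: the greater of
# EUR 20 million or 4% of annual worldwide revenue. The revenue figure is
# the one quoted above; currency conversion is ignored for simplicity.

def max_gdpr_fine(annual_revenue: float, flat_cap: float = 20e6) -> float:
    """Return the theoretical maximum upper-tier GDPR fine."""
    return max(flat_cap, 0.04 * annual_revenue)

facebook_2013_revenue = 7.87e9  # as cited above, in dollars
print(f"Theoretical maximum fine: ${max_gdpr_fine(facebook_2013_revenue) / 1e6:.1f}m")
# -> Theoretical maximum fine: $314.8m (roughly the $314 million quoted above)
```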

Although the potential fines have been well-documented, until there is a case to point to, most companies will push the boundary between right and wrong. Caution is generally only practised when the threat of punishment is followed through and an example is made.

Netherlands named as Europe’s meanest GDPR henchman

The Netherlands has seen the most GDPR breach notifications reported to the regulator, but the spread of activity, or inactivity in some nations, is quite remarkable.

In the eight months since GDPR came into force across Europe, law firm DLA Piper says regulators have been alerted to breaches more than 59,000 times. The Netherlands, Germany and the UK have seen the biggest numbers of notifications, with 15,400, 12,600 and 10,600 respectively, though the new privacy status quo has not been embraced with such enthusiasm everywhere.

“GDPR has driven the issue of data breach well and truly into the open,” said Ross McKean, a partner at DLA Piper. “The rate of breach notification has increased by over 12% compared to last year’s report and regulators have been busy road-testing their new powers to sanction and fine organisations.”

The scale and depth of these breaches vary considerably, from a mis-sent email here to a cybersecurity hack there, but the number does represent a significant shift in the tide: data breaches are now being taken seriously, at least in some nations.

As you can see from the table below, where we have selected the ten largest economies across the bloc, the variance is quite interesting (a short sketch after the table shows how the per-capita column is derived).

Nation Breaches in total Breaches per 100,000 people
Germany 12,600 15.6
UK 10,600 16.3
France 1,300 1.9
Italy 610 0.9
Spain 670 1.3
Netherlands 15,400 89.8
Sweden 2,500 24.9
Poland 2,200 5.7
Belgium 420 3.6
Austria 580 6.6
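
For what it’s worth, the per-capita column is simply the notification count divided by population and scaled to 100,000. A quick sketch of the arithmetic follows; the population figures are rough estimates added for illustration, not numbers from the DLA Piper report, so the results differ slightly from the table.

```python
# How a 'breaches per 100,000 people' figure is derived: notifications divided
# by population, scaled to 100,000. Populations are rough estimates added for
# illustration (not from the DLA Piper report), so results differ slightly
# from the table above.
notifications = {"Netherlands": 15_400, "Germany": 12_600, "UK": 10_600}
population = {"Netherlands": 17_200_000, "Germany": 83_000_000, "UK": 66_400_000}

for country, count in notifications.items():
    per_100k = count / population[country] * 100_000
    print(f"{country}: {per_100k:.1f} notifications per 100,000 people")
```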

There might be a few reasons for the increased number of notifications in certain countries, not least the mix of industries present. For example, Ireland has the 4th largest number of notifications to the data watchdog (c.3,800) despite having one of the smaller populations of the 28 member states. This is also a country whose economy and society are dominated by the presence of the technology sector.

This will explain some of the variance in the figures, but not all of it. Take Italy, for example. This is the 4th largest economy across the bloc, but in the eight months since GDPR was introduced on May 25, the regulator was notified of only 610 data breaches. There are two possible explanations for such a low figure:

  • Italian businesses have some of the most advanced data protection policies and mechanisms worldwide
  • The culture of owning mistakes and reporting data protection and privacy inadequacies is almost non-existent in the country

We have singled out Italy here, but there are quite a few nations which would fall into this category of (a) squeaky clean or (b) not caring about GDPR. Spain has reported 670 breach notifications to the regulator, Belgium 420, Greece 70, Cyprus 35 and Liechtenstein 15.

Although GDPR has certainly made promising steps forward in forcing a more privacy-orientated society and economy, the issues will persist unless the same stringent attitudes are adopted across the board. Such is the fluid and borderless nature of the digital economy that a weak link in the chain can cause widespread disruption. All economies are interlinked, make no mistake about that.

Interestingly enough, momentum will gather as the digital economy becomes more complex. Security and data protection are still not high enough priorities on the corporate agenda, although the trends are heading in the right direction. Breaches will continue to occur, and fines will start to get very large.

GDPR violations carry a maximum penalty of €20 million or 4% of annual worldwide revenues, whichever is greater. These numbers can be reduced if the breach is reported in a timely manner and the company is helpful. However, fines to date have not been of this magnitude, largely because the incidents occurred prior to the introduction of GDPR. Any breach which occurred after May 25 will be met with a much sharper stick than previously.

For example, Equifax is a company which collects and aggregates information on over 800 million individual consumers and more than 88 million businesses worldwide. Hundreds of millions of customers and consumers were impacted by the Equifax data breach of 2017, though the maximum fine which could be imposed by the UK’s Information Commissioner’s Office (ICO) was £500,000. Under GDPR, the maximum fine would have been the greater of €20 million or 4% of annual worldwide revenues.

GDPR took Europe into the 21st century when it comes to data protection and privacy. It forced companies and regulators to take a more stringent approach to the security of personal and corporate information. Despite the pain everyone had to endure to be GDPR-compliant, it should only be viewed as a good thing.

Data breaches are almost certainly going to continue, but one thing you can guarantee is the numbers are going to be getting a lot bigger.

The internet could be set for a fresh GDPR nightmare

A new academic study into online consent management platforms has concluded many of them could be flouting GDPR rules.

The study was conducted by a consortium of universities and its findings published under the title ‘Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence’. We’re all aware of the pop-ups that have, well, popped up since GDPR came into force, requiring us to click ‘I agree’ to cookies and that sort of thing when we first visit a website, and often continually afterwards. But what are we actually agreeing to?

The issue the study addresses is how much information people are supplied with when asked for their consent, as well as the matter of presumed consent – i.e. opt-out as opposed to opt-in. In many cases this process is managed by third-party consent management platforms (CMPs), and that’s what the study focused on.

“We scraped the designs of the five most popular CMPs on the top 10,000 websites in the UK,” says the abstract of the report. “We found that dark patterns and implied consent are ubiquitous; only 11.8% meet the minimal requirements that we set based on European law. Second, we conducted a field experiment with 40 participants to investigate how the eight most common designs affect consent choices.

“We found that notification style (banner or barrier) has no effect; removing the opt-out button from the first page increases consent by 22–23 percentage points; and providing more granular controls on the first page decreases consent by 8–20 percentage points. This study provides an empirical basis for the necessary regulatory action to enforce the GDPR, in particular the possibility of focusing on the centralised, third-party CMP services as an effective way to increase compliance.”

So, at its simplest, the study is saying the vast majority of CMPs flout European law and thus expose their users to enforcement action. You can download the full report via the paper’s abstract, but if you don’t feel like sifting through the typically opaque academic writing, TechCrunch has done a great job of decoding it.
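As a rough illustration of what ‘scraping’ consent pop-ups involves, the sketch below fetches a page and looks for tell-tale CMP loader scripts in the HTML. The signature strings and target URL are illustrative assumptions, not the study’s actual tooling or CMP list.

```python
# Hypothetical sketch of detecting a consent management platform (CMP) on a
# page, in the spirit of the study's scraping step. The signature strings and
# the target URL are illustrative assumptions, not the paper's actual crawler.
import requests

# Substrings that commonly appear in CMP loader script URLs (illustrative).
CMP_SIGNATURES = {
    "QuantCast Choice": "quantcast.mgr.consensu.org",
    "OneTrust": "cdn.cookielaw.org",
    "Cookiebot": "consent.cookiebot.com",
}

def detect_cmps(url: str) -> list:
    """Return the names of any known CMP signatures found in the page HTML."""
    html = requests.get(url, timeout=10).text.lower()
    return [name for name, needle in CMP_SIGNATURES.items() if needle in html]

print(detect_cmps("https://example.com"))  # hypothetical target site
```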

GDPR compliance was always a minefield and the only surprise is that enforcement action has been so muted so far. That could be set to change with studies like this, however, as such widespread transgression can surely not be allowed to go unchallenged. On the other hand the GDPR people could end up deciding the current rules are too strict and unworkable, but that’s not likely.

Fitness tracker use is exploding in the US, especially among rich young women

A recent Pew survey shows 21% of US adults regularly wear a smartwatch or fitness tracker. Over half of them think it acceptable for the device makers to share user data with medical researchers.

According to the survey results shared by the Pew Research Center, an American think-tank, smartwatch and fitness tracker adoption may have crossed the chasm from early adopters to early majority. 21% of the surveyed panellists are already regularly using a smartwatch or specialised tracker to monitor their fitness.

Such a trajectory is in line with the recent market feedback that the total wearables market volume has nearly doubled from a year ago (though what counts as wearables may be contested), and both wristbands and smartwatches have grown by nearly 50%.

When it comes to differences in adoption rates between social groups, penetration rises to nearly a third (31%) among those with a household income of over $75,000. In comparison, among those with a household income of less than $30,000, only 12% regularly wear such a device. Beyond income, women, Hispanic adults, and respondents with a college degree or above are also more likely to wear such devices than men, non-college graduates, and other major ethnic groups.

Another question on the survey asked respondents whether they think it is acceptable for makers of a fitness tracking app to share “their users’ data with medical researchers seeking to better understand the link between exercise and heart disease”. The response was divided: 41% of all respondents said yes, as opposed to 35% saying no, while 22% were unsure. However, the percentage of those who consider such sharing acceptable rose to 53% among respondents who are already regularly using such devices, compared with 38% among non-adopters.

Due to the lack of a GDPR equivalent in the US, it is not much of a surprise that there is neither a consensus among users nor a standard industry practice related to user data sharing. “Recently, some concerns have been raised over who can and should have access to this health data. Military analysts have also expressed concern about how third parties can use the data to find out where there is an American military presence,” Pew said in its press release.

Meanwhile, how useful the data tracked by these devices can be for medical research purposes is also debatable. For example, even the best of them, the Apple Watch, does not qualify as a medical device, despite being “FDA certified”.

The survey was conducted by Pew from 3 to 17 June 2019. 4,272 qualified panellists responded to the survey.

Facebook gets a thumbs-up from privacy officials

An Advocate General at the Court of Justice of the European Union (CJEU) has said Facebook is not in violation of privacy rules in transferring data to US servers.

In a rare sign of approval from privacy officials, Facebook has won the backing of Advocate General Saugmandsgaard Øe, who has confirmed Facebook Ireland is acting legally by sending data to servers located in the US. The opinion from Øe is in connection with a lawsuit filed by Austrian privacy advocate Max Schrems.

Removing all the legal jargon, Øe’s opinion is that there are adequate protections in place to ensure the rights of European citizens are maintained in the event data is transferred from Facebook’s Irish servers to be processed in the US. Agreements have been signed between the two parties which contain contractual clauses to enforce the privacy rights of European citizens.

Although this is the opinion of the Advocate General and not binding for the CJEU, it is a very positive (and perhaps surprising) note for a company which so often flirts with privacy controversy.

For Schrems, this is not the most encouraging of signs. The CJEU is not bound by Øe’s opinion, but the court rarely takes a different view from such high-ranking officials.

The court case in question was initially filed in 2015 by Schrems, the man largely responsible for the downfall of the Safe Harbour mechanism governing trans-Atlantic data transfers. Schrems argued that, in light of the privacy violations highlighted by Edward Snowden, the Irish data protection authorities were falling short of their own responsibilities. As it had been proven that intelligence agencies were spying on citizens, Schrems argued it was not possible to maintain the privacy rights of European citizens if data is transferred to the US.

With the downfall of Safe Harbour, the mechanism that deemed protections were being upheld in the US, big questions were being asked. Schrems suggested that even with the contractual clauses in place, protections could not be maintained, and that there was little justification for transferring data to US servers in the first place.

Øe’s opinion disagrees with these assertions. Firstly, the ‘exporter’ has put appropriate protections in place, and secondly, the US Government is entitled to process some data under the banner of national security.

Schrems has been fighting Facebook and other internet platforms for years in an attempt to stop the flow of information across the Atlantic. He and other privacy advocates suggest this information is being used to aid US intelligence agencies in snooping on European citizens. While his actions were certainly successful in bringing down Safe Harbour, he has been less successful in arguing the invalidity of its replacement mechanism, Privacy Shield.

Data protection is, and will continue to be, a significant talking point in the increasingly digital world, though this is a case which will add some confidence in the internet platforms so many people blindly trust. The new digital world needs people like Schrems to hold Big Tech accountable, though it does appear this is a case where the internet giants are on the right side of the line.

You don’t need to understand AI to trust it, says German politician

A German government minister responsible for artificial intelligence has spoken about the European vision for AI, especially how to build trust among non-expert users.

Prof. Dr. Ina Schieferdecker, a junior minister in Germany’s Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF), who has artificial intelligence in her portfolio, recently attended an AI Camp in Berlin (or KI-Camp in German, for “künstliche Intelligenz”). She was interviewed there by DW (Deutsche Welle, Germany’s answer to the BBC World Service) on how the German government and the European Union can help alleviate concerns about AI among ordinary users of the internet and information technologies.

When addressing the question that AI is often seen as a “black box”, and the demand for algorithms to be made transparent, Schieferdecker said she saw it differently. “I don’t believe that everyone has to understand AI. Not everyone can understand it,” she said. “Technology should be trustworthy. But we don’t all understand how planes work or how giant tankers float on water. So, we have learn (sic.) to trust digital technology, too.”

Admittedly, not all Europeans share this way of looking at AI and non-expert users. Finland, the current holder of the European presidency, believes that as many people as possible should understand what AI is about, not only to alleviate concerns but also to unleash its power more broadly. So it decided to give 1% of its population AI training.

Schieferdecker also called for a communal approach to developing AI, involving the science, technology, education and business sectors. She also demanded that AI developers consider users’ safety concerns and other basic principles from the beginning. This is very much in line with the EU’s “Ethics guidelines for trustworthy AI”, published in April this year, where guideline number one states: “AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.” As we subsequently reported, those guidelines are too vague and lack tangible measurements of success.

Schieferdecker was more confident. She believed that when Germany, which presumably shaped the guidelines heavily, assumes the European presidency in the second half of 2020, it “will try to pool Europe’s strengths in an effort to transform the rules on paper into something real and useable for the people.”

The interview also touched upon how user data, for example shopping or browsing records, is being used by AI in opaque ways, and the privacy concerns this may raise. Schieferdecker believed GDPR has “made a difference”, while admitting there are “issues here and there, but it’s being further developed.” She also claimed the government is working to achieve some form of data sovereignty and to “offer people alternatives to your Amazons, Googles, Instagrams”, without disclosing further details.

The camp took place on 5 December in Berlin as part of the Science Year 2019 programme (Wissenschaftsjahr 2019) and was co-organised by the BMBF and the Society for Information Technology (Gesellschaft für Informatik, GI), an industry organisation. The interview was subjected to a vetting process by the BMBF before it could be published. As DW put it, “the text has been redacted and altered by the BMBF in addition to DW’s normal editorial guidelines. As such, the text does not entirely reflect the audio of the interview as recorded”.

US tech fraternity pushes its own version of GDPR

The technology industry might enjoy a light-touch regulatory landscape, but change is on the horizon, with what appears to be an attempt to be the master of its own fate.

In an open letter to senior members of the US Congress, 51 CEOs from the technology and business community have asked for a federal law governing data protection and privacy. It appears to be a push to gain consistency across the US, removing the ability of aggressive and politically ambitious Attorneys General and Senators to mount their own local crusades against the technology industry.

Certain aspects of the framework proposed to the politicians are remarkably similar to GDPR, such as the right for consumers to control their own personal data, seek corrections and even demand deletion. Breach notifications could also be introduced, though the coalition of CEOs is calling for the FTC to be the tip of the spear.

Interestingly enough, there are also calls to remove ‘private right of action’, meaning only the US Government could take an offending company to court over violations. In a highly litigious society like the US, this would be a significant win for any US corporation.

And while there are some big names attached to the letter, there are some notable omissions. Few will be surprised that Facebook CEO Mark Zuckerberg has not signed a letter requesting a more comprehensive approach to data privacy, but Alphabet, Microsoft, Uber, Verizon, T-Mobile US, Intel, Cisco and Oracle are also absent.

“There is now widespread agreement among companies across all sectors of the economy, policymakers and consumer groups about the need for a comprehensive federal consumer data privacy law that provides strong, consistent protections for American consumers,” the letter states.

“A federal consumer privacy law should also ensure that American companies continue to lead a globally competitive market.”

CEOs who have signed the letter include Jeff Bezos of Amazon, Alfred Kelly of Visa, Salesforce’s Keith Block, Steve Mollenkopf of Qualcomm, Randall Stephenson of AT&T and Brian Roberts of Comcast.

Although it might seem unusual for companies to be requesting a more comprehensive approach to regulation, the overarching ambition seems to be one of consistency. Ultimately, these executives want a single, consolidated approach to data protection and privacy, managed at a Federal level, as opposed to a potentially fragmented environment with the States applying their own nuances.

It does appear the technology and business community is attempting to have some sort of control over its own fate. As much as these companies would like the light-touch regulatory environment to continue, this is not an outcome which is on the table. The world is changing, but by consolidating this evolution under a single agency, lobbying can be much more effective, and cheaper.

The statement has been made through Business Roundtable, a lobby group for larger US corporations, requesting a national consumer privacy law which would pre-empt any equivalent from state or local governments. The group also wants definitions and ownership rules modernised, and a risk-orientated approach taken to data management, storage and analysis.

Ultimately, this looks like a case of damage control. There seems to be an acceptance that regulatory overhaul is coming; however, the CEOs are attempting to limit their exposure. By consolidating the regulations under the FTC, punishments and investigations could theoretically only be brought through a limited number of routes, and companies would only have to worry about a single set of rules.

Consistency is a very important word in the business world, especially when it comes to regulation.

What we are currently seeing across the US is aggression towards the technology industry from almost every legal avenue. Investigations have been launched by Federal agencies and State-level Attorneys General, while lawsuits have also been filed by non-profits and law firms representing citizens. It’s a mess.

Looking at the Attorneys General, there do seem to be a couple who are attempting to make a name for themselves by pushing into the public eye. These might well be the first steps towards higher political office. For example, it would surprise few if New York Attorney General Letitia James harbours larger political ambitions, and striking a blow for the consumer against Facebook would certainly gain positive PR points.

Another interesting element is the fragmentation of the regulations governing data protection and privacy. For example, there are more aggressive rules in place in New York and California than in North Carolina and Alaska. Within California, it becomes even more fragmented; just look at the work the City of San Francisco is undertaking to limit the power of facial recognition and data analytics. These rules will effectively make it impossible to implement the technology, whereas in the State of Illinois technology companies only have to seek explicit consent from the consumer.

Inconsistency creates confusion and non-compliance. Confusion and non-compliance cost a lot of money through legal fees, restructuring, product customisation and fines.

Finally, from a PR perspective, this is an excellent move. The perception of Big Business at the moment is that it does not care about the privacy rights of citizens. There have been too many scandals and data breaches for anyone to take claims of caring about consumer privacy seriously. By suggesting a more comprehensive and consistent approach to privacy, Big Business can more legitimately claim it is the consumer champion.

A more consistent approach to regulation helps the Government, consumers and business; however, this is a move by the US technology and business community to control its own fate, to decrease the power and influence of disruptive Attorneys General and to make the regulatory evolution more manageable.

Momentum is gathering towards a more comprehensive and contextually relevant privacy regulatory landscape, and it might not be too long before a US version of Europe’s GDPR is introduced.

Microsoft has also been a member of the eavesdropping gang – report

Microsoft contractors have been listening to Skype and Cortana conversations without the full knowledge and consent of the apps’ users, claims a report.

We were almost immediately proved wrong when we said Microsoft, in comparison with Apple, Google, and Amazon, “fortunately has not suffered high profile embarrassment” over its voice assistant Cortana. Motherboard, part of the media outlet Vice, reported that Microsoft contractors, some of them working from home, have been listening to some Skype calls made using the app’s instant translation feature, as well as users’ interactions with Cortana.

Motherboard acquired audio clips, screenshots and internal documents showing that Microsoft, just like its peers, has been employing humans to continually improve its software algorithms and the quality and accuracy of the translations and responses. Also like the other leading tech companies that run voice assistants, Microsoft is ambiguous in its consumer communication, lax in its policy implementation, and does not give users a way to opt out.

“The fact that I can even share some of this with you shows how lax things are in terms of protecting user data,” the Microsoft contractor turned whistle-blower, who supplied the evidence and decided to remain anonymous, told Motherboard.

“Microsoft collects voice data to provide and improve voice-enabled services like search, voice commands, dictation or translation services,” Microsoft said in a statement sent to Motherboard. “We strive to be transparent about our collection and use of voice data to ensure customers can make informed choices about when and how their voice data is used. Microsoft gets customers’ permission before collecting and using their voice data.”

Microsoft’s “Skype Translator Privacy FAQ” states that “Voice conversations are only recorded when translation features are selected by a user.” It then goes on to guide users on how to turn off the translation feature. There is no way for a customer to use the translation service without having the conversation recorded, nor does the official document say the recorded conversations may be listened to by another human.

Due to the “gig economy” nature of the job, some contractors work from home when undertaking the tasks to correct translations or improve Cortana’s response quality. This is also made obvious by Microsoft contractors’ job listings. However, the content they deal with can be sensitive, from conversations between people in an intimate relationship, to health status and home addresses, as well as query records on Cortana. “While I don’t know exactly what one could do with this information, it seems odd to me that it isn’t being handled in a more controlled environment,” the whistle-blower contractor told Motherboard.

The report does not specify where the eavesdropping it uncovered took place, but the line in the Microsoft statement that “We … require that vendors meet the high privacy standards set out in European law” can’t help but raise some suspicion that the practice could run afoul of GDPR, the European Union’s privacy protection regulation.

At the time of writing, Microsoft has not announced a suspension of the practice.

European court rules websites are equally responsible for some shared data

If you’ve got Facebook ‘like’ functionality on your website then you could be held responsible for any misuse of user data by the social media giant.

The Court of Justice of the European Union made this judgment as part of an ongoing action brought by a German consumer rights group, Verbraucherzentrale NRW, against German clothing e-tailer Fashion ID. It turns out that merely having the ‘like’ button embedded on your site results in personal data being automatically transferred to Facebook for it to use in whatever way it chooses, without the consent or even knowledge of the exploited punter.

Sifting through the legalese, it looks like the court concluded that Fashion ID is responsible for the user data it passes on to Facebook, since the only reason it embedded the button in the first place was the commercial benefit it gets from people sharing its stuff on social media. This, in turn, means it is subject to certain data protection obligations, such as at least telling visitors to its site what they’re letting themselves in for.
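To make the mechanism concrete, here is a rough, hypothetical sketch of the sort of request a visitor’s browser fires at Facebook when a page embeds the ‘like’ plugin, carrying the referring page and any Facebook cookies; the URL, parameters and cookie value are simplified placeholders rather than Facebook’s documented behaviour.

```python
# Rough, hypothetical illustration of the data flow the court focused on:
# embedding the 'like' plugin makes the visitor's browser request it directly
# from Facebook, carrying the referring page URL and any Facebook cookies.
# The URL, parameters and cookie value are simplified placeholders.
import requests

visited_page = "https://example-shop.test/product/123"  # hypothetical retailer page

browser_headers = {
    "Referer": visited_page,           # tells Facebook which page was visited
    "Cookie": "fr=placeholder-value",  # would link the hit to a Facebook profile
}

resp = requests.get(
    "https://www.facebook.com/plugins/like.php",
    params={"href": visited_page},  # the page being 'liked'
    headers=browser_headers,
    timeout=10,
)
print(resp.status_code)
```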

While the case itself is relatively niche and arcane, it could represent the thin end of the wedge when it comes to data protection and consumer rights online in general. The internet is awash with contraptions, such as cookies, designed to track your every move and feed that data into the cyber hive-mind, all the better to work out how best to entice you into spending cash on stuff you didn’t even know you wanted.

Having said that, it could be that, since Cambridge Analytica, the internet has already got the memo, as those ‘like’ buttons seem to be much less common than they were a few years ago. High-profile fines for Facebook and other violators of GDPR rules probably mean that website owners have become wary of embedding any old third-party rubbish onto their sites, and rulings such as this should serve as a warning not to slip back into bad habits.

ICO gets serious on British Airways over GDPR

The UK’s Information Commissioner’s Office has swung the sharp stick of GDPR at British Airways, and it looks like the damage might be a £183.39 million fine.

With GDPR inked into the rule book in May last year, the first investigations under the new guidelines will be coming to a conclusion in the near future. There have been several judgments passed in the last couple of months, but this is one of the most significant in the UK to date.

What is worth noting is that this is not the final decision; it is a notice of intention to fine £183.39 million. We do not imagine the final figure will differ too much, as the ICO will want to show it is serious, but BA will be given the opportunity to have its voice heard with regard to the amount.

“People’s personal data is just that – personal,” said Information Commissioner Elizabeth Denham.

“When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”

The EU’s General Data Protection Regulation (GDPR) gives regulators the power to fine guilty parties €20 million or as much as 4% of total annual revenues, whichever is greater. In this case, BA will be fined roughly 1.5% of its total revenues for 2017, with the fine being reduced for several reasons.

In September 2018, BA disclosed that user traffic had been diverted towards a fake British Airways site, with the nefarious actors harvesting the data of more than 500,000 customers. In this instance, BA informed the authorities of the breach within the defined window, co-operated during the investigation and made improvements to its security systems.

While many might have suggested the UK watchdog, or many regulators around the world for that matter, lacks teeth when it comes to dealing with privacy violations, this ruling should put that preconception to rest. This is a weighty fine, which should force the BA management team to take security and privacy seriously; if there is one way to make executives listen, it’s to hit them in the pocket.

This should also be seen as a lesson for other businesses in the UK. Not only is the ICO brave enough to hand out fines for non-compliance, it is mature enough to reduce the fine should the affected organisation play nice. £183.39 million is well short of what was theoretically possible and should be seen as a win of sorts for BA.

Although this is a good start, we would like to see the ICO, and other regulatory bodies, set their sights on the worst offenders when it comes to data privacy. Companies like BA should be punished when they end up on the wrong side of right, but the likes of Facebook, Google and Amazon have had an easy ride so far. These are the companies which have the greatest influence over personal information, and the ones which need to be shown the rod.

This is one of the first heavy fines implemented in the era of GDPR and the difference is clear. Last November, Uber was fined £385,000 for a data breach which impacted 2.7 million customers and drivers in the UK. That incident occurred prior to the introduction of GDPR, which is why the punishment looks so measly compared to the BA fine.

The next couple of months might be a busy time in the office of the ICO as more investigations conclude. We expect some heavy fines as the watchdog bares its teeth and forces companies back onto the straight and narrow when it comes to privacy and data protection.