Privacy International lines up US firms for GDPR breaches

UK data protection and privacy advocacy group Privacy International has submitted complaints to European watchdogs alleging GDPR violations at several US firms, including Oracle, Equifax and Experian.

The complaints have been submitted to regulators in the UK, Ireland and France, calling into question the data broker activities of Oracle and Acxiom, as well as those of ad-tech companies Criteo, Quantcast and Tapad, and credit referencing agencies Equifax and Experian. The complaints focus specifically on the depth of personal data processing, which Privacy International believes violates Articles 5 and 6 of the General Data Protection Regulation (GDPR).

“It’s been more than five months since the EU’s General Data Protection Regulation (GDPR) came into effect,” a Privacy International statement read. “Fundamentally, the GDPR strengthens rights of individuals with regard to the protection of their data, imposes more stringent obligations on those processing personal data, and provides for stronger regulatory enforcement powers – in theory. In practice, the real test for GDPR will be in its enforcement.

“Nowhere is this more evident than for data broker and ad-tech industries that are premised on exploiting people’s data. Despite exploiting the data of millions of people, [these companies] are on the whole non-consumer facing and therefore rarely have their practices challenged.”

The GDPR articles in question relate to the collection and processing of information. Article 5 dictates that a company must be completely transparent about how it collects and processes information, and about its reasons for doing so. It also requires that no more data is collected than necessary (data minimisation) and that reasonable steps are taken to erase data once its purpose has been fulfilled (storage limitation). Article 6 states that a company must have a lawful basis, such as the individual’s consent, to collect and process information for an explicit purpose; it is broad-brush collection, storage and continued exploitation of data that is being tackled here.

In both articles, the objective is to ensure companies are specific in their collection of personal information, and that it is utilised in a timely manner and deleted once it has served its purpose. These are two of the articles which will hit the data-sharing economy the hardest, and it will be interesting to see how stringently GDPR is enforced should any evidence of wrongdoing emerge.

This is where Privacy International takes issue with the firms. The advocacy group is challenging their business practices on the principles of transparency, fairness, lawfulness, purpose limitation, data minimisation, accuracy, and integrity and confidentiality. It is also requesting further investigations under Articles 13 and 14 (the right to information), Article 15 (the right of access), Article 22 (automated decision making and profiling), Article 25 (data protection by design and default) and Article 35 (data protection impact assessments).

While GDPR sounds very scary, the reality is that no-one has been punished to the full extent of the regulation yet. This might be because every company has taken the guidance on board and is operating entirely within the legal parameters, though we doubt this is the case. More likely, no-one has been caught yet.

The threat of a €20 million fine, or one of up to 4% of a business’ total annual turnover, whichever is the greater, is nothing more than a piece of paper at the moment. If there is no evidence or fear that authorities will punish to the full extent of the law, GDPR doesn’t act as much of a protection mechanism or a deterrent. When a genuine violation of GDPR is uncovered, Europe needs to bare its teeth and demonstrate there will be no breathing room.

This has been the problem for years in the technology industry; fines have been dished out, though there has been no material impact on the business. The staggering growth of revenues in the industry has far exceeded the ability of regulators to act as judge and executioner. Take the recent fines for Apple and Samsung over planned obsolescence in Italy. The $10 million and $5 million fines for Apple and Samsung would have taken 20 and 16 minutes respectively to pay off. This is not good enough.
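The arithmetic behind those payoff times is easy to sketch. The snippet below is our own illustration, not from either regulator's filing: the annual revenue figures are rough assumptions (roughly $265bn for Apple and $220bn for Samsung Electronics), and the result varies with which year's revenue you pick, so the Samsung figure lands a few minutes either side of the article's estimate.

```python
# Hypothetical illustration: how quickly a flat fine is absorbed by a large
# company's revenue stream. Revenue figures are rough, assumed annual totals
# in USD, not figures taken from the fines themselves.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def minutes_to_pay_off(fine_usd, annual_revenue_usd):
    """Return how many minutes of revenue it takes to cover a fine."""
    revenue_per_minute = annual_revenue_usd / MINUTES_PER_YEAR
    return fine_usd / revenue_per_minute

# Assumed annual revenues: ~$265bn (Apple), ~$220bn (Samsung Electronics)
apple_minutes = minutes_to_pay_off(10_000_000, 265e9)
samsung_minutes = minutes_to_pay_off(5_000_000, 220e9)
print(f"Apple: ~{apple_minutes:.0f} min, Samsung: ~{samsung_minutes:.0f} min")
```

Either way, the point stands: a single-digit-millions fine is a rounding error measured in minutes of revenue.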

Regulators now have the authority to hold the industry’s suspect characters accountable for nefarious actions concerning data protection and privacy, but they have to prove themselves capable of wielding the axe. Until Europe shows it has a menacing side, nothing will change for the better.

Facebook referred to EU over suspect tracking methods

The UK’s Information Commissioner’s Office has referred an investigation into Facebook to the EU’s lead data protection watchdog over concerns about how the internet giant is tracking users.

The investigation, which was initially launched in May 2017, is primarily focused on the Cambridge Analytica scandal, though this might only be the tip of the iceberg for Facebook. Aside from fining the social media giant, the ICO has referred the case to the Irish Data Protection Commission, as the lead supervisory authority for Facebook under the General Data Protection Regulation (GDPR). As you can see below, Cambridge Analytica might only be the beginning of Facebook’s headache.

“Since we began, the scope of our investigation has extended to 30 organisations, we have formally interviewed 33 individuals and are working through forensic analysis of 700 terabytes of data,” said Information Commissioner Elizabeth Denham. “In layman’s terms, that’s the equivalent of 52 billion pages.

“Now I have published a report to Parliament that brings the various strands of our investigation up to date. It sets out what we have found and what we now know. But it is not the end. Some of the issues uncovered in our investigation are still ongoing or will require further investigation or action.”

Those who practise the dark arts of hyper-targeted advertising rarely explain what information is specifically held, or how detailed a picture is built up from primary-sourced data and third-party sources. Few have a genuine understanding of the complexities of these advertising machines, though this is the foundation of various investigations. Transparency is the key word here, with many wanting the curtain pulled aside and the mechanics explained.

The fine is clear evidence the ICO is not happy with the state of affairs, though the continuation of the investigation and the referral to the EU overlords suggest there are more skeletons to be uncovered in-between Zuckerberg’s V-neck jumpers and starch-ironed chinos.

“We have referred our ongoing concerns about Facebook’s targeting functions and techniques that are used to monitor individuals’ browsing habits, interactions and behaviour across the internet and different devices to the IDPC,” said Denham.

The initial focus of the investigation might have been political influence, though the more details which emerge, the less comfortable pro-privacy bureaucrats in Brussels are likely to feel. Regulating the slippery Silicon Valley natives has always been a tricky job, but with the Facebook advertising machine becoming increasingly exposed, the rulebook governing the data sharing economy might well be in need of a refresh.

Google responds to Google+ data privacy breach by shutting it down

The product designed by Google to take on Facebook appears to have suffered from a similar flaw in the way it allowed developers to access user data.

According to a report from the WSJ, a software glitch in Google’s social networking platform – Google+ – gave developers access to private user data between 2015 and 2018. On the surface this looks similar to the Cambridge Analytica scandal that caused Facebook so much trouble earlier this year.

In this case Google identified the glitch internally and, having resolved it, concluded the security vulnerability hadn’t been exploited. Because of this Google didn’t see any need to disclose it and that’s one of the most contentious aspects of this story, especially since it’s alleged that Google was reluctant to do so for fear of reputational damage and regulatory scrutiny.

In a blog post, Google VP of Engineering Ben Smith revealed the company began Project Strobe at the start of this year, which seems to have been designed, at least in part, to look into this glitch. Somewhat side-stepping the issue, Smith found ‘There are significant challenges in creating and maintaining a successful Google+ product that meets consumers’ expectations.’ As a result, they’re shutting it down.

This is a tad disingenuous since Google+ looked dead in the water soon after its 2011 launch and it seems likely that Google only kept it going to avoid the embarrassment of openly admitting it’s rubbish at social networking. The post did eventually get to the specifics of the glitch, stating the following:

  • Users can grant access to their Profile data, and the public Profile information of their friends, to Google+ apps, via the API.
  • The bug meant that apps also had access to Profile fields that were shared with the user, but not marked as public.
  • This data is limited to static, optional Google+ Profile fields including name, email address, occupation, gender and age. (See the full list on our developer site.) It does not include any other data you may have posted or connected to Google+ or any other service, like Google+ posts, messages, Google account data, phone numbers or G Suite content.
  • We discovered and immediately patched this bug in March 2018. We believe it occurred after launch as a result of the API’s interaction with a subsequent Google+ code change.
  • We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API.
  • We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.
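The bug class Google describes, an API returning fields that were shared with the user but not marked public, can be sketched in a few lines. This is our own invented model for illustration; every name and data structure below is hypothetical and has nothing to do with Google's actual implementation.

```python
# Hypothetical sketch of the bug class described above: an API that should
# return only PUBLIC profile fields, but instead returns every field the
# requesting user can see (public OR merely shared with them).
# All names here are invented for illustration; this is not Google's code.

PROFILE = {
    "name":       {"value": "Alice", "visibility": "public"},
    "email":      {"value": "alice@example.com", "visibility": "friends"},
    "occupation": {"value": "Engineer", "visibility": "friends"},
}

def fields_for_app(profile, viewer_is_friend):
    """Intended behaviour: expose only public fields to a third-party app."""
    return {k: f["value"] for k, f in profile.items()
            if f["visibility"] == "public"}

def fields_for_app_buggy(profile, viewer_is_friend):
    """Buggy behaviour: filters on what the *viewer* can see, so fields
    shared with the user (but not public) leak through to the app."""
    visible = {"public"} | ({"friends"} if viewer_is_friend else set())
    return {k: f["value"] for k, f in profile.items()
            if f["visibility"] in visible}

print(fields_for_app(PROFILE, viewer_is_friend=True))        # name only
print(fields_for_app_buggy(PROFILE, viewer_is_friend=True))  # also leaks email, occupation
```

The subtlety is that both functions look reasonable in isolation; the buggy one applies the viewer's visibility rules where the app's (public-only) rules should apply, which is why such a flaw can sit unnoticed for years.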

The last bullet is critical and Google seems to be counting on it for mitigation, to pre-empt the kind of outcry and subsequent regulatory torture faced by Facebook. This position seems to be receiving some sympathy, and it looks like some people actually still use the service. At the time of writing we have seen no reports of further adverse consequences for Google.


Google fights back against EU plans to impose its regulations on rest of world

Today the European Court of Justice will hear a case whose outcome will impact the global digital economy: does the European Union have the right to impose its own data protection and privacy standards on everyone else?

The one-day hearing has been brought about by the French data protection watchdog, CNIL, pressing for Google to extend the ‘right to be forgotten’ ruling to all of its domains. When such a request is made and accepted, Google removes the content from search results in the relevant domain (e.g. .fr in France), but also when users from that country search through other domains (e.g. .com or .co.uk). CNIL argues the content should be removed from all domains, irrespective of where the user is based.

“This case could see the right to be forgotten threatening global free speech,” said Thomas Hughes, Executive Director of free speech advocacy group Article 19. “European data regulators should not be allowed to decide what Internet users around the world find when they use a search engine. The CJEU (European Court of Justice) must limit the scope of the right to be forgotten in order to protect the right of Internet users around the world to access information online.”

While it might not seem like the most damning of cases, the ripples from this ruling could quickly become turbulent waves. Google and numerous other free speech advocacy groups argue this is simply France, and the European Union, pursuing their own form of censorship, imposing their own standards on other nations around the world. Should the judges rule in favour of CNIL, precedent would be set, and precedent can be very dangerous.

If the European Union can force other countries into complying with its regulations, why shouldn’t others?

“If European regulators can tell Google to remove all references to a website, then it will be only a matter of time before countries like China, Russia and Saudi Arabia start to do the same,” said Hughes. “The CJEU should protect freedom of expression not set a global precedent for censorship.”

The question these judges have to answer is a relatively simple one on the surface: should governments and regulators have influence only over those who live in their jurisdiction, or should they be afforded power over everyone else as well? For us, the answer is just as simple: no, they should not.

The whole concept of the CNIL argument is contradictory and patronising; it’s a form of digital colonialism, with France assuming it is the moral, ethical and political authority on such matters. If China or Russia were pressing for their rules to be imposed on the international stage, there would be uproar. Of course, the rules in these countries are backwards, though the principle remains the same. France should not be allowed to dictate to other countries around the world.

This is another example of globalisation trends working against the consumer. Companies like Google make use of the grey areas and cracks between the legislative and regulatory regimes of different countries, taking advantage of lighter-touch regulation in some while remaining out of reach of more interventionist authorities elsewhere. The absence of an international code or ruling authority offers the internet players a blank rule book and encourages lawyers to look for loopholes to ignore regulations in more privacy-sensitive countries. That said, the will of one nation, or a dozen, or 28, should not be imposed on the rest of the world.

For Telecoms.com, the decision is a simple one; France should be told to govern its own country and not get involved in jurisdictions which do not concern it. The precedent set would be far too dangerous.

It turns out regulating the data economy is really hard

New data regulations may well define the economy and society over the next few decades, and they are still far from perfect. But before you get too critical, you have to realise just how complicated this thankless job can actually be.

An interesting question to ask is whether privacy should be a right protected at all costs by regulation, or whether rule makers should allow the user to trade his or her privacy for benefits. This is the foundation of some information businesses, Facebook being a good example, as targeted advertisements allow for the delivery of free services. On a more simplistic level, free TV channels such as ITV have been doing this for years without the hyper-targeted platform.

If data is the oil of the 21st century, rule makers need to figure out how personal information can be used without inhibiting privacy.

“You have to remember it is a very tricky balance to strike,” said Jocelyn Paulley, Director at law firm Gowling WLG. “On the one hand regulators do want to take a flexible approach which allows for innovation, but they do also have the responsibility to create rules which protect the right to privacy, it is a human right after all.”

As Paulley points out, it’s a little more complicated than simply writing down rules and punishments. The issues arise when you try to predict the future. There are so many different paths technologists and innovators are heading down; how do you possibly write iterations for all the different possible outcomes? Regulators certainly don’t have the manpower to undertake such tasks, and they most likely don’t have the competence either.

Today’s approach is to apply flexibility in the rules, while also listening to the community about what developments are likely to emerge in the future. While this leaves grey areas, Paulley highlights there is little choice at the moment. The breadth of development in the technology world means almost-theoretical laws are written before being applied to specific use cases. Interpretation does create complications, though the last thing regulators want to be (despite doing an excellent impression at times) is a speed bump to progress; sometimes the grey areas just have to be accepted as the lesser of two evils.

“Part of the complications in the UK are that we are a common law society, not a constitutional one,” said Paulley. “At European level, GDPR has been written to allow for future developments and for member states to localise some rules.”

Will this allow for privacy to be treated as a commodity? Perhaps, but more needs to be done to educate the consumer.

This is perhaps the touchiest aspect of data privacy. Regulators might well be open to the idea of users trading their privacy for benefits, but rules exist to make sure consumers are not abused for their ignorance. Understanding the data economy is incredibly difficult, made thornier by the complexities of terms and conditions. Technology companies muddy the waters intentionally, so regulators cannot offer too much flexibility or the protections for the consumer simply are not there.

If you were to ask the consumer today whether they would trade privacy for free services, they would probably say no until you start laying out the bill. £10 for Facebook, £2 a month for Google, no free Spotify, 50p for the Evening Standard each commute, £2.50 for every game you download on your smartphone, £50 a year for email and cloud storage, 1p a message and 10p a minute for calls on WhatsApp. It starts to add up and suddenly trading privacy becomes an attractive option, but consumers lack an understanding of the mechanics of the digital economy.
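To show how that bill adds up, here is a rough tally of the prices listed above. The frequencies (commutes per month, message volumes) are our own assumptions for a typical user, so treat the total as illustrative rather than definitive.

```python
# Rough monthly bill (GBP) if the free services above were paid for instead.
# Usage frequencies are assumed for illustration, not taken from any study.
monthly_costs = {
    "Facebook": 10.00,
    "Google": 2.00,
    "Evening Standard (0.50 x ~20 commutes)": 0.50 * 20,
    "Email and cloud storage (50.00 / year)": 50.00 / 12,
    "WhatsApp messages and calls (est.)": 5.00,
}
total_per_month = sum(monthly_costs.values())
print(f"~£{total_per_month:.2f} per month, ~£{total_per_month * 12:.2f} per year")
```

Even before Spotify and app downloads are counted, a plausible tally lands north of £30 a month, which is the point at which "free in exchange for data" starts to look like a bargain to most consumers.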

Part of the European Commission’s General Data Protection Regulation (GDPR) is geared at creating a greater level of transparency. With the Cambridge Analytica scandal, part of the reason the consequences were so great for Facebook was due to a lack of transparency. Data science has advanced at a remarkable rate over the last few years, especially when it comes to targeted advertising platforms, but Facebook hadn’t taken the user on the journey with them, explaining how personal information was being used. When the curtain was pulled back, the sight of CEO Mark Zuckerberg manically pulling levers on the big data machine while frantically shouting “Senator, we sell ads” shocked the general public.


By forcing technology companies to collect opt-in consent from consumers, regulators are also ensuring the consumer receives an education in what that consent actually means. Companies now have to state explicitly what personal data will actually be used for. Paulley highlighted that the hope is that, by becoming more transparent, these companies will earn more consumer trust, fuelling the data economy. However, there is a risk that once users understand how the machine works they will opt out of the system. This is perhaps one of the reasons companies have not been so forthcoming about the dark arts of data science: the fear of a negative reaction.

This fear has now become a reality at Facebook. By concealing the dark arts of data science, Facebook broke that trust. It had advanced data science so far without taking the consumer on the same journey that the reality of progress was scary. Campaigns such as #DeleteFacebook on Twitter emerged, making the consequences of privacy failure very real. Perhaps this was a watershed moment which will ensure companies do not resist the rules and instead promote transparency themselves; when you are caught out (and it always happens eventually) the consequences are just as bad, if not worse.

GDPR has some promising areas, but there are aspects, such as criminal screening in recruitment, which need more clarity in the future. Flexibility is required to promote innovation, but rigidity to safeguard the consumer. Regulators haven’t got it right so far, but that isn’t entirely unexpected. Writing rules for the data economy is uncharted territory, and it’s very complicated.

Infographic: Is privacy a right or a commodity?

With the digital economy leaning more heavily on user openness and sharing personal information, you have to ask how privacy should be regulated.

On one hand you have those who want to protect privacy at all costs. That is perfectly reasonable, but it does make it difficult for certain aspects of the digital economy to work effectively. Most publications, for example, now offer free content to the user, but the value exchange is personal information which can be used to build advertising platforms.

In pushing for hardcore privacy protections in regulation and legislation, you have to wonder whether this business model could operate effectively. GDPR has caused all sorts of issues for some organisations, and this is only the tip of the iceberg for privacy reforms.

If you asked consumers to pay instead of offering information as a value exchange, they might not be too happy; free has become the norm nowadays. So is regulating stringently to protect privacy the right thing to do when the consumer might be happy to trade privacy for benefits?

We do not know the answer to this question, so we asked Telecoms.com readers for their input.

Privacy Infographic

Feel like you’re being watched? Probably Google violating privacy rights

More often than not we’re writing positively about Google, but the ‘don’t be evil’ company has been caught out tracking smartphone locations even when the user has opted out.

An investigation by the Associated Press, corroborated by researchers at Princeton University, found several Google services on Android and iOS devices have been storing users’ location data even when the individual has set privacy settings to remain invisible. As privacy and the right to access personal data increasingly become hot topics, Google might have stepped on a bit of a PR and legal landmine.

Generally Google is quite upfront about discussing privacy and location enablement. It has faced various fines over the years for data-dodginess and is even facing a European Commission investigation over allegedly coercing users into opting in to various services. This episode, though, is potentially either an example of extreme negligence or of illegally misleading the consumer. Neither explanation is something Google execs would want to be associated with.

One of the issues here is the complexity of getting off the grid. Although turning location tracking off stops Google from adding location data to your account’s timeline, leaving ‘Web & App Activity’ on allows Google to collect other location markers.
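The interplay between those two settings can be modelled in a few lines. This is a deliberately simplified, hypothetical model of our own (the class and setting names are invented, not Google's), showing how a "location off" toggle can still leave location-stamped records flowing into a second store.

```python
# Simplified, hypothetical model of the settings interplay described above:
# turning off the location-history toggle stops the timeline, but a separate
# activity setting still captures location-stamped entries.
# Invented names for illustration; this is not Google's code.

class Account:
    def __init__(self, location_history, web_app_activity):
        self.location_history = location_history
        self.web_app_activity = web_app_activity
        self.timeline = []      # what the user thinks the toggle controls
        self.activity_log = []  # what keeps recording regardless

    def record_event(self, event, location):
        if self.location_history:
            self.timeline.append((event, location))
        if self.web_app_activity:
            # Location still captured here even with location history off
            self.activity_log.append((event, location))

acct = Account(location_history=False, web_app_activity=True)
acct.record_event("weather search", "51.5N,0.1W")
print(len(acct.timeline), len(acct.activity_log))  # 0 1
```

The user sees an empty timeline and reasonably concludes they are invisible, while a differently named setting keeps collecting, which is precisely the "toggle that does not completely work" complaint quoted below.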

We mentioned before that this is either negligence or illegal activity, but perhaps it is just another example of an internet giant taking advantage of the fact that not everyone is a lawyer. The small print is often the best friend of Silicon Valley. Few would know about this little trick from the Googlers, which allows them to appear the data privacy hero while simply sneaking in through the slightly ajar window in your kitchen.

“When Google builds a control into Android and then does not honour it, there is a strong potential for abuse,” said Jesse Victors, Software Security Consultant at Synopsys.

“It is sometimes extremely important to keep one’s location history private; such as visiting a domestic violence shelter, for example. Other times you may simply wish to opt out of data collection. It’s disingenuous and misleading to have a toggle switch that does not completely work. This, and other examples before it, are one of the reasons why my phone runs LineageOS, a Google-free fork of Android.”

On its support page, Google states users can switch off location services for any of its services at any time, though this would obviously impact the performance of some. The Maps application, for example, cannot function without it, and tracks user movements by the minute once switched on. With such opportunity for abuse, Google introduced pause features for some of its apps, allowing the user to become invisible for an undefined period of time.

The relationship with the user, and the concept of trust, is critical to Google. Revenues are generated by creating free services and implementing advertising platforms within them, and to remain relevant Google needs consumer data to improve its applications. Without constant upgrades and fine-tuning, Google could not maintain the dominant position it enjoys today.

Collecting this data requires trust. The user must trust that Google is not mishandling the data it acquires, and that it respects the user’s right to privacy. Without this element of trust between the user and Google, the company could not acquire the critically important insight. With this revelation, Google has put a dent in its own credibility and damaged its relationship with the user.

The impact on Google overall will of course be limited. There are too many good stories to drown out the negative, and ultimately the user needs Google. Such is the importance of Google’s services to the digital economy, or perhaps it is better worded as a lack of effective enough alternatives, that we suspect few users will allow this invasion of privacy to affect their daily routines.

This is not supposed to be any form of validation for the contradictory ‘don’t be evil’ business, but more a sad truth of today.

Should privacy be treated as a right to protect stringently, or a commodity for users to trade for benefits?


Trump administration doubles down on killing already dead net neutrality

An Ajit Pai-led FCC trampled all over Tom Wheeler’s net neutrality legacy, though a new filing from the Trump administration to the US Supreme Court is looking to crush the final remains into dust.

The last remnant of the net neutrality rules is a 2016 Court of Appeals for the District of Columbia ruling upholding the reclassification of internet service providers as common carriers (originally passed in 2015), one of Wheeler’s final acts as Chairman of the FCC. However, lawyers are now arguing that the ridiculously named ‘Restoring Internet Freedom’ order has so changed the regulatory and commercial environment that the courts should ditch the final footnotes of Tom Wheeler’s FCC tenure.

“The petitions for writs of certiorari seek review of the court of appeals’ decision upholding the 2015 Order,” the filing states. “That decision does not warrant this Court’s review because the FCC has now issued a new order that supersedes the 2015 Order and repeals its conduct rules. In light of that development, questions concerning the procedural and substantive validity of the 2015 Order lack continuing practical significance.”

What is worth noting is that this is not the first action against the 2015 ruling. Lobby groups across the US are trying to tackle the reclassification and launched their own appeal to the US Supreme Court last September, claiming the FCC exceeded its authority and didn’t do enough to justify its decision. While the rules themselves have been erased, there is a bigger-picture motive for the industry.

As it stands, telcos are perfectly within their rights to wander down the route of zero-rating offers or even venture into the land of paid prioritization of internet traffic. Individual states are attempting to introduce their own legislation, but lobby groups and Republican politicians are doing their best to stall the progress of those bills. So why bother taking down this legislation when it has no impact on today?

The answer is simple: seesaw politics. Just as Trump’s administration erased the progress made by Tom Wheeler, a Democrat-leaning FCC could well do the same, using the legal classification of telcos as common carriers as the foundation for another assault promoting net neutrality rules. If the telco industry destroys the classification ruling, future administrations will find it much more difficult to undo Pai’s work.

This is essentially the way politics works. An idea will be contested on either side of the aisle, with the controlling party putting forward its version. This will continue until something more contentious appears and politicians have a fresher argument to pretend to care about, leaning on the emotions of the general public to earn PR points. Few of these politicians will care about the issues at hand, and fewer will actually understand them; these are shallow caricatures of human beings who will do anything to boost their polling numbers and popularity ratings.

The industry won when it came to overturning net neutrality rules under Pai’s guidance in the FCC, and now it is trying to ensure future governments have few options to reinstate consumer protections which were implemented through the rules. The long-game is underway, and it might not be too long before people start asking who Tom Wheeler even was.

Indian report on data protection leaves consumers open to abuse

The Indian Government has released a paper to address the inadequacies in data protection and privacy legislation, proposing some ‘interesting’ exemptions which leave the framework open to abuse.

The purpose of the paper is simple: assess an area of the Indian digital economy which is under-developed, and make the relevant recommendations. The investigation has been burrowing away for months, and follows a global reaction to some very public data abuses. The report is aptly timed, and should provide a framework to protect the Indian consumer as the world becomes increasingly connected. That is the theory, of course.

As you would expect, there are some very good things in the report. There are restrictions on what personal information can be gathered by internet companies, large and small, and justification for collection must be present: information is to be collected for ‘clear, specific and lawful’ purposes, addressing the wild-west approach to the data economy across the country. Like GDPR in the European markets, the aim seems to be to place some restrictions on the free-flowing and potentially abusive relationship between the internet giants and the consumer.

Other interesting aspects include the right to be forgotten, perhaps giving the consumer more ownership of their own personal information, explicit opt-in consent for certain categories of data (that which is deemed sensitive), and data localisation. The investigators do seem to have leant on lessons learned in Europe with GDPR, though the localisation requirements could be deemed rather more nefarious.

This is the suspect part of the report: there appear to be a number of exemptions from any potential new laws for government agencies and offices. Data may be processed without consent or knowledge if this is considered necessary for any function of Parliament or the State. The language is hazy and ill-defined, allowing plenty of wiggle-room. The whole situation creates an opportunity for abuse, and as we have seen on numerous occasions around the world, governments can rarely be trusted to act without accountability.

Lazy definitions and localised data-residency rules will leave some of the larger players in the digital economy in a dubious position. Resistance to requests for information, or appeals to courts, would have to be made locally in India. Some internet companies can currently resist government requests because of the way data is stored (think of Microsoft’s battle with the US government over information stored on one of its Irish servers), theoretically adding an extra layer of protection for the consumer. This set-up would potentially leave the Indian consumer quite open to abuse.

For Save Our Privacy, an Indian data protection advocacy group, poor definitions and guidance are not the only concerns; Indian spooks are getting too much freedom as well.

“There is a dire need for surveillance law reform in India,” the group said in a statement. “It was our hope that this effort would provide a comprehensive framework overhauling surveillance and interception in India – in consonance with the international standards on necessary and proportionate principles, along with providing proper judicial scrutiny. However, the report and bill does not seem to provide substantive changes in the surveillance regime in India.”

The need to address data protection and privacy laws is clearly evident throughout the world. Numerous economies are still reliant on regulation and legislation written for another era; think about how much society has changed over the last five years. Any attempt to create an environment suitable for today’s digital normality should be applauded, but the Indian government has not got this one right.

Governments have shown they cannot be trusted with unmonitored or unaccountable access to data and communications networks. The recommendations in this report, notably those concerning government exemptions, are too loosely defined, leaving abundant opportunity for abuse. This cannot be a situation which is allowed to develop.

ICO report shows UK is starting to take privacy and data protection seriously

The UK Information Commissioner’s Office has released its annual report for 2017/18, which suggests the UK is starting to adopt the right attitudes towards privacy and data protection.

Privacy and data protection are areas of the technology world which everyone seems to care deeply about, but few seem willing to do anything about. Consumers are constantly shocked by the lack of protection offered to their personal information by leaky organizations, but the same consumers are always more than willing to hand over data when it means avoiding payment. Privacy has seemed to be a bugbear of convenience for the consumer, but perhaps this report indicates these attitudes are changing.

“This is an important time for privacy rights, with a new legal framework and increased public interest,” said UK Information Commissioner Elizabeth Denham. “Transparency and accountability must be paramount, otherwise it will be impossible to build trust in the way that personal information is obtained, used and shared online.”

Denham and her team do of course have a challenging task. The mission statement of the Information Commissioner’s Office lists some very lofty goals: increasing the public’s trust and confidence in how data is used, for instance, or improving standards of information rights practice across industry. Winning this battle will rely not only on companies taking their responsibilities more seriously, but also on consumers realising it is their duty to manage their own personal data. Sceptics would argue neither of these ideas is being taken seriously at the moment, though optimists might point towards the statistics.

The report claims 235,672 calls were received by the ICO’s helpline, an increase of 24.1% year-on-year, while 30,469 live chats were requested, up 31.5%. The caseload from 31 March 2017 to the same date in 2018 increased from 115 to 3,526. Over the course of the year, 21,019 calls were focused on data protection, a 15% increase from 2017, with most people concerned about subject access (39%), the disclosure of data (16%), its accuracy (11%) and securing the right to prevent processing (9%). The sceptics might still have a case that privacy and data protection are not being taken seriously, but the fact that enquiries and complaints are heading upwards suggests the general public and businesses are starting to acquire a new appreciation of how the digital economy works, as well as the risks.

On the data breach front, the number of self-reported cases is also on the up: 3,172 incidents were reported to the ICO over the course of 2017/18, a 29.6% increase. The majority of these cases did not result in a fine, as there is wiggle room if a company is able to demonstrate a suitably stringent approach to security, though healthcare is proving to be the most porous sector in the UK, accounting for 36% of the incidents.

Security has seemingly never been a top priority for many organizations, except when trying to generate PR points, though the same could be said of the consumer. The last 12-18 months have seen a change in attitude towards personal information, with consumers more sensitive about giving information out freely, though there does seem to be a lack of understanding of how terms and conditions work in the app economy. How many users realise that by playing Clash of Clans they are effectively handing over ownership of a lot of personal information?

Awareness is only one area of the industry which needs work, and as the ICO also points out, there are still a few risks on the horizon. There is still uncertainty over the final wording of the upcoming Data Protection Bill and its enactment, while the operational changes necessary to regulate under GDPR will cause issues, as will introducing a new funding regime for data protection work.

A lot is changing on the regulatory front, but the worrying question about bureaucrats still remains: are they able to keep up with the pace and sheer breadth of change which is constantly taking place in the technology world?