US tech fraternity pushes its own version of GDPR

The technology industry might enjoy light-touch regulatory landscapes, but change is on the horizon with what appears to be an attempt to be the master of its own fate.

In an open letter to senior members of the US Congress, 51 CEOs from the technology and business community have asked for a federal law governing data protection and privacy. It appears to be a push to gain consistency across the US, removing the ability of aggressive and politically ambitious Attorneys General and Senators to mount their own local crusades against the technology industry.

Certain aspects of the framework proposed to the politicians are remarkably similar to GDPR, such as the right for consumers to control their own personal data, seek corrections and even demand deletion. Breach notifications could also be introduced, though the coalition of CEOs is calling for the FTC to be the tip of the spear.

Interestingly enough, there are also calls to remove ‘private right of action’, meaning only the US Government could take an offending company to court over violations. In a highly litigious society like the US, this would be a significant win for any US corporation.

And while there are some big names attached to the letter, there are some notable omissions. Few will be surprised that Facebook’s CEO Mark Zuckerberg has not signed a letter requesting a more comprehensive approach to data privacy, though Alphabet, Microsoft, Uber, Verizon, T-Mobile US, Intel, Cisco and Oracle are also absent.

“There is now widespread agreement among companies across all sectors of the economy, policymakers and consumer groups about the need for a comprehensive federal consumer data privacy law that provides strong, consistent protections for American consumers,” the letter states.

“A federal consumer privacy law should also ensure that American companies continue to lead a globally competitive market.”

CEOs who have signed the letter include Jeff Bezos of Amazon, Alfred Kelly of Visa, Salesforce’s Keith Block, Steve Mollenkopf of Qualcomm, Randall Stephenson of AT&T and Brian Roberts of Comcast.

Although it might seem unusual for companies to be requesting a more comprehensive approach to regulation, the overarching ambition seems to be one of consistency. Ultimately, these executives want one consolidated approach to data protection and privacy, managed at a federal level, as opposed to a potentially fragmented environment with the states applying their own nuances.

It does appear the technology and business community is attempting to take some control over its own fate. As much as these companies would like a light-touch regulatory environment to continue, that outcome is not on the table. The world is changing, but by consolidating this evolution into a single agency, lobbying becomes much more effective, and cheaper.

The statement has been made through Business Roundtable, a lobby group for larger US corporations, requesting a national consumer privacy law which would pre-empt any equivalent from the states or local government. Definitions and ownership rules should be modernised, and a risk-orientated approach to data management, storage and analysis is also being requested.

Ultimately, this looks like a case of damage control. There seems to be an acceptance of regulation overhaul, however the CEOs are attempting to control exposure. In consolidating the regulations through the FTC, punishments and investigations can theoretically only be brought forward through a limited number of routes, with the companies only having to worry about a single set of rules.

Consistency is a very important word in the business world, especially when it comes to regulation.

What we are currently seeing across the US is aggression towards the technology industry from almost every legal avenue. Investigations have been launched by federal agencies and state-level Attorneys General, while lawsuits have also been filed by non-profits and law firms representing citizens. It’s a mess.

Looking at the Attorneys General, a couple do seem to be attempting to make a name for themselves in the public domain. This might well be the first step towards higher political office. For example, it would surprise few if New York Attorney General Letitia James harbours larger political ambitions, and striking a blow against Facebook on behalf of consumers would certainly earn positive PR points.

Another interesting element is the fragmentation of the regulations governing data protection and privacy. For example, there are more aggressive rules in place in New York and California than in North Carolina and Alaska. In California it becomes even more fragmented; just look at the work the City of San Francisco is undertaking to limit the power of facial recognition and data analytics. Those rules would effectively make it impossible to implement the technology, whereas in the State of Illinois technology companies only have to seek explicit consent from the consumer.

Inconsistency creates confusion and non-compliance. Confusion and non-compliance cost a lot of money through legal fees, restructuring, product customisation and fines.

Finally, from a PR perspective, this is an excellent move. The perception of Big Business at the moment is that it does not care about the privacy rights of citizens. There have been too many scandals and data breaches for anyone to take claims of caring about consumer privacy seriously. By suggesting a more comprehensive and consistent approach to privacy, Big Business can more legitimately claim it is the consumer champion.

A more consistent approach to regulation helps the government, consumers and business, but this is a move from the US technology and business community to control its own fate. This is a move to decrease the power and influence of the disruptive Attorneys General and make the regulatory evolution more manageable.

Momentum is gathering pace towards a more comprehensive and contextually relevant privacy regulatory landscape, and it might not be too long before a US version of Europe’s GDPR is introduced.

Is $170 million a big enough fine to stop Google privacy violations?

Another week has passed, and we have another story focusing on privacy violations at Google. This time it has cost the search giant $170 million, but is that anywhere near enough?

The Federal Trade Commission (FTC) has announced yet another fine for Google; this time the YouTube video platform has been caught breaking privacy rules. An investigation found YouTube had been collecting and processing the personal data of children without seeking permission from the individuals or their parents.

“YouTube touted its popularity with children to prospective corporate clients,” said FTC Chairman Joe Simons. “Yet when it came to complying with COPPA [the Children’s Online Privacy Protection Act], the company refused to acknowledge that portions of its platform were clearly directed to kids. There’s no excuse for YouTube’s violations of the law.”

Once again, a prominent member of the Silicon Valley society has been caught flouting privacy laws. The ‘act now, seek permission later’ attitude of the internet giants is on show, and there doesn’t seem to be any evidence of these incredibly powerful and monstrously influential companies respecting laws or the privacy rights of users.

At some point, authorities are going to have to ask whether these companies will ever respect these rules on their own, or whether they have to be forced. If there is a carrot and stick approach, the stick has to be sharp, and we wonder whether it is anywhere near sharp enough. The question which we would like to pose here is whether $170 million is a large enough deterrent to ensure Google does something to respect the rules.

Privacy violations are nothing new when it comes to the internet. This is partly down to the flagrant attitude of those left in positions of responsibility, but also the inability of rule makers to keep pace with the eye-wateringly fast progress Silicon Valley is making.

In this example, rules have been introduced to hold Google accountable, however we do not believe the fine is anywhere near large enough to ensure action.

Taking 2018 revenues at Google, the $170 million fine represents 0.124% of the total revenues made across the year. Google made on average, $370 million per day, roughly $15 million per hour. It would take Google just over 11 hours and 20 minutes to pay off this fine.

Of course, what is worth taking into account is that these numbers are 12 months old. Looking at the most recent financial results, revenues increased 19% year-on-year for Q2 2019. Over the 91-day period ending June 30, Google made $38.9 billion, or $427 million a day, $17.8 million an hour. It would now take less than 10 hours to pay off the fine.
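For anyone wanting to sanity-check the arithmetic, a few lines of Python reproduce it. This is a rough sketch; the revenue figures are the approximate ones quoted above, not audited accounts, and rounding explains small differences from the in-text numbers:

```python
# Rough sketch of the fine-vs-revenue arithmetic quoted in the text.
# Revenue figures are approximate, taken from the article, not audited accounts.

FINE = 170e6  # $170 million FTC fine

def hours_to_pay(annual_revenue, days=365):
    """How many hours of revenue the fine represents at a given run-rate."""
    per_hour = annual_revenue / days / 24
    return FINE / per_hour

# 2018: Google made roughly $136.8bn across the year
print(f"Fine as share of 2018 revenue: {FINE / 136.8e9:.3%}")   # ~0.124%
print(f"Hours to pay off at the 2018 run-rate: {hours_to_pay(136.8e9):.1f}")

# Q2 2019: $38.9bn over the 91-day quarter
q2_per_hour = 38.9e9 / 91 / 24
print(f"Hours to pay off at the Q2 2019 run-rate: {FINE / q2_per_hour:.1f}")
```

Whichever way you round the run-rate, the fine amounts to less than half a day of revenue.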

Fines are supposed to act as a deterrent, a call to action to avoid receiving another one. We question whether these numbers are relevant to Google and if the US should consider its own version of Europe’s General Data Protection Regulation (GDPR).

This is a course which would strike fear into the hearts of Silicon Valley’s leadership, as well as pretty much every other company with any form of digital presence. It was hard work to become GDPR compliant, though it was necessary. Those who break the rules are now potentially exposed to a fine of €20 million or 4% of annual global turnover, whichever is greater. British Airways was recently fined £183 million for GDPR violations, a figure representing just 1.5% of total revenues thanks to co-operation from BA during the investigation and the fact it owned up.

More importantly, European companies are now taking privacy, security and data protection very seriously, though the persistent presence of privacy violations in the US suggests a severe overhaul of the rules and punishments is required.

Of course, Google and YouTube have reacted to the news in the way you would imagine. The team has come, cap in hand, to explain the situation.

“We will also stop serving personalized ads on this content entirely, and some features will no longer be available on this type of content, like comments and notifications,” YouTube CEO Susan Wojcicki said in a statement following the fine.

“In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and we’ll also use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.”

The appropriate changes have been made to privacy policies and the way in which ads are served to children, though amazingly, the blog post does not feature the words ‘sorry’, ‘apology’, ‘wrong’ or ‘inappropriate’. There is no admission of fault, simply a statement that suggests they will be compliant with the rules.

We wonder how long it will be before Google is caught breaking privacy rules again. Of course, Google is not alone here; if you cast the net wider to include everyone from Silicon Valley, we suspect there will be another incident, investigation or fine to report on next week.

Privacy rules are not acting as a deterrent nowadays. These companies have simply grown too large for the fines imposed by agencies to have a material impact. We suspect Google made much more than $170 million through the adverts served to children over this period. If the fine does not exceed the benefit, will the guilty party stop? Of course not; Google is designed to make money, not serve the world.

Europe set to join the facial recognition debate

With more authorities demonstrating they cannot be trusted to act responsibly or transparently, the European Commission is reportedly on the verge of putting the reins on facial recognition.

According to reports in The Financial Times, the European Commission is considering imposing new rules which would extend consumer rights to include facial recognition technologies. The move is part of a greater upheaval to address the ethical and responsible use of artificial intelligence in today’s digital society.

Across the world, police forces and intelligence agencies are deploying technologies which pose a significant risk of abuse, without public consultation or processes to create accountability or justification. There are of course certain nations who do not care about the privacy rights of citizens, though when you see the technology being implemented for surveillance purposes in the likes of the US, UK and Sweden, states where such rights are supposedly sacred, the line starts to blur.

The reasoning behind the implementation of facial recognition in surveillance networks is irrelevant; without public consultation and transparency, these police forces, agencies, public sector authorities and private companies are completely disregarding the citizens’ right to privacy.

These citizens might well support such initiatives, electing for greater security or consumer benefits over the right to privacy, but they have the right to be asked.

What is worth noting is that this technology can be a driver for positive change in the world when implemented and managed correctly. Facial scanners are speeding up the immigration process in airports, while Telia is trialling a payment system using facial recognition in Finland. When deployed with consideration and the right processes, there are many benefits to be realised.

The European Commission has not confirmed or denied the reports to Telecoms.com, though it did reaffirm its on-going position on artificial intelligence during a press conference yesterday.

“In June, the high-level expert group on artificial intelligence, which was appointed by the Commission, presented the first policy recommendations and ethics guidelines on AI,” spokesperson Natasha Bertaud said during the afternoon briefing. “These are currently being tested and going forward the Commission will decide on any future steps in-light of this process which remains on-going.”

The Commission does not comment on leaked documents and memos, though reading between the lines, it is on the agenda. One of the points the 52-person expert group will address over the coming months is building trust in artificial intelligence, while one of the seven principles presented for consultation concerns privacy.

On the privacy side, parties implementing these technologies must ensure data ‘will not be used to unlawfully or unfairly discriminate’, as well as setting systems in place to dictate who can access the data. We suspect that in the rush to trial and deploy technology such as facial recognition, few systems and processes to drive accountability and justification have been put in place.

Although these points do not necessarily cover the right for the citizen to decide, tracking and profiling are areas where the group has recommended the European Commission consider adding more regulation to protect against abuses and irresponsible deployment or management of the technology.

Once again, the grey areas are being exploited.

As there are only so many bodies in the European Commission or working for national regulators, and technology is advancing so quickly, there is often a void in the rules governing the newly emerging segments. Artificial intelligence, surveillance and facial recognition certainly fall into this chasm, creating a digital wild-west landscape where those who do not understand the ‘law of unintended consequence’ play around with new toys.

In the UK, it was revealed that several private property owners and museums were using the technology for surveillance without telling consumers. Even more worryingly, some of this data has been shared with police forces. Information Commissioner Elizabeth Denham has already stated her agency will be looking into the deployments and will attempt to rectify the situation.

Prior to this revelation, a report from the Human Rights, Big Data & Technology Project attacked a trial from the London Metropolitan Police Force, suggesting it could be found to be illegal should it be challenged in court. The South Wales Police Force has also found itself in hot water after it was found its own trials saw only an 8% success rate.

Over in Sweden, the data protection regulator used powers granted by GDPR to fine a school which had been using facial recognition to monitor the attendance of pupils. The school claimed it had received consent from the students, but as the students are in a dependent position, this was not deemed satisfactory. The school was also found to have substandard processes for handling the data.

Finally, in the US, Facebook is going to find itself in court once again, this time over the implementation of facial recognition software in 2010. A class-action lawsuit has been brought against the social media giant, suggesting the use of the technology was non-compliant under the Illinois Biometric Information Privacy Act.

This is one example where law makers have been very effective in getting ahead of trends. The law in question was enacted in 2008 and demanded companies gain consent before any facial recognition technologies are introduced. This is an Act which should be applauded for its foresight.

The speed in which progress is being made with facial recognition in the surveillance world is incredibly worrying. Private and public parties have an obligation to consider the impact on the human right to privacy, though much distaste has been shown to these principles in recent months. Perhaps it is more ignorance, short-sightedness or a lack of competence, but without rules to govern this segment, the unintended consequences could be compounded years down the line.

Another point worth noting is the gathering momentum to stop the wrongful implementation of facial recognition. Aside from Big Brother Watch raising concerns in the UK, the City of San Francisco is attempting to implement an approval function for police forces, while Google is facing an internal rebellion. Last week, it emerged several hundred employees had signed a petition refusing to work on any projects which would aid the government in tracking citizens through facial recognition surveillance.

Although the European Commission has not confirmed or denied the report, we suspect (or at the very least hope) work is being undertaken to address this area. Facial recognition needs rules, or we will find ourselves in a very difficult position, similar to today.

A lack of action surrounding fake news, online bullying, cybersecurity, supply chain diversity and resilience, or the consolidation of power in the hands of a few has created some difficult situations around the world. Now the Commission and national governments are finding it difficult to claw back the progress of technology. This is one area where the European Commission desperately needs to get ahead of the technology industry; the risk and consequence of abuse is far too great.

European court rules websites are equally responsible for some shared data

If you’ve got Facebook ‘like’ functionality on your website then you could be held responsible for any misuse of user data by the social media giant.

The Court of Justice of the European Union made this judgment as part of an ongoing action brought by a German consumer rights group called Verbraucherzentrale NRW against German clothing e-tailer Fashion ID. It turns out that merely having the ‘like’ button embedded on your site results in personal data being automatically transferred to Facebook for it to use in whatever way it chooses, without the consent or even knowledge of the exploited punter.

Sifting through the legalese it looks like the court has concluded that Fashion ID is responsible for the user data it passes on to Facebook since the only reason it embedded the button in the first place is the commercial benefit it gets from people sharing its stuff on social media. This, in turn, means it must be subject to certain data protection obligations such as at least telling visitors to its site what they’re letting themselves in for.

While the case itself is relatively niche and arcane, it could represent the thin end of the wedge when it comes to data protection and consumer rights online in general. The internet is awash with contraptions, such as cookies, designed to track your every move and feed that data into the cyber hive-mind, all the better to work out how best to entice you into spending cash on stuff you didn’t even know you wanted.

Having said that, it could be the case that, since Cambridge Analytica, the internet has already got the memo, as those ‘like’ buttons seem to be much less common than they were a few years ago. High-profile fines for Facebook and violators of GDPR rules probably mean that website owners have become wary of embedding any old third-party rubbish onto their sites, and rulings such as this should serve as a warning not to slip back into bad habits.

ICO gets serious on British Airways over GDPR

The UK’s Information Commissioner’s Office has swung the sharp stick of GDPR at British Airways, and it looks like the damage might be a £183.39 million fine.

With GDPR inked into the rule book in May last year, the first investigations under the new guidelines will be coming to a conclusion in the near future. There have been several judgments passed in the last couple of months, but this is one of the most significant in the UK to date.

What is worth noting is this is not the final decision; this is an intention to fine £183.39 million. We do not imagine the final figure will differ too much, as the ICO will want to show it is serious, but BA will be given the opportunity to have its voice heard with regard to the amount.

“People’s personal data is just that – personal,” said Information Commissioner Elizabeth Denham.

“When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”

The EU’s General Data Protection Regulation (GDPR) offers regulators the opportunity to fine guilty parties €20 million or as much as 4% of total revenues for the year the incident occurred, whichever is greater. In this case, BA will be fined 1.5% of its total revenues for 2018, with the fine being reduced for several reasons.

In September 2018, user traffic was directed towards a fake British Airways site, with the nefarious actors harvesting the data of more than 500,000 customers. In this instance, BA informed the authorities of the breach within the defined window, co-operated during the investigation and made improvements to its security systems.
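As a rough sketch of the penalty arithmetic: GDPR caps fines at €20 million or 4% of annual turnover, whichever is greater, while the ICO landed on roughly 1.5%. The BA turnover figure below (~£12.2bn) is an approximate assumption for illustration, and currencies are mixed for simplicity:

```python
# Sketch of the GDPR penalty arithmetic. The BA turnover figure is an
# approximate assumption for illustration; currency conversion is ignored.

def gdpr_max_fine(annual_turnover, floor=20e6, rate=0.04):
    """GDPR cap: the greater of the floor (EUR 20m) or 4% of annual turnover."""
    return max(floor, rate * annual_turnover)

ba_turnover = 12.2e9   # assumed annual turnover, GBP
fine = 183.39e6        # the ICO's intended fine, GBP

print(f"Theoretical maximum: {gdpr_max_fine(ba_turnover) / 1e6:.0f}m")
print(f"Fine as share of turnover: {fine / ba_turnover:.1%}")  # ~1.5%
```

The gap between the theoretical maximum and the intended figure is where the mitigating factors, prompt notification and co-operation, appear to have counted.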

While many might have suggested the UK watchdog, or many regulators around the world for that matter, lacks teeth when it comes to dealing with privacy violations, this ruling should put that preconception to rest. This is a weighty fine, which should force the BA management team to take security and privacy seriously; if there is one way to make executives listen, it’s to hit them in the pocket.

This should also be seen as a lesson for other businesses in the UK. Not only is the ICO brave enough to hand out fines for non-compliance, it is mature enough to reduce the fine should the affected organisation play nice. £183.39 million is well below what was theoretically possible and should be seen as a win for BA.

Although this is a good start, we would like to see the ICO, and other regulatory bodies, set their sights on the worst offenders when it comes to data privacy. Companies like BA should be punished when they end up on the wrong side of right, but the likes of Facebook, Google and Amazon have gotten an easy ride so far. These are the companies which have the greatest influence when it comes to personal information, and the ones which need to be shown the rod.

This is one of the first heavy fines implemented in the era of GDPR and the difference is clear. Last November, Uber was fined £385,000 for a data breach which impacted 2.7 million customers and drivers in the UK. The incident occurred prior to the introduction of GDPR, the reason the punishment looks so measly compared to the BA fine here.

The next couple of months might be a busy time in the office of the ICO as more investigations conclude. We expect some heavy fines as the watchdog bares its teeth and forces companies back onto the straight and narrow when it comes to privacy and data protection.

Google’s Sidewalk bet is a nightmare for the privacy conscious

If you’re concerned about whether Google is listening to you through your phone or smart speaker, soon enough you’ll have to worry about lampposts having ears too, at least if you live in Toronto.

For those who have not been keeping up to date with the Canadian tech scene, Google’s Sidewalk Labs is currently working in partnership with Toronto to demonstrate the vision of tomorrow: the smart city. Plans are still being drawn up, though it looks like two neighbourhoods will be created with a new Google campus bang in the middle.

The Master Innovation and Development Plan (MIDP) hopes to create the city of tomorrow and will be governed by Waterfront Toronto, a publicly-funded organisation. In a move seemingly intended to appease the data concerns of Waterfront Toronto, Google has now stated that while all the systems would be run by analysing data, Sidewalk Labs will not disclose personal information to third parties without explicit consent and will not sell personal information.

This is the first bit of insight we’ve had on this initiative for a while. Having secured the project in 2017, Sidewalk Labs has been in R&D mode. The team is attempting to prove the business case and the products, though it won’t be long before work is underway. Assuming of course Google is able to duck and weave through the red-tape which is going to be presented over the next 12-18 months.

The most recent development is a series of white papers addressing numerous topics, from sustainable production plans, mobility, and data protection and privacy to the envisioned use cases. If you have a spare few hours, you can find all the documentation here.

Of course, there are plenty of smart city initiatives around the world, but what makes this one interesting is that the concept of ‘smart’ is being built from the foundations. This is a greenfield project, not brownfield, which makes it substantially easier. Buildings, street furniture and infrastructure can be built with connectivity in mind.

This is the challenge which other cities are facing; let’s take London as an example. Construction on the London Underground system started in 1863, while the London sewage system was plumbed in between 1859 and 1865. The city itself, and its basic layout, was established in 50 AD. Although there are creative solutions to enhance connectivity, most cities were built in the days before most could even conceive of the internet.

The Quayside and Villiers West neighbourhoods will be home to almost 7,000 residents and offer jobs to even more, anchored by the new Google campus. The buildings will offer ‘adaptable’ spaces, including floor plates and sliding wall panels to accelerate renovations and reduce vacancies. It will also be incredibly energy friendly, featuring a thermal energy grid which could heat and cool homes using the natural temperature of the earth.

But onto the areas which most people in the industry will be interested in; the introduction of new technologies and access to data.

High-speed internet connections will be promised to all residents and businesses, intelligent traffic lights and curbs will be deployed to better regulate traffic, smart awnings will be introduced for those into gimmicky technology, and the neighbourhoods will be designed to allow an army of underground delivery robots to function.

Autonomous driving is one technology area which fits perfectly into the greenfield advantage. The complications of creating a landscape for autonomous vehicles in older cities are great, but by building up the regions with connectivity in mind many of these challenges can be averted. Not only can the introduction of self-driving vehicles be accelerated, but ride-sharing (Zipcar) or hailing (Uber) alternatives can be assisted while other options such as e-scooters are more realistic.

Such is the ambition nurtured in the Google business, if there is a crazy idea which can be applied to the smart city concept, Sidewalk Labs have probably factored it into the design and build process.

And now onto the data. This is where the project has drawn criticism as Google does not necessarily have the most glistening record when it comes to data privacy and protection. Small print littered throughout various applications has ensured Google is never too far away from criticism. In fairness, this is a problem which is industry wide, but a cloud of scepticism has been placed over any initiative which has data as the fuel.

The latest announcement from Google/Sidewalk Labs focuses on this very issue. Sidewalk Labs will not sell any personal information, this data will not be used to fuel the advertising mechanisms and it will not disclose this insight to third-parties. Explicit consent would have to be provided in any of these circumstances.

Whether these conditions will be up to the standards defined by Waterfront Toronto remains to be seen. This body has the final say and may choose to set its own standards at a higher or lower level. Anonymisation might be brought into play, as many activists have been pushing for. This is not a scenario which Google would want to see.

While expanding into new services might seem like an attractive idea, if this expansion can be coupled with additional access to data to fuel the Google data machine, it is a massive win for the internet giant. Let’s not forget, everything which Google has done to date (perhaps excluding Loon and the failed Fiber business) has paid homage to the advertising mechanisms.

Google Fi offers interesting data on customer locations, the smart speakers are simply an extension of the core advertising business through a new user interface, and Android allowed Google to place incredibly profitable products as defaults on billions of phones and devices. If Google can start to access new data sets it can offer new services, engage new customers and create new revenues for investors.

Let’s say it can start collecting data on traffic flow, this could become important insight for traffic management and city planners when it comes to adding or altering bus routes. This data could also be used to reduce energy consumption on street lights or traffic lights; if there is no-one there, do they actually need to be on? It could also help retailers forecast demand for new stores and aid the police with their work.

These ideas might not sound revolutionary or that they would bring in billions, but always remember, Google never does anything for free. This is a company which seems to see ideas before anyone else and can monetize them like few others. If Google is paying this much attention to an idea or project, there must be money to be made and we bet there is quite a bit.

But this is where Google is facing the greatest opposition. Because it is so good at extracting insight and value from data, it is one of the companies facing the fiercest criticism. This will become more notable the further afield Google spreads its wings. It seems the world is content with Google sucking value out of personal data when it comes to search engines or mobile apps, but pavements, lampposts and bus stops might be a step too far for some.

Of course, criticism might disappear when jealousy emerges. The hardcore privacy advocates will never rest, but most simply don’t care that much. Privacy violations will of course cause uproar, but if there is a fair trade-off, most will accept Google’s role. If Google can prove these neighbourhoods not only improve the quality of life, but also offer advantages to entertainment and business (for example), this initiative could prove to be very popular with the general public, governments and businesses.

Maine gets tough on telcos over data economy

Maine Governor Janet Mills has signed new privacy rules into law, demanding more proactive engagement from broadband providers in the data-sharing economy.

While the rules tighten up an under-appreciated area of the digital world, they will have their critics. The law targets the companies delivering connectivity to customers, the telcos, rather than the biggest culprits on data protection and privacy rights, the OTTs and app developers.

The rules are applicable to broadband providers in the state, both mobile and fixed, and force a more proactive approach in seeking consent. Telcos will now be compelled to seek affirmative consent from customers before being allowed to use, disclose, sell or permit access to customer personal information, except in a few circumstances.

As is on-trend with privacy rules, the ‘opt-out’ route, used by many to ensure the lazy and negligent are caught in the data net, has been ruled out.

There are also two clauses in the legislation which block off any potentially coercive behaviour from the telcos:

  • Providers will not be allowed to refuse service to a customer who does not provide consent
  • Customers cannot be penalised or offered a discount based on their decision to provide or withhold consent

This is quite an interesting inclusion in the legislation. Other states, California for example, are building rules which will offer freedoms to those participating in the data-sharing economy if the spoils are shared with those providing the data (i.e. the customer), though the second clause removes the opportunity to offer financial incentives or penalties based on consent.

This is not to say rewards will not be offered, however. There is wiggle room here, zero-rating of in-house services or third-party products for example, which undermines the rules somewhat.

It is also worth noting that these rules only pertain to what the State deems personal data. Telcos can continue to monetize data which is not considered personal without seeking affirmative consent, unless the customer has written to the telco to deny it this luxury. Personal data covers the following categories:

  • Web browsing history
  • Application usage history
  • Geolocation
  • Financial
  • Health
  • Device identifiers
  • IP Address
  • Origin and destination of internet access service
  • Content of customer’s communications

What is worth noting is that this is a solution to a problem, but perhaps not the problem which many were hoping would be addressed.

Firstly, the telcos are already heavily regulated, with some suggesting too heavily already. There are areas which need to be tightened up, but this is not necessarily the problem child of the digital era. The second point is the issue we are finding hard to look past: what about the OTTs, social media giants and the app community?

The communications providers do need to be addressed, though the biggest gulf in regulation concerns the OTTs and app developers. These are companies operating in a relatively light-touch regulatory environment and benefiting considerably from it, and there are numerous incidents which suggest they should not be left to operate in such a landscape.

Although it is certainly a lot more challenging to put constraints on these slippery digital players, these companies are perhaps the biggest problem in the data-sharing economy. Maine might grab the headlines here with new privacy rules, which in fairness are suitably strict, but the rule-makers seem to have completely overlooked the biggest problem.

These rules do not add any legislative or regulatory restraints on the OTTs or app developers, therefore anyone who believes Maine is taking a notable step in addressing the challenges of the data-sharing economy is fooling themselves. This is a solution, but not to the question which many are asking.

Ambulance chasers are readying themselves for GDPR assault

While getting a firm ready for the introduction of GDPR was a frantic period, the last 12 months have been a relatively quiet period for the rules. However, that might all be about to change.

At the European Data Protection Summit in London, a few points were raised which should put the fear back into executives. It does appear the ‘sex appeal’ of data protection and privacy has been eroded, but just wait until the summer is over. It might well be dominating the headlines again.

There seem to be four developments bubbling away at the moment, each of which could have a significant impact on the data protection and privacy landscape: the rise of the ambulance chasers, the promise of heavy fines, the UK’s 2018 Data Protection Act and Brexit.

Ditching PPI for GDPR

Although it is not necessarily the most flattering of terms, the ambulance chasers are readying themselves for an assault on the GDPR-negligent.

The Financial Conduct Authority (FCA) has set a deadline of August 29 for consumers to complain about the sale of PPI products in the UK, effectively making redundant all the firms set up to manage complaints on behalf of consumers. Most will evolve, however; the legal world is simply too profitable, and GDPR seems a prime opportunity.

While it might not be the most common practice for the moment, there are certainly examples. Numerous law firms, Hayes Connor Solicitors for example, are already advertising their services over the British Airways data breach, which impacted roughly 400,000 people. The investigation is ongoing, though the financial penalty for this breach could be as much as €918 million.

As more PPI lawyers find themselves with free time on their hands, more will turn their attention to new fields of expertise. Due to the headline-worthy nature of data breaches and privacy violations, as well as the potential consequences for the individual, this is an area primed for legal buzz.

Big fines have been promised

So far, there is only one example of a Data Protection Authority (DPA) swinging the heavy stick of GDPR at a major firm. France’s watchdog fined Google €50 million for numerous offenses, and while there have been other significant breaches over the last few years, most occurred at a time prior to the heavy fines of GDPR.

“Serious fines are coming in the summer, including to some of the big companies,” said Paul Breitbarth, Director of Strategic Research and Regulator Outreach at Nymity. “The DPAs [Data Protection Authorities] are taking this very seriously and so should we.”

The Irish DPA is an example of one regulator taking control of the situation, and quite rightly so. Despite the fact its economy is heavily reliant on the internet giants, the Irish watchdog is Europe’s lead GDPR authority; it should be leading the charge.

In a recent PR defence plea, Commissioner for Data Protection Helen Dixon pointed out the authority has already opened 54 investigations, 19 of which are cross-border. According to Breitbarth, we should expect some pretty heavy fines, which will also bring data protection and privacy back into public debate.

One of the big challenges facing the industry is apathy from the general public and the lack of any considered concern from executives. Enforcement of the GDPR rules will not only highlight the potential risks to the general public, but also make data protection and privacy a priority for those running the firms.

Executives might want to ignore data protection and privacy, but one way to get their attention is to hit them in their wallets. Both the enforcement of GDPR and the emergence of the ambulance chasers will ensure this is a topic of conversation in the boardroom.

New rules, new considerations

The 2018 Data Protection Act is something which has not really generated many headlines, but there is a monumental opportunity for headaches.

“It’s a bit of a minefield to go through,” said Ian Evans, MD of OneTrust.

The Data Protection Act is the UK’s own version of GDPR, required because the UK is divorcing the European Union, but it actually goes a lot further than the European rules. This is perhaps the worst-case scenario for those wanting to remain compliant, as it creates more work ensuring compliance with two different sets of rules.

New clauses have been introduced, creating new grey areas when it comes to confidentiality agreements, while the approach in the immigration department has drawn criticism. Those seeking official residential status in the UK will not be able to force the government to provide insight into the data which has been collected, analysed and actioned. This is the first time a data moat has been embedded into law, and there are some people who are not happy about it.

One area which is very useful is the standardization of use cases. In four areas, the ICO will effectively produce standards to ensure companies can remain compliant. This is the first time an authority has taken such an approach, and we hope it will be replicated by others. The first example, ‘Age-appropriate design’, will be released in the coming weeks.

The groans of Brexit

Brexit is a tricky topic to bring up. People either disagree with it, hate it or are bored of it, but the fact of the matter is, it is crucially important in numerous areas.

Brexit changes the status quo. The UK will no longer be in the European Union, fundamentally changing the relationships companies have with governments, customers and supply chains.

With the Brexit deadline fast approaching, and little concrete information on offer, the risk is running quite high. This will have to be a major factor in any company’s approach to data protection and privacy moving forward.

The risk of a boring conversation

“Everyone is saying they are trying more for data protection, but does anyone actually believe it?” said Ian West, COO of the GDPR Institut.

GDPR was critically important when it was introduced, and it remains critically important today. However, you have to question whether the organizations involved, or the general public, are actually taking it seriously. The last 12 months have seen GDPR fall down the agenda, though it will rise again.

Enforcement is key, and it is coming. GDPR investigations are painfully slow processes due to the vast amount of information and the complexities of the business models in the data-sharing economy. However, many investigations will be finalised over the next few months. With these final decisions come the fines.

This will propel data protection and privacy back into the public debate, and ensure the general public becomes more aware of the dangers of the digital world.

There is currently a risk of negligence, but soon enough data protection and privacy principles will form part of the buying decision-making process. The companies which take data protection and privacy seriously will become more appealing to customers, both consumer and enterprise.

Another factor to consider is recruitment. More graduates nowadays want to work for ethically sound organizations, and soon enough this definition will be expanded to include data protection and privacy principles.

GDPR is a topic which is not ‘sexy’ at the moment, but the next couple of months could ensure these conversations are firmly set back in the boardroom. The question is whether these will be fleeting, defensive discussions, or whether executives will take the challenge seriously and create a culture which encourages data protection and privacy principles.

Irish data watchdog defends its GDPR actions

The Irish data protection regulator has unveiled a progress report on GDPR on the first anniversary of the rules, perhaps defending itself from a perception of inaction.

As Europe’s lead regulator for GDPR, the Data Protection Commission (DPC) is in an incredibly important position. It is supposed to lead the bloc into an era of increased privacy and data protection, though considering its economy is largely dependent on the very firms GDPR has been designed to punish, it is a tricky position.

Despite some suggesting GDPR is failing to live up to the promise of holding the technology giants accountable, the DPC has defended its positions, actions and ambitions.

“The GDPR is a strong new platform from which we can all demand and drive higher standards of protection of our personal information,” said Commissioner for Data Protection, Helen Dixon.

“As the national supervisory authority, the Data Protection Commission (DPC) is firmly committed to its role in public enforcement of the new law, while also working hard to provide guidance to sectors as they seek to comply with the new requirements.

“The DPC is grateful for the positive and energetic engagement with the GDPR that we have seen from all quarters, particularly from consumers and concerned persons who have raised queries about the processing of their personal data with the office.”

Looking at the numbers, 6,624 complaints have been received since the introduction of GDPR, while 5,818 valid data security breaches were notified. 54 investigations have been opened, 19 of which are cross-border investigations into multinational technology companies and their compliance with the GDPR. Last week, the DPC announced its most recent investigation into Google.

Interestingly enough, more than half of these investigations will see Facebook, WhatsApp or Instagram as the focal point. The question which remains is whether the rules are having a material impact on data protection and privacy across the world.

According to the International Association of Privacy Professionals, more than 500,000 data protection officers have been appointed at firms across the world, while more than 200,000 instances of data breaches have been reported. However, the largest fine which has been levied at one of the internet giants is €50 million.

Back in January, French data watchdog CNIL fined Google €50 million for various violations of GDPR, including a lack of transparency, overly complicated wording and inaccessible information on how a user’s data is collected, stored and processed. This might serve as a wake-up call for the ‘normal’ companies across the world, but it might not be considered a deterrent for the worst offenders, the tech giants who collect billions in profit each year by monetizing data.

As mentioned previously, the DPC is in a slightly precarious position. Ireland will want to protect the interests of the technology giants due to the role the industry plays in the country. The technology sector has largely been credited with saving Ireland from economic recession a decade ago, and now employs a significant number of individuals. The industry has also fuelled a rise in entrepreneurship, creating bright prospects as the world strides towards the digital economy.

Reading between the lines, this is perhaps the rationale behind today’s announcement from the DPC. It is working to uphold the promise of GDPR.

What is worth noting is that one year is not a lot of time. Investigations into complaints take months upon months, due to the number of companies involved, the collection of statements and all the relevant information, and the complex nature of data processing business models. The big data machine is incredibly complicated, and understanding whether there have been any violations of the rules is even more so; some clauses and sections leave grey areas to be exploited.

One year on, GDPR has clearly had an impact on the world, but whether this is enough of an impact to create a privacy-orientated digital society remains to be seen.

Europe’s lead data watchdog opens Google GDPR investigation

Ireland’s data protection watchdog has kicked off a GDPR investigation into Google following a complaint from ad-free web browser Brave.

Although GDPR is approaching its first birthday, there is yet to be an example of the towering fines which were promised for non-compliance. Perhaps everyone is playing merrily by the rules, or it might be that they are very good at covering their tracks. Brave will be hoping to chalk up a victory over Google with this investigation however.

“The Irish Data Protection Commission’s action signals that now – nearly one year after the GDPR was introduced – a change is coming that goes beyond just Google,” said Johnny Ryan, Chief Policy Officer at Brave. “We need to reform online advertising to protect privacy, and to protect advertisers and publishers from legal risk under the GDPR.”

The complaint itself is directed at Google’s DoubleClick/Authorized Buyers advertising system. While giving evidence to the Data Protection Commission, Ryan has suggested the way in which data is processed through the system violates Article 5(1)(a), (b) and (f) of GDPR, as well as Section 110 of the Irish Data Protection Act.

The DoubleClick/Authorized Buyers system is active on 8.4 million websites, allowing the search giant to track users as they scour the web. This information is then broadcast to more than 2,000 companies, who bid on the traffic to deliver more targeted and personalised ads.

This information can potentially be incredibly personal. Google has various different categories which internet users are neatly filed into, including ‘eating disorders’, ‘left-wing politics’, ‘Judaism’ and ‘male impotence’. The companies bidding on this data will also have access to geo-location information and the type of device which the user is on.

Under Article 5 (1)(f) of the GDPR, companies are only permitted to process personal information if it is tightly controlled. Brave suggests Google has no control over the data once it is broadcast and is therefore violating GDPR.

With the Irish watchdog, Europe’s lead authority for GDPR, investigating the system in Ireland, similar complaints have been filed in the UK, Poland, Spain, Belgium, Luxembourg and the Netherlands. Should Google be found non-compliant, it would be forced to ditch the DoubleClick/Authorized Buyers system and could face a fine of as much as 4% of annual turnover. Based on 2018 revenues, that figure would be $5.4 billion.
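The arithmetic behind that maximum-fine figure can be sketched as follows. The revenue input is an assumption based on Alphabet’s reported 2018 results of roughly $136.8 billion; applying the GDPR’s 4% cap yields about $5.47 billion, in line with the $5.4 billion cited above.

```python
# A minimal sketch of the GDPR maximum-fine arithmetic, assuming Alphabet's
# reported 2018 revenue of roughly $136.8 billion (an approximation).
ALPHABET_2018_REVENUE_BN = 136.8   # approximate annual turnover, billions USD
GDPR_MAX_FINE_RATE = 0.04          # GDPR cap: 4% of annual worldwide turnover

max_fine_bn = ALPHABET_2018_REVENUE_BN * GDPR_MAX_FINE_RATE
print(f"Maximum possible fine: ~${max_fine_bn:.2f} billion")
```

The same 4%-of-turnover cap is what produces headline figures such as the €918 million ceiling quoted for the British Airways breach, scaled to the relevant company’s turnover.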

“For too long, the AdTech industry has operated without due regard for the protection of consumer data,” said Ravi Naik of ITN Solicitors, who will be representing Brave for the complaint. “We are pleased that the Data Protection Commissioner has taken action. The industry must change.”

GDPR is supposed to be a suitable deterrent for the internet economy, but without enforcement and demonstrable consequences little will change. If GDPR is to work as designed, a monstrous fine will have to be directed at someone sooner or later. Could this be the first domino to fall?