US tech fraternity pushes its own version of GDPR

The technology industry might enjoy a light-touch regulatory landscape today, but change is on the horizon, and the industry appears to be attempting to become the master of its own fate.

In an open letter to senior members of the US Congress, 51 CEOs from the technology and business community have asked for a federal law governing data protection and privacy. It appears to be a push for consistency across the US, removing the ability of aggressive and politically ambitious Attorneys General and Senators to mount their own local crusades against the technology industry.

Certain aspects of the framework proposed to the politicians are remarkably similar to GDPR, such as the right for consumers to control their own personal data, seek corrections and even demand deletion. Breach notifications could also be introduced, though the coalition of CEOs is calling for the FTC to be the tip of the spear.

Interestingly enough, there are also calls to remove ‘private right of action’, meaning only the US Government could take an offending company to court over violations. In a highly litigious society like the US, this would be a significant win for any US corporation.

And while there are some big names attached to the letter, there are some notable omissions. Few will be surprised Facebook’s CEO Mark Zuckerberg has not signed a letter requesting a more comprehensive approach to data privacy, though Alphabet, Microsoft, Uber, Verizon, T-Mobile US, Intel, Cisco and Oracle are also absent.

“There is now widespread agreement among companies across all sectors of the economy, policymakers and consumer groups about the need for a comprehensive federal consumer data privacy law that provides strong, consistent protections for American consumers,” the letter states.

“A federal consumer privacy law should also ensure that American companies continue to lead a globally competitive market.”

CEOs who have signed the letter include Jeff Bezos of Amazon, Alfred Kelly of Visa, Salesforce’s Keith Block, Steve Mollenkopf of Qualcomm, Randall Stephenson of AT&T and Brian Roberts of Comcast.

Although it might seem unusual for companies to request a more comprehensive approach to regulation, the overarching ambition seems to be one of consistency. Ultimately, these executives want a single, consolidated approach to data protection and privacy, managed at a federal level, as opposed to a potentially fragmented environment in which the states apply their own nuances.

It does appear the technology and business community is attempting to take some control over its own fate. As much as these companies would like a light-touch regulatory environment to continue, that outcome is not on the table. The world is changing, but if this evolution is consolidated into a single agency, lobbying becomes much more effective, and cheaper.

The statement has been made through Business Roundtable, a lobby group for larger US corporations, requesting a national consumer privacy law which would pre-empt any equivalent from the states or local government. Definitions and ownership rules should be modernised, and a risk-orientated approach to data management, storage and analysis is also being requested.

Ultimately, this looks like a case of damage control. There seems to be an acceptance that regulatory overhaul is coming; the CEOs are simply attempting to limit their exposure. By consolidating the regulations through the FTC, punishments and investigations can theoretically only be brought through a limited number of routes, and companies would only have to worry about a single set of rules.

Consistency is a very important word in the business world, especially when it comes to regulation.

What we are currently seeing across the US is aggression towards the technology industry from almost every legal avenue. Investigations have been launched by federal agencies and state-level Attorneys General, while lawsuits have also been filed by non-profits and law firms representing citizens. It’s a mess.

Looking at the Attorneys General, a couple do seem to be attempting to make a name for themselves in the public domain, perhaps as a first step towards higher political office. For example, it would surprise few if New York Attorney General Letitia James harbours larger political ambitions, and striking a blow against Facebook on behalf of consumers would certainly earn positive PR points.

Another interesting element is the fragmentation of the regulations governing data protection and privacy. For example, there are more aggressive rules in place in New York and California than in North Carolina and Alaska. Within California it becomes even more fragmented; just look at the work the City of San Francisco is undertaking to limit the power of facial recognition and data analytics. Those rules would effectively make it impossible to implement the technology, whereas in the State of Illinois, technology companies only have to seek explicit consent from the consumer.

Inconsistency creates confusion and non-compliance. Confusion and non-compliance cost a lot of money through legal fees, restructuring, product customisation and fines.

Finally, from a PR perspective, this is an excellent move. The perception of Big Business at the moment is that it does not care about the privacy rights of citizens. There have been too many scandals and data breaches for anyone to take claims of caring about consumer privacy seriously. By suggesting a more comprehensive and consistent approach to privacy, Big Business can more legitimately claim to be the consumer champion.

A more consistent approach to regulation helps the government, consumers and business; however, this is also a move by the US technology and business community to control its own fate, decreasing the power and influence of disruptive Attorneys General and making the regulatory evolution more manageable.

Momentum is gathering pace towards a more comprehensive and contextually relevant privacy regulatory landscape, and it might not be too long before a US version of Europe’s GDPR is introduced.

Silicon Valley’s ‘ask for forgiveness, not permission’ attitude is wearing thin

Silicon Valley has often pushed the boundaries in pursuit of progress, but it deserves everything it gets if it continues to try the patience of consumers and regulators on privacy.

‘It is easier to ask for forgiveness than permission’ is a common, if largely unattributable, phrase which applies very well to the ongoing conduct of Silicon Valley. It is certainly easier to act and face the consequences later, but that does not make it right, and it should not be allowed. This is the approach the internet giants are taking on a weekly basis, and someone will have to find the stomach and muscle to stop this abuse of power, influence and trust.

The most recent chapter in this on-going tale of deceit and betrayal concerns the voice assistants which are becoming increasingly popular with consumers around the world.

Apple is the latest company to test the will of the general public as it has now officially ended an internal process which is known as ‘grading’. In short, humans listen to Siri interactions with customers, transcribing the interaction in certain cases, to help improve the accuracy of the digital assistant.

“We know that customers have been concerned by recent reports of people listening to audio Siri recordings as part of our Siri quality evaluation process — which we call grading,” Apple said in a blog entry. “We heard their concerns, immediately suspended human grading of Siri requests and began a thorough review of our practices and policies. We’ve decided to make some changes to Siri as a result.”

Of course, it is perfectly reasonable for Apple to want to improve the performance of Siri, though it must ask for permission. This is the vital step in the process which Apple decided to leave out.

The new process will seek consent from users through an ‘opt-in’ system, making it compliant, while the default position for all Siri interactions will be to not store information. For those consumers who do opt-in to aid Apple in training Siri, the audio will only be transcribed and reviewed by permanent Apple employees.
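In policy terms, the new arrangement amounts to a default-deny consent check. A minimal sketch of that logic follows; the names and structure are our own illustration, not Apple's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SiriSettings:
    # The default position: audio is NOT retained unless the user opts in
    audio_retention_opt_in: bool = False

def retain_for_grading(settings: SiriSettings, audio_clip: str,
                       reviewer_is_permanent_employee: bool) -> Optional[str]:
    """Return the clip for review only if both conditions of the new policy hold."""
    if not settings.audio_retention_opt_in:
        return None  # no opt-in: the audio is discarded, not stored
    if not reviewer_is_permanent_employee:
        return None  # external contractors are excluded from review
    return audio_clip
```

The key difference from the original ‘grading’ process is the first check: consent is sought up front rather than forgiveness afterwards.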

This process should have been in place before the ‘grading’ system was implemented. It is inconceivable that Apple did not realise this would break privacy regulations or breach the trust placed in it by the customer. It decided not to tell consumers or authorities the practice was in place. It muddied the waters to hide the practice. It lied to the user when it said it respects privacy principles and rights.

Apple acted irresponsibly, unethically and underhandedly. And there is almost no plausible explanation that it did so without knowledge and understanding of the potential impact of these actions. If it did not understand how or why this practice violated privacy principles or regulations, there must be an epidemic of incompetence spreading through the ranks at Cupertino.

It is worth noting that Apple is not alone; Google and Facebook are just as guilty of misleading or lying to the user, breaking the trust which has been offered to these undeserving companies.

Google is currently under investigation for the same abuse of trust and privacy principles, this time for the Google Assistant.

“We have made it clear to Google’s representatives that essential requirements for the operation of the Google Assistant are currently not fulfilled,” said Johannes Caspar, Hamburg Commissioner for Data Protection and Freedom of Information. “This not only applies to the practice of transcribing, but to the overall processing of audio data generated by the operation of the language assistance system.”

The investigation from the Hamburg data protection authority has pressured Google into changing the way it trains its digital assistant. Earlier this month, Belgian news outlet VRT NWS revealed 0.2% of conversations with Google Assistant were being listened to by external contractors. At least one audio clip leaked to the news outlet included a couple’s address and personal information about their family.

Google has now said it has stopped the practice in the EU, but not necessarily elsewhere, and the Hamburg DPA has said it will have to seek permission from users before beginning anything remotely similar.

At the same regulator, Facebook has been dragged into the drama.

“In a special way, this also applies to Facebook Inc., where, as part of Facebook Messenger, a planned manual evaluation to improve the transcription function covered not only human-to-machine communication but also human-to-human communication,” said Caspar. “This is currently the subject of a separate investigation.”

Two weeks ago, reports emerged that Facebook had hired external contractors to transcribe audio from calls made across the Messenger platform. Once again, users were not informed and consent was not obtained, but what makes this incident even worse is that there does not appear to be any logical reason for Facebook to need this data.

The only reason we can see for Facebook wanting this data is to feed the insight into its big-data, hyper-targeted advertising machine. That, however, would be a massive no-no and a significant (and illegal) breach of trust.

All of these examples focus on the transcription of audio data, though there are many other instances of privacy violations, and all demonstrate the ‘easier to ask for forgiveness than permission’ attitude which has engulfed Silicon Valley.

We cannot believe there is any way these companies did not understand or comprehend these actions and practices were a breach of trust and potentially breaking privacy rules. These companies are run by incredibly smart and competent people. Recruitment drives are intense, offices and benefits are luxurious, and salaries are sky-high for a very good reason; Silicon Valley wants to attract the best and brightest talent around.

And it works. The likes of Google, Facebook and Apple have the most innovative engineers, data scientists who can see the wood for the trees, the savviest businessmen, accountants who are hide-and-seek champions and the slipperiest lawyers. They consider and contemplate all potential gains and consequences of any initiative. We cannot believe there is any conceivable explanation as to why these incredibly intelligent people did not recognise these initiatives were misleading, opaque or non-compliant.

The days of appearing before a committee, cap in hand, begging for forgiveness with a promise it will never happen again cannot be allowed to continue. The judges, politicians and consumers who believe these privacy violations are done by accident are either incredibly naïve, absurdly short-sighted, woefully ill-informed or, quite frankly, moronic.

Silicon Valley must be forced to act responsibly and ethically, because it clearly won’t do so on its own.

Google prefers cookies to fingerprints

Internet giant Google has announced some measures designed to better protect the privacy of users of its Chrome browser.

Under the heading of ‘Privacy Sandbox’, Google wants to develop a set of open privacy standards. At the core of this initiative is the use of cookies: small pieces of data stored by the browser that track people’s online activity and, so the theory goes, allow them to be served more relevant advertising. Google concedes that some use of cookies doesn’t meet acceptable data privacy standards, but argues that blocking them isn’t the answer.

A major reason for this is that it encourages the use of another tracking technique called fingerprinting. This aggregates a bunch of other user preferences and behaviours to generate a unique identifier that performs a similar function to cookies. The problem with fingerprints, however, is that there’s no user control over them and hence they’re bad for data privacy.
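In principle, fingerprinting is simple: hash together enough quasi-stable attributes and the result is effectively unique for most users. A minimal sketch follows; the attribute set here is illustrative, and real fingerprinting uses dozens of signals such as canvas rendering and installed plugins:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine browser/device attributes into a single stable identifier."""
    # Sort the keys so the same attributes always hash to the same value
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Two visits with identical settings yield the same identifier,
# even though nothing was ever stored on the device.
visit = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "1920x1080",
    "timezone": "Europe/London",
    "language": "en-GB",
    "fonts": "Arial,Calibri,Verdana",
}
print(fingerprint(visit))
```

Because none of these attributes live on the device, there is nothing for the user to clear or block, which is precisely why fingerprinting is considered worse for privacy than cookies.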

Since the digital ad market now expects a considerable degree of targeting, but fingerprinting is considered an unacceptable solution to the blocking of cookies, Google wants to come up with a better one that will be implemented across all browsers, hence this initiative. The Privacy Sandbox is a secure environment designed to enable safe experimentation with other personalization technologies.

“We are following the web standards process and seeking industry feedback on our initial ideas for the Privacy Sandbox,” blogged Justin Schuh, Director of Chrome Engineering at Google. “While Chrome can take action quickly in some areas (for instance, restrictions on fingerprinting), developing web standards is a complex process, and we know from experience that ecosystem changes of this scope take time. They require significant thought, debate, and input from many stakeholders, and generally take multiple years.”

While this is all laudable it should be noted that Google has possibly the greatest vested interest in optimising targeted advertising online. While that makes it perfectly understandable that it would want to take the initiative in standardizing the way it’s done, other big advertisers and browser providers may have reservations about surrendering much control of the process to Google.

Europe set to join the facial recognition debate

With more authorities demonstrating they cannot be trusted to act responsibly or transparently, the European Commission is reportedly on the verge of putting the reins on facial recognition.

According to reports in The Financial Times, the European Commission is considering imposing new rules which would extend consumer rights to include facial recognition technologies. The move is part of a greater upheaval to address the ethical and responsible use of artificial intelligence in today’s digital society.

Across the world, police forces and intelligence agencies are deploying technologies which pose a significant risk of abuse, without public consultation or processes to ensure accountability or justification. There are of course certain nations which do not care about the privacy rights of citizens, but when you see the technology being implemented for surveillance purposes in the likes of the US, UK and Sweden, states where such rights are supposedly sacred, the line starts to blur.

The reasoning behind the implementation of facial recognition in surveillance networks is irrelevant; without public consultation and transparency, these police forces, agencies, public sector authorities and private companies are completely disregarding the citizen’s right to privacy.

These citizens might well support such initiatives, electing for greater security or consumer benefits over the right to privacy, but they have the right to be asked.

It is worth noting that this technology can be a driver for positive change in the world when implemented and managed correctly. Facial scanners are speeding up the immigration process in airports, while Telia is trialling a payment system using facial recognition in Finland. When deployed with consideration and the right processes, there are many benefits to be realised.

The European Commission has not confirmed or denied the reports to Telecoms.com, though it did reaffirm its on-going position on artificial intelligence during a press conference yesterday.

“In June, the high-level expert group on artificial intelligence, which was appointed by the Commission, presented the first policy recommendations and ethics guidelines on AI,” spokesperson Natasha Bertaud said during the afternoon briefing. “These are currently being tested and going forward the Commission will decide on any future steps in-light of this process which remains on-going.”

The Commission does not comment on leaked documents and memos, though reading between the lines, it is on the agenda. One of the points the 52-person expert group will address over the coming months is building trust in artificial intelligence, while one of the seven principles presented for consultation concerns privacy.

On the privacy side, parties implementing these technologies must ensure data ‘will not be used to unlawfully or unfairly discriminate’, as well as setting systems in place to dictate who can access the data. We suspect that in the rush to trial and deploy technology such as facial recognition, few systems and processes to drive accountability and justification have been put in place.

Although these points do not necessarily cover the right for the citizen to decide, tracking and profiling are areas where the group has recommended the European Commission consider adding more regulation to protect against abuses and irresponsible deployment or management of the technology.

Once again, the grey areas are being exploited.

As there are only so many bodies in the European Commission or working for national regulators, and technology is advancing so quickly, there is often a void in the rules governing the newly emerging segments. Artificial intelligence, surveillance and facial recognition certainly fall into this chasm, creating a digital wild-west landscape where those who do not understand the ‘law of unintended consequence’ play around with new toys.

In the UK, it was revealed that several private property owners and museums were using the technology for surveillance without telling consumers. Even more worryingly, some of this data has been shared with police forces. Information Commissioner Elizabeth Denham has already stated her office will be looking into the deployments and will attempt to rectify the situation.

Prior to this revelation, a report from the Human Rights, Big Data & Technology Project attacked a trial by the London Metropolitan Police, suggesting it could be found illegal should it be challenged in court. South Wales Police has also found itself in hot water after its own trials were shown to have only an 8% success rate.

Over in Sweden, the data protection regulator used powers granted by GDPR to fine a school which had been using facial recognition to monitor pupil attendance. The school claimed it had received consent from the students, but as students are in a dependent position, this was not deemed satisfactory. The school was also found to have substandard processes for handling the data.

Finally, in the US, Facebook is going to find itself in court once again, this time over the implementation of facial recognition software in 2010. A class-action lawsuit has been brought against the social media giant, suggesting the use of the technology was non-compliant under the Illinois Biometric Information Privacy Act.

This is one example where lawmakers have been very effective in getting ahead of trends. The law in question was enacted in 2008 and demands that companies gain consent before introducing any facial recognition technologies. It is an Act which should be applauded for its foresight.

The speed at which facial recognition is progressing in the surveillance world is incredibly worrying. Private and public parties have an obligation to consider the impact on the human right to privacy, though much distaste has been shown for these principles in recent months. Perhaps it is ignorance, short-sightedness or a lack of competence, but without rules to govern this segment, the unintended consequences could be compounded years down the line.

Another point worth noting is the gathering momentum to stop the wrongful implementation of facial recognition. Aside from Big Brother Watch raising concerns in the UK, the City of San Francisco is attempting to implement an approval function for police forces, while Google is facing an internal rebellion. Last week, it emerged several hundred employees had signed a petition refusing to work on any projects which would aid the government in tracking citizens through facial recognition surveillance.

Although the European Commission has not confirmed or denied the report, we suspect (or at the very least hope) work is underway to address this area. Facial recognition needs rules, or we will find ourselves in a very difficult position, similar to today’s.

A lack of action surrounding fake news, online bullying, cybersecurity, supply chain diversity and resilience, or the consolidation of power in the hands of a few has created some difficult situations around the world. Now the Commission and national governments are finding it difficult to claw back the progress of technology. This is one area where the European Commission desperately needs to get ahead of the technology industry; the risk and consequence of abuse is far too great.

Facebook investors brush off leaked $5 billion fine

It has been widely reported that Facebook will receive a record fine for privacy violations, but investors seem strangely pleased about it.

All the usual-suspect business papers seem to have received the leak late last week that the US Federal Trade Commission voted narrowly to fine Facebook $5 billion for data privacy violations related to the Cambridge Analytica scandal. The FTC, like the FCC, has five commissioners, three of whom are affiliated to the Republican party and two to the Democrats. As ever, they voted on partisan lines, with the Democrats once more opposing the move.

The FTC has yet to make an official announcement, so we don’t know the stated reasons for the Democrat objections. But since that party seems to have decided it would have won the last general election if it wasn’t for those meddling targeted political ads, it’s safe to assume they think the fine is too lenient.

Just because the Democrats have a vested interest, that doesn’t mean they’re wrong, however. Of course Democrat politicians have criticised the decision, but many more independent commentators have noted that the fine amounts to less than a quarter’s profit for the social media giant. Nilay Patel, Editor in Chief of influential tech site The Verge, seems to speak for many in this tweet.

That Facebook’s share price actually went up after such a big fine initially seems remarkable, but all it really indicates is that Facebook had done a good job of communicating the risk to its investors, so a five bil hit was already priced in. The perfectly legitimate point, however, is that as a punishment one month’s revenue is unlikely to serve as much of a deterrent from future transgressions.

Patel seems very hostile to Facebook, stating in his opinion piece on the matter that “Facebook has done nothing but behave badly from inception.” A lot of this bad behaviour consists of exploiting user data, but what is really under attack seems to be Facebook’s core business model and, to some extent, the whole ad-funded model on which sites like The Verge rely.

Debates need to be had about the way the Internet operates and monetizes itself, but identifying Facebook as a uniquely bad actor when it comes to exploiting user data seems disingenuous. Laws and regulations are struggling to catch up with the business models of internet giants and there are many other questions to be asked about how they operate.

The fact that Facebook’s share price has now largely recovered from the Cambridge Analytica scandal of a year or so ago, as illustrated by the Google Finance screenshot below, indicates that investors consider these issues to be just another business risk, to be weighed up against obscene profits. While we have always considered the scandal to be overblown, it also seems clear that, as a meaningful punishment, even a $5 billion fine is totally inadequate in this case.

Facebook share price July 19

ICO gets serious on British Airways over GDPR

The UK’s Information Commissioner’s Office has swung the sharp stick of GDPR at British Airways, and it looks like the damage might be a £183.39 million fine.

With GDPR inked into the rule book in May last year, the first investigations under the new guidelines will be coming to a conclusion in the near future. There have been several judgments passed in the last couple of months, but this is one of the most significant in the UK to date.

It is worth noting this is not the final decision; it is an intention to fine £183.39 million. We do not imagine the final figure will differ much, as the ICO will want to show it is serious, but BA will be given the opportunity to have its voice heard with regard to the amount.

“People’s personal data is just that – personal,” said Information Commissioner Elizabeth Denham.

“When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”

The EU’s General Data Protection Regulation (GDPR) allows regulators to fine guilty parties €20 million or as much as 4% of total worldwide annual turnover, whichever is greater. In this case, BA is to be fined 1.5% of its total revenues for 2017, the fine having been reduced for several reasons.
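The arithmetic is easy to verify; the turnover figure below is implied by the fine and the 1.5% rate, rather than taken directly from BA's published accounts:

```python
fine = 183.39e6   # the ICO's intended fine, in pounds
rate = 0.015      # stated as 1.5% of annual turnover

# Working backwards gives the turnover figure the ICO must have used
implied_turnover = fine / rate
print(f"implied annual turnover: £{implied_turnover / 1e9:.2f}bn")
# → implied annual turnover: £12.23bn
```

That figure is consistent with a major flag carrier's annual revenues, which is a useful sanity check on the reported percentage.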

In September 2018, user traffic was diverted to a fake British Airways site, with the nefarious actors harvesting the data of more than 500,000 customers. In this instance, BA informed the authorities of the breach within the defined window, co-operated during the investigation and made improvements to its security systems.

While many might suggest the UK watchdog, or indeed many regulators around the world, lacks teeth when it comes to dealing with privacy violations, this ruling should put that preconception to rest. This is a weighty fine, which should force the BA management team to take security and privacy seriously; if there is one way to make executives listen, it’s to hit them in the pocket.

This should also be seen as a lesson for other businesses in the UK. Not only is the ICO brave enough to hand out fines for non-compliance, it is mature enough to reduce the fine should the affected organisation play nice. £183.39 million is well short of what was theoretically possible and should be seen as a win for BA.

Although this is a good start, we would like to see the ICO, and other regulatory bodies, set their sights on the worst offenders when it comes to data privacy. Companies like BA should be punished when they end up on the wrong side of right, but the likes of Facebook, Google and Amazon have had an easy ride so far. These are the companies with the greatest influence over personal information, and the ones which need to be shown the rod.

This is one of the first heavy fines implemented in the era of GDPR and the difference is clear. Last November, Uber was fined £385,000 for a data breach which impacted 2.7 million customers and drivers in the UK. The incident occurred prior to the introduction of GDPR, the reason the punishment looks so measly compared to the BA fine here.

The next couple of months might be a busy time in the office of the ICO as more investigations conclude. We expect some heavy fines as the watchdog bares its teeth and forces companies back onto the straight and narrow when it comes to privacy and data protection.

UK launches competition probe into digital advertising market

The UK Competition and Markets Authority wants to know if the digital advertising market is being corrupted by internet giants like Google and Facebook.

The investigation is being called the ‘Online platforms and digital advertising market study’ and it will look into the following:

  • To what extent online platforms have market power in user-facing markets, and what impact this has on consumers
  • Whether consumers are able and willing to control how data about them is used and collected by online platforms
  • Whether competition in the digital advertising market may be distorted by any market power held by platforms

So this seems to be a combination of a monopoly investigation and an audit of how digital platforms are handling personal data. The dominance of the Silicon Valley platforms over the digital advertising market seems clear, so the question is whether they abuse that dominance to unfairly crush competition. The matter of data privacy seems secondary, especially since there are already loads of similar investigations happening around the world.

“It is our job to ensure that companies innovate and compete,” explained CMA Chairman Andrew Tyrie. “And every bit as much, it’s our job to ensure that consumers are protected from detriment. Implementation of the Furman Report should help a lot. As part of the work announced today, we will be advising Government on how aspects of Furman can most effectively be implemented.

“Much about these fast-changing markets is a closed book to most people. The work we do will open them up to greater scrutiny, and should give Parliament and the public a better grip on what global online platforms are doing. These are global markets, so we should and will work more closely than before with authorities around the world, as we all consider new approaches to the challenges posed by them.

“The market study will examine concerns about how online platforms are using people’s personal data, including whether making this data available to advertisers in return for payment is producing good outcomes for consumers,” said CMA Chief Executive Andrea Coscelli. “The CMA will examine whether people have the skills, knowledge and control over how information about them is collected and used, so they can decide whether or not to share it in the first place.”

While they’re at it, why don’t they investigate how many people actually read the terms and conditions of a service, let alone understand them? While there can be little doubt that online platforms have been very effective at monetising third-party data, anyone who uses them for free and then claims to feel exploited is being disingenuous. Much more interesting will be the measures taken if the platforms are judged to be a harmful monopoly.

Google’s Sidewalk bet is a nightmare for the privacy-conscious

If you’re concerned about whether Google is listening to you through your phone or smart speaker, soon enough you’ll have to worry about lampposts having ears too, at least if you live in Toronto.

For those who have not been keeping up-to-date with the Canadian tech scene, Google’s Sidewalk Labs is currently working in partnership with Toronto to demonstrate the vision of tomorrow: the smart city. Plans are still being drawn up, though it looks like two neighbourhoods will be created with a new Google campus bang in the middle.

The Master Innovation and Development Plan (MIDP) hopes to create the city of tomorrow and will be governed by Waterfront Toronto, a publicly funded organization. In a move seemingly intended to appease the data concerns of Waterfront Toronto, Google has now stated that while all the systems would be run by analysing data, Sidewalk Labs will not disclose personal information to third parties without explicit consent and will not sell personal information.

This is the first bit of insight we’ve had on this initiative for a while. Having secured the project in 2017, Sidewalk Labs has been in R&D mode. The team is attempting to prove the business case and the products, though it won’t be long before work is underway. Assuming, of course, Google is able to duck and weave through the red tape which will be presented over the next 12-18 months.

The most recent development is a series of white papers addressing numerous topics, from sustainable production plans and mobility to data protection, privacy and the envisioned use cases. If you have a spare few hours, you can find all the documentation here.

Of course, there are plenty of smart city initiatives around the world, but what makes this one interesting is that the concept of ‘smart’ is being built from the foundations. This is a greenfield project, not brownfield, which makes things substantially easier. Buildings, street furniture and infrastructure can all be built with connectivity in mind.

This is the challenge which other cities are facing; let’s take London as an example. Construction on the London Underground system started in 1863, while the London sewage system was plumbed in between 1859 and 1865. The city itself, and its basic layout, was established in 50 AD. Although there are creative solutions to enhance connectivity, most cities were built in the days before most people could even conceive of the internet.

The Quayside and Villiers West neighbourhoods will be home to almost 7,000 residents and offer jobs to even more, anchored by the new Google campus. The buildings will offer ‘adaptable’ spaces, including movable floor plates and sliding wall panels to accelerate renovations and reduce vacancies. It will also be incredibly energy-efficient, featuring a thermal energy grid which could heat and cool homes using the natural temperature of the earth.

But onto the areas which most people in the industry will be interested in; the introduction of new technologies and access to data.

High-speed internet connections are promised to all residents and businesses, intelligent traffic lights and curbs will be deployed to better regulate traffic, smart awnings will be introduced for those into gimmicky technology, and the neighbourhoods will be designed to allow an army of underground delivery robots to function.

Autonomous driving is one technology area which fits perfectly into the greenfield advantage. The complications of creating a landscape for autonomous vehicles in older cities are great, but by building up the regions with connectivity in mind many of these challenges can be averted. Not only can the introduction of self-driving vehicles be accelerated, but ride-sharing (Zipcar) or hailing (Uber) alternatives can be assisted while other options such as e-scooters are more realistic.

Such is the ambition nurtured in the Google business that if there is a crazy idea which can be applied to the smart city concept, Sidewalk Labs has probably factored it into the design and build process.

And now onto the data. This is where the project has drawn criticism, as Google does not necessarily have the most glistening record when it comes to data privacy and protection. Small print littered throughout various applications has ensured Google is never too far away from criticism. In fairness, this is an industry-wide problem, but a cloud of scepticism hangs over any initiative which has data as its fuel.

The latest announcement from Google/Sidewalk Labs focuses on this very issue. Sidewalk Labs will not sell any personal information, this data will not be used to fuel the advertising mechanisms, and it will not disclose this insight to third parties. Explicit consent would have to be provided in any of these circumstances.

Whether these conditions will be up to the standards defined by Waterfront Toronto remains to be seen. This body has the final say and may choose to set its own standards higher or lower. Mandatory anonymisation, which many activists have been pushing for, might also come into play. This is not a scenario Google would want to see.

While expanding into new services might seem like an attractive idea on its own, if this expansion can be coupled with additional access to data to fuel the Google data machine, it is a massive win for the internet giant. Let’s not forget, everything Google has done to date (perhaps excluding Loon and the failed Fiber business) has served the advertising machine.

Fi gives Google interesting data on customer locations, the smart speakers are simply an extension of the core advertising business through a new user interface, and Android allowed Google to place incredibly profitable products as defaults on billions of phones and devices. If Google can access new data sets, it can offer new services, engage new customers and create new revenues for investors.

Let’s say it starts collecting data on traffic flow; this could become important insight for traffic management and for city planners when it comes to adding or altering bus routes. The data could also be used to reduce energy consumption from street lights or traffic lights: if there is no-one there, do they actually need to be on? It could also help retailers forecast demand for new stores and aid the police in their work.

These ideas might not sound revolutionary, or like they would bring in billions, but always remember: Google never does anything for free. This is a company which seems to see ideas before anyone else and can monetize them like few others. If Google is paying this much attention to an idea or project, there must be money to be made, and we bet there is quite a bit.

But this is where Google is facing the greatest opposition. Because it is so good at extracting insight and value from data, it is one of the companies facing the fiercest criticism. This will only become more notable the further afield Google spreads its wings. It seems the world is content with Google sucking value out of personal data when it comes to search engines or mobile apps, but pavements, lampposts and bus stops might be a step too far for some.

Of course, criticism might disappear when jealousy emerges. The hardcore privacy advocates will never rest, but most simply don’t care that much. Privacy violations will of course cause uproar, but if there is a fair trade-off, most will accept Google’s role. If Google can prove these neighbourhoods not only improve the quality of life, but also offer advantages to entertainment and business (for example), this initiative could prove to be very popular with the general public, governments and businesses.

Maine gets tough on telcos over data economy

Maine Governor Janet Mills has signed new privacy rules into law, demanding more proactive engagement from broadband providers in the data-sharing economy.

While the rules tighten up an under-appreciated area of the digital world, they will have their critics. The law itself targets the companies delivering connectivity solutions to customers, the telcos, not the biggest culprits on data protection and privacy rights, the OTTs and app developers.

The rules are applicable to broadband providers in the state, both mobile and fixed, and force a more proactive approach in seeking consent. Telcos will now be compelled to seek affirmative consent from customers before being allowed to use, disclose, sell or permit access to customer personal information, except in a few circumstances.

As is on-trend with privacy rules, the ‘opt-out’ route, used by many to ensure the lazy and negligent are caught in the data net, has been ruled out.

There are also two clauses included in the legislation which block off any potentially coercive behaviour from the telcos:

  • Providers will not be allowed to refuse service to a customer who does not provide consent
  • Customers cannot be penalised, or offered a discount, based on their decision to provide or not provide consent

This is quite an interesting inclusion in the legislation. Other states, California for example, are building rules which will offer freedoms to those participating in the data-sharing economy if the spoils are shared with those providing the data (i.e. the customer), though the second clause removes the opportunity to offer financial incentives or penalties based on consent.

This is not to say rewards will not be offered, however. There is wiggle room here, zero-rating offers on in-house services or third-party products for example, which does undermine the rules somewhat.

It is also worth noting that these rules only pertain to what the State deems personal data. Telcos can continue to monetize data which is not considered personal without seeking affirmative consent, unless the customer has written to the telco to deny it this luxury. Personal data covers the following categories:

  • Web browsing history
  • Application usage history
  • Geolocation
  • Financial
  • Health
  • Device identifiers
  • IP Address
  • Origin and destination of internet access service
  • Content of customer’s communications
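The opt-in/opt-out split described above amounts to a simple decision rule: personal data needs affirmative consent, while everything else can be monetized unless the customer actively objects. A minimal, hypothetical Python sketch of that logic (the category labels and the `may_monetize` helper are illustrative, not drawn from the statute text):

```python
# Hypothetical sketch of the Maine opt-in/opt-out logic described above.
# Category names and the helper function are illustrative only.

PERSONAL_CATEGORIES = {
    "web_browsing_history", "app_usage_history", "geolocation",
    "financial", "health", "device_identifiers", "ip_address",
    "service_origin_destination", "communications_content",
}

def may_monetize(category: str, opted_in: bool, opted_out: bool) -> bool:
    """Personal data requires affirmative (opt-in) consent; non-personal
    data may be used unless the customer has written in to opt out."""
    if category in PERSONAL_CATEGORIES:
        return opted_in
    return not opted_out

# Geolocation is personal, so silence from the customer means no.
assert may_monetize("geolocation", opted_in=False, opted_out=False) is False
# Non-personal data is fair game by default.
assert may_monetize("aggregate_network_load", opted_in=False, opted_out=False) is True
```

Note the asymmetry: for personal data the default answer is no, while for everything else the burden sits with the customer, which is exactly the distinction the legislation draws.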

What is worth noting is this is a solution to a problem, but perhaps not the problem which many were hoping would be addressed.

Firstly, the telcos are already heavily regulated, with some suggesting too heavily. There are areas which need to be tightened up, but the telcos are not necessarily the problem children of the digital era. The second point is the issue which we are finding hard to look past: what about the OTTs, social media giants and the app community?

The communications providers do need to be addressed, though the biggest gulf in regulation concerns the OTTs and app developers. These are companies operating in a relatively light-touch regulatory environment and benefiting considerably from it. There are also numerous incidents which suggest they should not be left to operate in such a landscape.

Although it is certainly a lot more challenging to put more constraints on these slippery digital gurus, these companies are perhaps the biggest problem with the data-sharing economy. Maine might grab the headlines here with new privacy rules, which are suitably strict in fairness, but the rule-makers seem to have completely overlooked the biggest problem.

These rules do not add any legislative or regulatory restraints on the OTTs or app developers; anyone who believes Maine is taking a notable step in addressing the challenges of the data-sharing economy is therefore fooling themselves. This is a solution, but not to the question many are asking.

Ambulance chasers are readying themselves for GDPR assault

While getting a firm ready for the introduction of GDPR was a frantic period, the last 12 months have been relatively quiet for the rules. However, that might all be about to change.

At the European Data Protection Summit in London, a few points were raised which should put the fear back into executives. It does appear the ‘sex appeal’ of data protection and privacy has been eroded, but just wait until the summer is over. It might well be dominating the headlines again.

There seem to be four developments bubbling away at the moment, each of which could have a significant impact on the data protection and privacy landscape: Brexit, GDPR enforcement, the UK’s 2018 Data Protection Act and the ambulance chasers.

Ditching PPI for GDPR

Although it is not necessarily the most flattering of terms, the ambulance chasers are readying themselves for an assault on the GDPR negligent.

The Financial Conduct Authority (FCA) has set a deadline of August 29 for consumers to complain about the mis-selling of PPI products in the UK. This effectively means all the firms set up to manage complaints on behalf of consumers will become redundant. Most will evolve, however; the legal world is simply too profitable, and GDPR seems a prime opportunity.

While it might not be common practice for the moment, there are certainly examples. Numerous law firms, Hayes Connor Solicitors for example, are already advertising their services for the British Airways data breach, which impacted roughly 400,000 people. This is an ongoing investigation, though the financial penalty for this breach could be as much as €918 million, roughly the GDPR ceiling of 4% of annual global turnover.

As more PPI lawyers find themselves with free time, more will turn their attention to new fields of expertise. Given the headline-worthy nature of data breaches and privacy violations, as well as the potential consequences for individuals, this is an area primed for legal attention.

Big fines have been promised

So far, there is only one example of a Data Protection Authority (DPA) swinging the heavy stick of GDPR at a major firm: France’s watchdog fined Google €50 million for numerous offences. While there have been other significant breaches over the last few years, most occurred before the heavy fines of GDPR took effect.

“Serious fines are coming in the summer, including to some of the big companies,” said Paul Breitbarth, Director of Strategic Research and Regulator Outreach at Nymity. “The DPAs [Data Protection Authorities] are taking this very seriously and so should we.”

The Irish DPA is an example of one regulator taking control of the situation, and quite rightly so. Despite the fact its economy is heavily reliant on the internet giants, the Irish watchdog is Europe’s lead GDPR authority; it should be leading the charge.

In a recent PR defence, Commissioner for Data Protection Helen Dixon pointed out the authority has already opened 54 investigations, 19 of which are cross-border. According to Breitbarth, we should expect some pretty heavy fines, which will also bring data protection and privacy back into public debate.

One of the big challenges facing the industry is apathy from the general public and a lack of considered concern from executives. Enforcement of the GDPR rules will not only highlight the potential risks to the general public, but also make data protection and privacy a priority for those running the firms.

Executives might want to ignore data protection and privacy, but one way to get their attention is to hit them in their wallets. Both the enforcement of GDPR and the emergence of ambulance chasers will ensure this is a topic of conversation in the boardroom.

New rules, new considerations

The 2018 Data Protection Act has not generated many headlines, but it holds monumental potential for headaches.

“It’s a bit of a minefield to go through,” said Ian Evans, MD of OneTrust.

The Data Protection Act is the UK’s own version of GDPR, required because the UK is divorcing the European Union, but it actually goes a lot further than the European rules. This is perhaps the worst-case scenario for those wanting to remain compliant, as it creates more work ensuring compliance with two different sets of rules.

New clauses have introduced fresh grey areas around confidentiality agreements, while the approach taken in the immigration department has received criticism. Those seeking official residential status in the UK will not be able to force the government to provide insight into the data which has been collected, analysed and actioned on them. This is the first time such a data moat has been embedded into law, and there are some people who are not happy about it.

One very useful area is the standardization of use cases. In four areas, the ICO will effectively produce standards to ensure companies can remain compliant. This is the first time an authority has taken such an approach, and we hope it will be replicated by others. The first example, ‘Age-appropriate design’, will be released in the coming weeks.

The groans of Brexit

Brexit is a tricky topic to bring up. People either disagree with it, hate it or are bored of it, but the fact of the matter is, it is crucially important in numerous areas.

Brexit changes the status quo. The UK will no longer be in the European Union, fundamentally changing the relationships companies have with governments, customers and supply chains.

With the Brexit deadline fast approaching, and little concrete information on offer, the risk is running quite high. This will have to be a major factor in any company’s approach to data protection and privacy moving forward.

The risk of a boring conversation

“Everyone is saying they are trying more for data protection, but does anyone actually believe it,” said Ian West, COO of the GDPR Institut.

GDPR was critically important when it was introduced, and it remains critically important today. However, you have to question whether the organizations involved, or the general public, are actually taking it seriously. The last 12 months have seen GDPR fall down the agenda, though it will rise again.

Enforcement is key, and it is coming. GDPR investigations are painfully slow processes due to the vast amount of information and the complexities of the business models in the data-sharing economy. However, many investigations will be finalised over the next few months. With these final decisions come the fines.

This will propel data protection and privacy back into the public debate, and ensure the general public becomes more aware of the dangers of the digital world.

There is currently a risk of negligence, but soon enough data protection and privacy principles will form part of the buying decision-making process. Companies which take data protection and privacy seriously will become more appealing to customers, both consumer and enterprise.

Another factor to consider is recruitment. More graduates nowadays want to work for ethically sound organizations, and soon enough this definition will be expanded to include data protection and privacy principles.

GDPR is not a ‘sexy’ topic at the moment, but the next couple of months could ensure these conversations are firmly set back in the boardroom. The question is whether these will be fleeting, defensive discussions, or whether executives will take the challenge seriously and create a culture which encourages data protection and privacy principles.