Researchers point to 1,300 apps which circumvent Android’s opt-in permissions

Research from a coalition of academics has suggested Android location permissions mean little, as more than 1,300 apps have developed ways and means around Google’s protections.

A team of researchers from the International Computer Science Institute (ICSI) has been working to identify shortcomings in the data privacy protections offered to users through Android permissions, and the outcome might worry a few. Through the use of side and covert channels, 1,300 popular applications around the world were found to extract sensitive information about the user, including location, irrespective of the permissions sought by or granted to the app.

The team has informed Google of the oversight, which will be addressed in the upcoming Android Q release, and received a ‘bug bounty’ for its efforts.

“In the US, privacy practices are governed by the ‘notice and consent’ framework: companies can give notice to consumers about their privacy practices (often in the form of a privacy policy), and consumers can consent to those practices by using the company’s services,” the research paper states.

This framework is a relatively simple one to understand: the app provider gives ‘notice’ to inform the user and provide transparency, while the user grants ‘consent’, ensuring both parties have entered into the digital contract with open eyes.

“That apps can and do circumvent the notice and consent framework is further evidence of the framework’s failure. In practical terms, though, these app behaviours may directly lead to privacy violations because they are likely to defy consumers’ expectations.”

What is worth noting is that while this sounds incredibly nefarious, it is nowhere near the majority. Most applications and app providers act in accordance with the rules and consumer expectations, assuming users have read the detailed terms and conditions. This is a small percentage of the apps installed en masse, but it is certainly an oversight worth drawing attention to.

Looking at the depth and breadth of the study, it is pretty comprehensive. Using a Google Play Store scraper, the team downloaded the most popular apps for each category; in total, more than 88,000 apps were downloaded due to the long tail of popularity. To cover all bases, however, the scraper also kept an eye on app updates, meaning 252,864 different versions of 88,113 Android apps were analysed during the study.

The behaviour of each of these apps was measured at the kernel, Android-framework and network-traffic levels, reaching scale using a tool called the Android Automator Monkey. All of the OS execution logs and network traffic were stored in a database for offline analysis.
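The paper’s instrumentation runs far deeper than this, but as a rough illustration of the automation layer, driving an app with the stock Android Exerciser Monkey over adb while recording its network traffic might look something like the sketch below (the package name and file path are invented):

```python
# Rough sketch of UI fuzzing at scale: the Android "monkey" tool fires
# pseudo-random events at an app over adb while traffic is captured for
# offline analysis. Not the paper's actual harness.
import subprocess

def exercise_app(package: str, events: int = 5000) -> None:
    """Fire pseudo-random taps, swipes and system events at one app."""
    subprocess.run(
        ["adb", "shell", "monkey", "-p", package,
         "--throttle", "100",   # milliseconds between injected events
         "-v", str(events)],
        check=True,
    )

def capture_traffic(pcap_path: str) -> subprocess.Popen:
    """Record network traffic on the host running the emulator."""
    return subprocess.Popen(["tcpdump", "-i", "any", "-w", pcap_path])

capture = capture_traffic("com.example.app.pcap")
try:
    exercise_app("com.example.app")
finally:
    capture.terminate()
```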

Now onto how app developers can circumvent the protections put in place by Google. With ‘side channels’, the developer has discovered a path to a resource which lies outside the security perimeter, perhaps due to a mistake during the design stages or a flaw in implementing the design.
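One side channel of this kind documented by the researchers involved apps with storage access recovering location from the EXIF metadata of photos stored on the device, no location permission required. Below is a minimal sketch of the idea, written in Python with the Pillow library rather than Android Java for brevity:

```python
# Illustrative side channel: GPS coordinates recovered from a photo's EXIF
# metadata, bypassing the location permission entirely. A sketch of the
# technique described in the paper, not code taken from it.
# Requires: pip install pillow
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPSINFO_IFD = 0x8825  # EXIF tag pointing at the GPS sub-directory

def gps_from_photo(path: str) -> dict:
    """Return the GPS fields (latitude, longitude, etc.) embedded in a photo."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(GPSINFO_IFD)
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

# Example output: {'GPSLatitudeRef': 'N', 'GPSLatitude': (51.0, 30.0, 26.0), ...}
```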

‘Covert channels’ are more nefarious. “A covert channel is a more deliberate and intentional effort between two cooperating entities so that one with access to some data provides it to the other entity without access to the data in violation of the security mechanism,” the paper states. “As an example, someone could execute an algorithm that alternates between high and low CPU load to pass a binary message to another party observing the CPU load.”
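To make that example concrete, here is a minimal, illustrative sketch of the idea (ours, not the paper’s): one process modulates CPU load to encode bits while a cooperating process infers them by sampling system-wide CPU usage. The psutil dependency, threshold and timing are arbitrary choices for illustration.

```python
# Illustrative covert channel: the sender encodes bits as bursts of high or
# low CPU load; the receiver recovers them by sampling overall CPU usage.
# Requires: pip install psutil
import time
import psutil

BIT_PERIOD = 0.5  # seconds spent transmitting each bit

def send(bits: str) -> None:
    for bit in bits:
        deadline = time.time() + BIT_PERIOD
        if bit == "1":
            while time.time() < deadline:
                pass                # busy-loop: high CPU load encodes 1
        else:
            time.sleep(BIT_PERIOD)  # idle: low CPU load encodes 0

def receive(n_bits: int) -> str:
    bits = ""
    for _ in range(n_bits):
        load = psutil.cpu_percent(interval=BIT_PERIOD)  # average over the bit
        bits += "1" if load > 50 else "0"
    return bits
```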

Ultimately this is further evidence that the light-touch regulatory environment which has governed the technology industry over the last few years can no longer be allowed to persist. The technology industry has protested and quietly lobbied against any material regulatory or legislative changes, though the bad apples are spoiling the harvest for everyone else.

As it stands, under Section 5 of the Federal Trade Commission (FTC) Act, such activities would be deemed non-compliant, and we suspect the European Commission would have something to say with its GDPR stick as well. There are protections in place, though it seems there are elements of the technology industry who consider these more guidelines than rules.

Wholesale changes should be expected in the regulatory environment, and it seems there is little which can be done to prevent them. Politicians might be chasing PR points as various elections loom on the horizon, but the evolution of rules in this segment should be considered a necessity nowadays.

There have simply been too many scandals, too much abuse of grey areas and too many examples of oversight (or negligence, whichever you choose) to continue on this path. Of course, there are negative consequences to increased regulation, but the right to privacy is too important a principle for rule-makers to ignore; the technology industry has consistently shown it does not respect these values and will therefore have to be forced to do so.

This will be an incredibly difficult equation to balance, however. The technology industry is leading the growth statistics for many economies around the world, but changes are needed to protect consumer rights.

ICO gets serious on British Airways over GDPR

The UK’s Information Commissioner’s Office has swung the sharp stick of GDPR at British Airways, and it looks like the damage might be a £183.39 million fine.

With GDPR inked into the rule book in May last year, the first investigations under the new guidelines will be coming to a conclusion in the near future. There have been several judgments passed in the last couple of months, but this is one of the most significant in the UK to date.

What is worth noting is that this is not the final decision; this is an intention to fine £183.39 million. We do not imagine the final figure will differ too much, as the ICO will want to show it is serious, but BA will be given the opportunity to have its voice heard with regard to the amount.

“People’s personal data is just that – personal,” said Information Commissioner Elizabeth Denham.

“When an organisation fails to protect it from loss, damage or theft it is more than an inconvenience. That’s why the law is clear – when you are entrusted with personal data you must look after it. Those that don’t will face scrutiny from my office to check they have taken appropriate steps to protect fundamental privacy rights.”

The EU’s General Data Protection Regulation (GDPR) allows regulators to fine guilty parties up to €20 million or 4% of annual worldwide turnover, whichever is higher. In this case, BA is to be fined 1.5% of its worldwide turnover for 2017, with the fine having been reduced for several reasons.
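As a rough sanity check on those numbers, the sketch below infers BA’s turnover from the fine itself rather than taking it from the company’s audited accounts:

```python
# Back-of-the-envelope check on the ICO's sums. The turnover figure is
# implied by the announced numbers, not taken from BA's accounts.
fine = 183.39e6                    # proposed fine, GBP
turnover = fine / 0.015            # implied worldwide turnover: ~£12.2bn
gdpr_maximum = 0.04 * turnover     # the 4% ceiling: ~£489m

print(f"Implied worldwide turnover: £{turnover / 1e9:.1f}bn")
print(f"Theoretical GDPR maximum: £{gdpr_maximum / 1e6:.0f}m")
```

On those figures, the proposed £183.39 million lands a little under 40% of the theoretical ceiling.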

In September 2018, user traffic was directed towards a fake British Airways site, with the nefarious actors harvesting the data of more than 500,000 customers. In this instance, BA informed the authorities of the breach within the defined window, co-operated during the investigation and made improvements to its security systems.

While many might have suggested the UK watchdog, or many regulators around the world for that matter, lack teeth when it comes to dealing with privacy violations, this ruling should put that preconception to rest. This is a weighty fine, which should force the BA management team to take security and privacy seriously; if there is one way to make executives listen, it’s to hit them in the pocket.

This should also be seen as a lesson for other businesses in the UK. Not only is the ICO brave enough to hand out fines for non-compliance, it is mature enough to reduce the fine should the affected organisation play nice. £183.39 million is well below what was theoretically possible and should be seen as a relative win for BA.

Although this is a good start, we would like to see the ICO, and other regulatory bodies, set their sights on the worst offenders when it comes to data privacy. Companies like BA should be punished when they end up on the wrong side of right, but the likes of Facebook, Google and Amazon have gotten an easy ride so far. These are the companies which have the greatest influence when it comes to personal information, and the ones which need to be shown the rod.

This is one of the first heavy fines implemented in the era of GDPR and the difference is clear. Last November, Uber was fined £385,000 for a data breach which impacted 2.7 million customers and drivers in the UK. The incident occurred prior to the introduction of GDPR, which is why the punishment looks so measly compared to the BA fine here.

The next couple of months might be a busy time in the office of the ICO as more investigations conclude. We expect some heavy fines as the watchdog bares its teeth and forces companies back onto the straight and narrow when it comes to privacy and data protection.

FBI and London Met land in hot water over facial recognition tech

The FBI and London Metropolitan Police force will be facing some awkward conversations this week over unauthorised and potentially illegal use of facial recognition technologies.

Starting in the US, the Washington Post has been handed records dating back almost five years which suggest the FBI and ICE (Immigration and Customs Enforcement) have been using DMV databases to build a surveillance network without the consent of citizens. The emails were obtained by Georgetown Law researchers through public records requests.

Although law enforcement agencies have normalised biometrics as part of investigations nowadays (think fingerprint or DNA evidence left at crime scenes), such traces are only useful for catching repeat offenders. Biometric databases are built from data on those who have been previously charged, but in this case the FBI and ICE have been accessing data on 641 million individuals, the vast majority of whom are innocent and were never consulted about the initiative.

In the Land of the Free, such hypocrisy is becoming almost second nature to national security and intelligence forces, who may well find themselves in some bother from a privacy perspective.

As it stands, there are no legislative or regulatory guidelines which authorise the development of such a complex surveillance system, nor has there been any public consultation with the citizens of the US. This act-first, tell-later mentality is something which is becoming increasingly common in countries the US has designated as national enemies, though there is little evidence authorities in the US have any more respect for the rights of their own citizens.

Heading across the pond to the UK, a report from the Human Rights, Big Data & Technology Project has identified ‘significant flaws’ with the way live facial recognition has been trialled in London by the Metropolitan Police force. The group, based out of the University of Essex Human Rights Centre, suggests it could be found to be illegal should it be challenged in court.

“The legal basis for the trials was unclear and is unlikely to satisfy the ‘in accordance with the law’ test established by human rights law,” said Dr Daragh Murray, who authored the report alongside Professor Peter Fussey.

“It does not appear that an effective effort was made to identify human rights harms or to establish the necessity of LFR [live facial recognition]. Ultimately, the impression is that human rights compliance was not built into the Metropolitan Police’s systems from the outset and was not an integral part of the process.”

The main gripe from the duo here seems to be how the Met approached the trials. LFR was treated in a manner similar to traditional CCTV, failing to take into account the intrusive nature of facial recognition and its use of biometric processing. The Met did not consider the ‘necessary in a democratic society’ test established by human rights law, and therefore effectively ignored the impact on privacy rights.

There were also numerous other issues, including a lack of public consultation, the poor accuracy of the technology (only eight of 42 matches were verifiably correct, roughly a 19% hit rate), unclear criteria for using the technology, and questions over the accuracy and relevance of the ‘watchlist’ of suspects. However, the main concern from the University’s research team was that only the technical aspects of the trial were considered, not the impact on privacy.

There is a common theme in both of these instances: the authorities supposedly in place to protect our freedoms pay little attention to the privacy rights which are granted to us. There seems to be an ‘ends justify the means’ attitude with little consideration of the human right to privacy. Such attitudes are exactly what the US and UK aim to eradicate when ‘freeing’ citizens of oppressive regimes abroad.

What is perhaps most concerning about these stories is the speed at which the technologies are being implemented. There has been little public consultation on the appropriateness of these technologies, or on whether the general public is prepared to sacrifice privacy rights in the pursuit of national security. Given the intrusive nature of facial recognition, authorities should not be allowed to make this decision on behalf of the general public, especially when there is so much precedent for abuse and privacy is a hot topic following scandals in private industry.

Of course, there are examples of the establishment slowing down progress to give time for these considerations. In San Francisco, the city’s Board of Supervisors has made it illegal for forces to implement facial recognition technologies unless approval has been granted. The police force would have to demonstrate stringent justification, accountability systems and safeguards to privacy rights.

In the UK, Dr Murray and Professor Fussey are calling for a pause on the implementation or trialling of facial recognition technologies until the impact on and trade-off of privacy rights have been fully understood.

Facial recognition technologies are becoming incredibly useful when it comes to access and authentication, though there need to be some serious conversations about the privacy implications of using the tech in the world of surveillance and policing. At the moment, it seems to be nothing but an afterthought for the police forces and intelligence agencies, an incredibly worrying and dangerous attitude to have.

US Senators want public disclosures on the value of personal data

Two US Senators have suggested an interesting, if currently ill-defined, idea for companies in the digital economy: list the value of user data in their financial statements during earnings season.

Senators Mark Warner and Josh Hawley are reportedly readying themselves to introduce the Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data Act, or DASHBOARD for short. The bill would force companies to disclose to the SEC, once a quarter, the financial value of the data they collect, analyse and action.

Although this might sound like an incredibly wide net to cast, the rules would only apply to companies that generate a material impact on revenues from data and have more than 100 million users. The disclosures would also cover data brought in through relationships with third parties.

“…I think we need debates there and enhanced privacy, but we also need a lot more transparency, because if it defaults then to status prerogatives based on how much data is worth, that may spur another debate,” Warner said on ‘Axios on HBO’ this weekend. “But we don’t know any of that right now.”

That is the big issue Warner is addressing in his prolonged crusade against the tech giants of Silicon Valley: there are still far too many unknowns.

It appears the objective of Warner and Hawley is to create greater understanding of how the digital economy, based on the concept of sharing data, functions. Consumers are seemingly happy to trade away their personal information, but you have to wonder how much of an informed decision this is today.

This is the challenge in addressing a rapidly growing and evolving segment. Not only are we as consumers dealing with challenges for the first time, but so are the regulators and legislators. Rules need to be created which are contextually relevant. Today, the regulatory and legislative landscape is dated, but this looks like one step in the right direction.

Warner and Hawley are seemingly trying to address two issues: first, raising awareness and creating a greater understanding of how much information is collected on individuals; and second, bringing more clarity to how much that data is actually worth.

The second issue is an interesting one, as there does not seem to be much consistency in how the commercial value of data to an organisation is assessed. Some might suggest value is a more nuanced term, with these companies using data sets to improve products, but for others the link is more direct. Facebook, for example, directly monetises user data, with suggestions this is worth in the region of $20 a month per user.

As part of the Bill, the SEC will be instructed to develop models to identify the value of the data. There would be several different models, each accounting for different use cases, business models and the vertical segments in which the businesses operate. This might prove to be a difficult aspect of the Bill, as the SEC would have to go on a recruitment drive to hire people capable of understanding the nuances and complexities of the digital economy.

Of course, with every step made in legislation and regulation, you have to take into account the law of unintended consequences. Once users know the value of their data, they might ask to be compensated for it. This is not what the Bill is intended to do, and the Senators will have to be sure to put concrete protections in place to ensure business models are not undermined.

Although identifying the value of personal information will most likely, and quite rightly, inspire future public debate, lawyers should not be able to hold the companies who monetize personal information to ransom. If users are not happy about the situation, they can close their accounts and ask for personal information to be deleted. You wouldn’t ask for a refund on an umbrella because you found out the manufacturer was making more money than you originally thought using cheaper materials.

Some users might be upset or angered by the fact these companies are making money off their personal information, but they should always remember they are being offered a service for ‘free’. How many people would pay for subscriptions to Facebook or Twitter, or a one-off rate for any of the currently free apps which are being downloaded today? If you remove the commercial incentive for these companies, some (if not the majority) will cease to exist.

And while there should be protections for these companies, the two Senators are perfectly justified in proposing this Bill. The user should be sufficiently educated in the ways and means of the digital economy to make an informed decision before entering into a contract with any service, product or platform, and irrespective of whether it is ‘free’ or not, normal rules should apply. Users need to have all the information available, and this includes the commercial value of their personal data.

Ultimately this is a Bill which is littered with potential pitfalls and hurdles for the digital economy. Warner and Hawley will have to be incredibly careful they do not stall the promising progress of this segment. Transparency and privacy are two ideals which should be enhanced, but this should be done in a way which also encourages businesses to thrive, or at the very least does not inhibit valid operations.

HMD moves Nokia phone user data storage to Finland

HMD Global, the maker of Nokia-branded smartphones, announced that it is moving the storage of user data to Google Cloud servers located in Finland, to ease concerns about data security.

The phone maker announced the move in the context of its new partnership with CGI, a consulting firm that specialises in data collection and analytics, and Google Cloud, which will provide HMD Global with its machine learning technologies. The new models, the Nokia 4.2, Nokia 3.2 and Nokia 2.2, will be the first to have their user data stored in the Google Cloud servers in Hamina, southern Finland. Older models eligible for an upgrade to Android Q will move their storage to Finland at the upgrade, expected to take place from late 2019 to early 2020. HMD Global commits to two years of OS upgrades and three years of security upgrades for its products.

HMD Global claims the move will support its target to be among the first Android OEMs to bring OS updates to users, and will improve its compliance with European security measures and legislation, including GDPR. “We want to remain open and transparent about how we collect and store device activation data and want to ensure people understand why and how it improves their phone experience,” said Juho Sarvikas, HMD Global’s Chief Product Officer. “This change aims to further reinforce our promise to our fans for a pure, secure and up to date Android, with an emphasis on security and privacy through our data servers in Finland.”

Sarvikas denied to the Finnish news outlet Ilta-Sanomat that the move was a direct response to privacy concerns triggered by the controversy earlier this year when Nokia-branded phones sold in Norway were sending activation data to servers in China. At that time HMD Global told Telecoms.com that user data of phones purchased outside of China is stored in AWS servers in Singapore, which, the company said, “follows very strict privacy laws.” However, according to GDPR, to take user data outside of the EU, the company would have had to obtain explicit consent from its EU-based users.

Sarvikas claimed that the latest decision to move storage to Finland has been a year in the making and is part of the company’s overall cloud service vendor swap from Amazon to Google. “Staying true to our Finnish heritage, we’ve decided to partner with CGI and Google Cloud platform for our growing data storage needs and increasing investment in our European home,” Sarvikas added in the press release.

Francisco Jeronimo, Associate VP at IDC, saw this as a positive action by HMD Global, calling it on Twitter a good move “to address concerns about data privacy”.

UK consumers are resigned to poor data security, research finds

New EY research into the UK’s digital households has found over four in ten consumers believe their data will never be fully secure, despite recent regulatory changes including GDPR.

The consulting firm EY has published the security section of its annual survey of UK households about their digital lives. The good news is that the majority of consumers are aware of the new data privacy regulations: close to seven out of ten know of GDPR and “what this means for how their data is stored, managed and used”. The bad news is that confidence in the effectiveness of the legal measures is low. Only 43% of consumers “believe that the changes resulting from GDPR will significantly improve the security of their personal data”. Worse still, an almost equal number (41%) have all but given up, thinking it “impossible to keep their personal data secure when using the internet or internet-enabled devices”.

When it comes to who is trusted to keep personal data secure, broadband providers and utility companies came out on top, winning the trust of 28% and 21% of the households surveyed respectively. At the other end, mobile app developers and social networks fared the worst, trusted by only 2% and 3% of households. Mobile operators and pay-TV providers also came closer to the bottom of the table than the top.

[Figure: EY digital household trust in data security, 2019]

EY thinks at least three lessons can be learned from the findings:

  1. Businesses should put trust at the heart of all the customer interactions;
  2. Businesses should communicate about security with purpose, clarity, and consistency;
  3. Businesses should ensure that their innovation agenda is built on an ethical data management system.

This report is part of the overall “Decoding the digital home” project and was based on a survey of 2,500 UK households.

Enhanced privacy protection is now at the core of Apple

At its 2019 developer conference Apple introduced new measures to strengthen user privacy protection, as a point of differentiation from other big tech companies.

Apple is hosting the 2019 edition of its Worldwide Developers Conference (WWDC) in California. On the first day the company announced a number of new products, including iOS 13, a new version of macOS (called “Catalina”), the first version of iPadOS, and watchOS 6. At the same time iTunes, which has been around for nearly two decades and has been at the vanguard of Apple’s adventure into the music industry, is finally being retired. At the event Apple also unveiled the radically revamped Mac Pro. Instead of looking like a waste basket (as the second generation did), the new top-end desktop computer looks more like a cheese grater.

One key feature that stood out when the new software was introduced was Apple’s focus on privacy, in particular the new “Sign in with Apple”. It will be mandatory for apps which support third-party login to also include this new option, in addition to, or as Apple would prefer, instead of, Facebook and Google. Although Tim Cook claimed in a post-event interview with CBS that “we’re not really taking a shot at anybody”, Craig Federighi, Apple’s software chief, pulled no punches when introducing the feature. After showing the current two options for signing in to apps or websites, he declared Apple wanted to offer a better option, one with “fast, easy sign-in without all the tracking.”

In practice this means Apple will act as a privacy intermediary. A user can log in to an app or a website with his or her Apple ID; Apple will then verify the email address, perform two-factor authentication and send the developer a unique random ID, which Apple asks developers to trust. Users can also choose to use TouchID or FaceID for authentication. The feature works across Apple products (iPhone, iPad, Watch, etc.) and also in browsers on other platforms (Windows, Chrome, etc.).
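Apple did not spell out the plumbing on stage, but the flow it described resembles standard OpenID-style identity tokens: the unique random ID arrives inside a signed token which a developer’s backend can verify against Apple’s published keys. A minimal, hypothetical sketch of that verification (the client ID is invented for illustration):

```python
# Sketch of validating a "Sign in with Apple" identity token server-side,
# assuming an OpenID Connect-style flow. The client ID is hypothetical.
# Requires: pip install pyjwt[crypto] (PyJWT >= 2.0)
import jwt
from jwt import PyJWKClient

APPLE_JWKS_URL = "https://appleid.apple.com/auth/keys"
CLIENT_ID = "com.example.myapp"  # your app's identifier (invented here)

def verify_apple_identity_token(identity_token: str) -> dict:
    """Check the token's signature against Apple's published keys and
    return its claims; claims['sub'] is the stable pseudonymous user ID."""
    signing_key = PyJWKClient(APPLE_JWKS_URL).get_signing_key_from_jwt(identity_token)
    return jwt.decode(
        identity_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=CLIENT_ID,
        issuer="https://appleid.apple.com",
    )
```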

In addition to Sign in with Apple, the company also updated its Maps app, while apps that track users’ location will now need to ask for permission each time tracking is activated. On macOS, all apps will need to request permission to access the user’s files, while Watch users can approve security requests by tapping the button on the side.

Although both Facebook and Google have been talking up their focus on privacy, these companies have an intrinsic conflict of interest: their business models are built on monetising user data. Apple, on the other hand, makes money by selling products and services. It is therefore in Apple’s own interest to guard user privacy as closely as possible, to enhance the trust of current and future consumers. By making privacy protection its differentiator, or as TechCrunch called it, delivering “privacy-as-a-service”, Apple is elevating the game to a level Google, Facebook and other internet companies will be challenged to match.

Tech giants hit back against GCHQ’s ‘Ghost Protocol’

GCHQ’s new proposal, which supposedly increases the security services’ and the police’s ability to keep us safe, has been slammed by the technology industry, which suggests the argument contradicts itself.

In an article for Lawfare, GCHQ’s Technical Director Ian Levy and Head of Cryptanalysis Crispin Robinson presented six principles to guide ethical and transparent eavesdropping, while also suggesting intelligence officers can be ‘cc’d’ into group chats without compromising security or violating the privacy rights of the individuals involved.

The ‘Exceptional Access Debate’ is one way in which GCHQ is attempting to undermine the security and privacy rights offered to consumers by some of the world’s most popular messaging services.

Responding in an open letter, the likes of the Electronic Frontier Foundation, the Center for Democracy & Technology, the Government Accountability Project, Privacy International, Apple, Google, Microsoft and WhatsApp have condemned the proposal.

“We welcome Levy and Robinson’s invitation for an open discussion, and we support the six principles outlined in the piece,” the letter states. “However, we write to express our shared concerns that this particular proposal poses serious threats to cybersecurity and fundamental human rights including privacy and free expression.”

Levy and Robinson suggest that instead of breaking the encryption software which is placed on some of these messaging platforms, the likes of Signal and WhatsApp should place virtual “crocodile clips” onto the conversation, effectively adding a ‘ghost’ spook into the loop. The encryption protections would remain intact and the users would not be made aware of the slippery eavesdropper.

In justifying this proposal, Levy and Robinson claim it is effectively the same practice the telco industry has undertaken for years. In the early days, physical crocodile clips were placed on telephone wires to intercept conversations, which later evolved into simply copying call data. As this is an accepted practice, Levy and Robinson see no issue with the encrypted messaging platforms offering a similar service to the spooks.

However, the coalition of signatories argues there are numerous faults in the argument, both technical and ethical.

On the technical side, the way in which keys are delivered to authenticate and secure a conversation would have to be altered. As it stands, each participant holds a private key, and the corresponding public keys are exchanged between the initiator and recipients of the conversation. These keys are tied to specific individuals and only change when new participants are added to the conversation. To add a government snooper into the conversation covertly, keys would have to be changed or added without notifying the participants.
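To illustrate why, here is a deliberately simplified sketch, not any real messenger’s protocol: a per-message key is fanned out encrypted to each participant’s public key, so adding a covert ‘ghost’ recipient creates an extra key slot which an honest client could surface to its users.

```python
# Simplified model of group E2E encryption and the "ghost" problem.
# Requires: pip install pynacl
import nacl.secret
import nacl.utils
from nacl.public import PrivateKey, SealedBox

participants = {name: PrivateKey.generate() for name in ("alice", "bob")}
ghost = PrivateKey.generate()  # the covert eavesdropper GCHQ proposes

# The sender encrypts the message once with a fresh symmetric key...
message_key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)
ciphertext = nacl.secret.SecretBox(message_key).encrypt(b"meet at noon")

# ...then seals that key to every recipient the server lists.
recipient_keys = [sk.public_key for sk in participants.values()]
recipient_keys.append(ghost.public_key)  # the covert extra slot

key_slots = [SealedBox(pk).encrypt(message_key) for pk in recipient_keys]

# An honest client can spot the discrepancy: three key slots, two members.
# The ghost stays hidden only if the client suppresses this check.
assert len(key_slots) != len(participants)
```

The point of the toy model is that the ‘ghost’ is structurally visible; hiding it requires the client software itself to lie to its users, which is precisely the trust problem the coalition describes.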

Not only would this require changes to the way encryption technologies are designed and implemented, it would also undermine the trust users place in the messaging platform. Levy and Robinson are asking the messaging platforms to suppress any notifications to the participants of the conversation, effectively breaking the trust between the user and the brand.

While GCHQ may think it is presenting a logical and transparent case, one which prioritises the responsible and ethical use of technology, the coalition also argues it contradicts the principles laid out in its own article. Those principles are as follows:

  1. Privacy and security protections are critical to public confidence, therefore authorities would only request access to data in exceptional cases
  2. Law enforcement and intelligence agencies should evolve with technologies and the technology industry should offer these agencies greater insight into product development to help aid this evolution
  3. Law enforcement and intelligence agencies should not expect to be able to gain access to sensitive data every time a request is made
  4. Targeted exceptional access capabilities should not give governments unfettered access to user data
  5. Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users
  6. Transparency is essential

Although the coalition of signatories takes issue with all six points, for us it’s the last two which are the hardest to reconcile with the proposal.

Firstly, if the ‘Ghost Protocol’ is accepted by the industry and implemented, there is no way to avoid undermining or fundamentally changing the trust relationship between the platform and the user. The platform promises a private conversation, without exception, while the GCHQ proposal requires data interception without the knowledge of the participants. These are two contradictory ideas.

“…if users were to learn that their encrypted messaging service intentionally built a functionality to allow for third-party surveillance of their communications, that loss of trust would understandably be widespread and permanent,” the letter states.

The sixth principle is another which is difficult to stomach, as there is absolutely nothing transparent about this proposal. In fact, the open letter points out that under the Investigatory Powers Act, passed in 2016, the UK Government can force technology service providers to hold their tongue through non-disclosure agreements (NDAs). These NDAs could bury any intrusion or interception for decades.

It’s all very cloak and dagger.

Another big issue for the coalition is that of creating intentional vulnerabilities in the encryption software. To meet these demands, providers would have to rewrite software to create the opportunity for snooping. This creates two problems.

Firstly, there are nefarious individuals everywhere, not only in the deep, dark corners of the internet but also working for law enforcement and intelligence agencies; introducing such a vulnerability into the software opens the door for abuse. Secondly, there are individuals capable of hacking into the platforms to find said vulnerability.

At the moment, encryption techniques are incredibly secure because not even those who designed the encryption software can crack them. If you create a vulnerability, the platforms themselves become a hacker target because of it. Finding the backdoor would be the biggest prize in the criminal community, the Holy Grail of the dark web, and considerable rewards would be offered to those who find it. The encrypted messaging platforms could potentially become the biggest hacking target on the planet. No one and no organisation is 100% secure, therefore this is a very real risk.

Beyond these considerations of security vulnerabilities and breaches of user trust, another massive consideration which cannot be ignored is the human right to privacy and freedom of expression.

Will these rights be infringed if users are worried there might be someone snooping on their conversation? The idea creates the fear of a surveillance state, though we will leave it up to the readers as to whether GCHQ has satisfied the requirements to protect user security, freedom of expression and privacy.

For us, if any communications provider is to add law enforcement and intelligence agencies to conversations in such an intrusive manner, there need to be deep and comprehensive guarantees that these principles will be maintained. Here, we do not think there are.

Microsoft starts ruffling privacy feathers in the US

This weekend will mark the one-year anniversary of Europe’s GDPR and Microsoft has made the bold suggestion of bringing the rules over the pond to the US.

Many US businesses would have been protected from the chaos that was the European Union’s General Data Protection Regulation (GDPR), with the rules only impacting those which operated in Europe. And while there are benefits to privacy and data protection rights for consumers, that will come as little compensation for those who had to protect themselves from the weighty fines attached to non-compliance.

Voicing what could turn out to be a very unpopular opinion, Microsoft has suggested the US should introduce its own version.

“A lot has happened on the global privacy front since GDPR went into force,” said Julie Brill, Deputy General Counsel at Microsoft. “Overall, companies that collect and process personal information for people living in the EU have adapted, putting new systems and processes in place to ensure that individuals understand what data is collected about them and can correct it if it is inaccurate and delete it or move it somewhere else if they choose.

“This has improved how companies handle their customers’ personal data. And it has inspired a global movement that has seen countries around the world adopt new privacy laws that are modelled on GDPR.

“Now it is time for Congress to take inspiration from the rest of the world and enact federal legislation that extends the privacy protections in GDPR to citizens in the United States.”

The rules themselves were first introduced in an attempt to force companies to be more responsible and transparent in how customer data is handled. The update reflected the new sharing economies the world had sleepwalked into; the new status quo had come under criticism and new protections had to be put in place while also offering more control to the consumer of their personal data.

GDPR arrived with little fanfare after many businesses scurried around in the weeks prior, despite having had more than two years’ notice. And while these regulations were designed for the European market, such is the open nature of the internet that the impact was felt worldwide.

While this might sound negative, GDPR has proved to be an inspiration for numerous other countries and regions. Brazil, Japan, South Korea and India were just a few of the nations which saw the benefit of the rules, and now it appears there are calls for the same position to be adopted in the US.

As Brill points out in the blog post stating Microsoft’s position, California has already taken steps towards creating a more privacy-focused society. The California Consumer Privacy Act (CCPA) will go into effect on January 1, 2020. Inspired by GDPR, the new law will give California residents the right to know what personal information is being collected about them, to know whether it is being sold or monetised, to say no to monetisation and to access all of the data.

This is only one example, though there are numerous states around the US, primarily Democrat-leaning, with similar pro-privacy attitudes to California. However, this is a law which stops short of the strictness of GDPR. Companies are not put on a stopwatch to report a breach, as they are under GDPR’s 72-hour notification rule, while the language around punishment for non-compliance is very vague.

This is perhaps the issue Microsoft will face in attempting to escalate such rules up to federal law; the only attempt we have seen so far in the US is a diluted version of GDPR. Whereas GDPR is a sharp stick for the regulators to swing (a fine of up to 4% of annual turnover certainly encourages compliance), the Californian approach is more like a tickling feather; it might irritate a little bit.

At the moment, US privacy laws are nothing more than ripples in the technology pond. If GDPR-style rules were to be introduced in the US, the impact would be significant. GDPR has already shifted the privacy conversation and had notable impacts on the way businesses operate. Google, for example, has introduced an auto-delete function for users, while Facebook’s entire business rhetoric has become much more privacy-focused. It is having a fundamental impact on business.

We are not too sure whether Microsoft’s call will have any material impact on government thinking right now, but privacy laws in the US (and everywhere else for that matter) are going to need to be brought up to date. With artificial intelligence, personalisation, big data, facial recognition and predictive analytics all gaining traction, the role of personal data and privacy is going to become much more significant.

Apple recognised as ‘Privacy Champion’ by techies

An anonymous survey of people working in the technology industry has crowned Apple the privacy champion of FANG, while 78% of respondents believe data protection is a top priority at their own organisation.

The survey was run by Blind, an anonymous social network for the workplace, which has a userbase in the hundreds of thousands, many of whom work at the world’s largest technology companies. When employees were asked whether they believed their own organisation prioritised user privacy, the results might have shocked a few.

Employees of technology companies were given a simple statement and offered the opportunity to add an explanation. The statement was “My company believes customer data protection is a top priority”.

Sitting at the top of the table was Apple, with 73.6% of respondents strongly agreeing with the statement and a further 19.8% agreeing. LinkedIn and Salesforce also featured highly on the list, while Google and Amazon were above the industry average. Facebook was below the industry average, while Adobe, Intuit and SAP fell way below it, with only 44.6%, 40% and 39% respectively stating they strongly agree with the statement.

Such low numbers should be a major concern, especially with lawmakers and regulators attempting to reconfigure rules to take a stronger stance on data privacy. Irrespective of whether the likes of Apple take privacy seriously, rules will be written for the industry as a whole; the laggards will ensure everyone has to face the sharp stick of the law.

On the FANG front, Blind users were asked whether Apple should be considered the privacy champion. 67.9% agreed, with some suggesting that because its business model is not based on the transfer of personal information it is more secure, or less of a threat. That said, Apple is fast evolving, with the software and services business becoming more of a focus; it might well come to include some of these practices in the future.

That said, while Apple is seemingly keeping its hands clean, one person feels the company is nothing more than an enabler for the more nefarious.

“I feel Apple is no better for creating the technology that enables companies like Facebook to become no more than spying tools,” said one Intuit employee.

Although scores in the 70s could be viewed as positive, it also means that 20-30% of an organisation’s own employees do not believe the privacy rhetoric being reeled off in the press by executives of the tech giants. If a company is unable to create an internal belief in privacy, that should be viewed as a worrying sign.