Microsoft starts ruffling privacy feathers in the US

This weekend will mark the one-year anniversary of Europe’s GDPR and Microsoft has made the bold suggestion of bringing the rules over the pond to the US.

Many US businesses were shielded from the upheaval of the European Union’s General Data Protection Regulation (GDPR), as the rules only impact those operating in Europe. And while the privacy and data protection rights benefit consumers, that came as little consolation to the companies that had to protect themselves from the weighty fines attached to non-compliance.

Voicing what could turn out to be a very unpopular opinion, Microsoft has suggested the US should introduce its own version.

“A lot has happened on the global privacy front since GDPR went into force,” said Julie Brill, Deputy General Counsel at Microsoft. “Overall, companies that collect and process personal information for people living in the EU have adapted, putting new systems and processes in place to ensure that individuals understand what data is collected about them and can correct it if it is inaccurate and delete it or move it somewhere else if they choose.

“This has improved how companies handle their customers’ personal data. And it has inspired a global movement that has seen countries around the world adopt new privacy laws that are modelled on GDPR.

“Now it is time for Congress to take inspiration from the rest of the world and enact federal legislation that extends the privacy protections in GDPR to citizens in the United States.”

The rules were first introduced in an attempt to force companies to be more responsible and transparent in how they handle customer data. The update reflected the new sharing economies the world had sleepwalked into; the status quo had come under criticism, and new protections had to be put in place while also offering consumers more control over their personal data.

GDPR arrived with surprisingly little fanfare, though many businesses had scurried around in the weeks prior despite having roughly two years’ notice. And while the regulations were designed for the European market, such is the open nature of the internet that the impact was felt worldwide.

While this might sound negative, GDPR has proved to be an inspiration for numerous other countries and regions. Brazil, Japan, South Korea and India were just a few of the nations which saw the benefit of the rules, and now it appears there are calls for the same position to be adopted in the US.

As Brill points out in the blog post setting out Microsoft’s position, California has already taken steps towards a more privacy-focused society. The California Consumer Privacy Act (CCPA) will go into effect on January 1, 2020. Inspired by GDPR, the new law will give California residents the right to know what personal information is being collected about them, know whether it is being sold or monetized, say no to monetization, and access all the data.

This is only one example, though there are numerous states around the US, primarily Democrat-leaning, with similar pro-privacy attitudes to California. However, the law stops short of GDPR’s strictness. Companies are not on the clock to report a breach, as they are under GDPR with its 72-hour notification window, while the language around punishment for non-compliance is very vague.

This is perhaps the issue Microsoft will face in attempting to escalate such rules to federal law; the only attempt we have seen so far in the US is a diluted version of GDPR. Whereas GDPR is a sharp stick for regulators to swing (a fine of up to 4% of annual global turnover certainly encourages compliance), the Californian approach is more like a tickling feather; it might irritate a little.
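To put the difference in scale into numbers: GDPR caps administrative fines at €20 million or 4% of annual global turnover, whichever is higher. A quick illustrative calculation (the turnover figure below is hypothetical, not any real company’s):

```python
def gdpr_max_fine(annual_turnover_eur):
    """Upper bound of a GDPR fine: the greater of EUR 20m
    or 4% of annual global turnover."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A hypothetical company turning over EUR 5bn a year faces
# a ceiling of EUR 200m, not a per-violation trickle.
print(f"Max GDPR fine: EUR {gdpr_max_fine(5_000_000_000):,.0f}")
```

The €20 million floor matters too: for smaller firms, 4% of turnover would be less than €20 million, so the fixed figure applies instead.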

At the moment, US privacy laws are nothing more than ripples in the technology pond. If GDPR-style rules were introduced in the US, the impact would be significant. GDPR has already shifted the privacy conversation and had notable impacts on the way businesses operate. Google, for example, has introduced an auto-delete function for users, while Facebook’s entire business rhetoric has become much more privacy-focused. It is having a fundamental impact on business.

We are not too sure whether Microsoft’s call is going to have any material impact on government thinking right now, but privacy laws in the US (and everywhere for that matter) are going to need to be brought up-to-date. With artificial intelligence, personalisation, big data, facial recognition and predictive analytics technologies all gaining traction, the role of personal data and privacy is going to become much more significant.

Apple recognised as ‘Privacy Champion’ by techies

An anonymous survey of people working in the technology industry has crowned Apple as the privacy champion of FANG, while 78% believe it is a top priority at their own organization.

The survey was run by Blind, an anonymous social network for the workplace, which has a userbase in the hundreds of thousands, many of whom work at the world’s largest technology companies. Users were asked whether they believed their own organization prioritised user privacy, and the results might shock a few.

Employees of technology companies were given a simple statement and offered the opportunity to add an explanation. The statement was “My company believes customer data protection is a top priority”.

Sitting at the top of the table was Apple, with 73.6% of respondents strongly agreeing and a further 19.8% agreeing with the statement. LinkedIn and Salesforce also featured highly on the list, while Google and Amazon were above the industry average. Facebook was below the industry average, while Adobe, Intuit and SAP fell well below it, with only 44.6%, 40% and 39% respectively strongly agreeing with the statement.

Such low numbers should be a major concern, especially with lawmakers and regulators attempting to reconfigure rules to take a stronger tone on data privacy. Regardless of whether the likes of Apple are taking privacy seriously, rules will be written for the industry as a whole; the laggards will ensure everyone has to face the sharp stick of the law.

On the FANG front, Blind users were asked whether Apple should be considered the privacy champion. 67.9% agreed, with some suggesting that since Apple’s business model is not based on the transfer of personal information, it is more secure, or at least less of a threat. That said, Apple is evolving fast, with the software and services business becoming more of a focus; it might well adopt some of these practices in the future.

That said, while Apple is seemingly keeping its hands clean, one respondent feels the company is nothing more than an enabler for more nefarious players.

“I feel Apple is no better for creating the technology that enables companies like Facebook to become no more than spying tools,” said one Intuit employee.

Although scores in the 70s could be viewed as positive, it means 20-30% of an organization’s own employees do not believe the privacy rhetoric reeled off in the press by tech-giant executives. If a company is unable to create an internal belief in privacy, that should be viewed as a worrying sign.

The private power of the edge

One of the conundrums quietly emerging over the last couple of months is how to maintain privacy while attempting to improve customer experience, but the power of the edge might save the day.

If telcos want to improve customer experience, data needs to be collected and analysed. This might sound like an obvious statement, but the growing privacy movement across the world, and the potential for new regulatory restraints, might make it more difficult.

This is where the edge could play a more significant role. One of the more prominent discussions from Mobile World Congress in Barcelona this year was the role of the edge, and it does appear this conversation has continued through to Light Reading’s Big 5G Event in Denver.

Some might say artificial intelligence and data analytics are solutions looking for a problem, but in this instance there is a very real issue to address. Improving customer experience through analytics will only be successful if implemented quickly, some might suggest in real time; therefore the models used to improve performance should be hosted on the edge. This is an example of where the latency business model can directly impact operations.

It also addresses a few other issues. Firstly, there is the cost of sending data back to a central data centre. As was pointed out today, telcos cannot afford to send all customer data back to be analysed; it is simply an unreasonable quantity. Therefore, the more insight that can be actioned on the edge, with only the genuinely important insight being sent back to train models, the more palatable customer experience management becomes.

Secondly, the privacy issue is partly addressed. The more that is actioned on the edge, as close to the customer as possible, the fewer the concerns of the privacy advocates. Yes, data is still being collected, analysed and (potentially) actioned upon, but the sooner the insight is realised, the sooner the underlying data can be deleted.
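A rough sketch of this edge pattern (the metric names and threshold here are illustrative, not any telco’s actual pipeline): raw per-customer measurements are reduced to a single aggregate at the edge, the raw data is discarded as soon as the insight is extracted, and only summaries worth acting on are flagged for forwarding to the central data centre.

```python
from statistics import mean

def process_at_edge(raw_samples, alert_threshold=100.0):
    """Reduce raw measurements to one aggregate insight at the edge,
    deleting the raw data as soon as the insight is realised."""
    insight = {"avg_latency_ms": mean(raw_samples), "n": len(raw_samples)}
    raw_samples.clear()  # raw data deleted immediately after aggregation

    # Only genuinely important summaries go back to train central models
    forward = insight["avg_latency_ms"] > alert_threshold
    return insight, forward

samples = [85.0, 92.0, 130.0, 110.0]
insight, send_upstream = process_at_edge(samples)
print(insight, send_upstream)  # raw samples list is now empty
```

The privacy benefit is in what never leaves the edge node: the central data centre only ever sees the aggregate, never the per-customer raw data.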

There are still sceptics when it comes to the edge, the latency business case, artificial intelligence and data analytics, but slowly more cases are starting to emerge to add credibility.

Google introduces auto-delete

Privacy is proving to be one of the defining themes of 2019, and Google’s latest move should perhaps be considered an industry standard.

Starting with its Search and Maps products, Google will introduce an auto-delete option for users in the privacy settings. While users will be able to continue to manually delete location and search data held by the internet giant, a new option will soon be available which will automatically delete data after three or 18 months.
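Under the hood, an auto-delete policy of this kind amounts to a periodic purge of records older than the chosen retention window. A minimal sketch follows; the record structure and the 3/18-month options mirror the description above, and nothing here reflects Google’s actual implementation:

```python
from datetime import datetime, timedelta

# The two retention windows described in the announcement
RETENTION_OPTIONS = {
    "3 months": timedelta(days=90),
    "18 months": timedelta(days=548),
}

def purge_expired(records, retention, now=None):
    """Keep only records newer than the retention window."""
    now = now or datetime.utcnow()
    return [r for r in records if now - r["timestamp"] <= retention]

now = datetime(2019, 5, 1)
records = [
    {"query": "coffee near me", "timestamp": datetime(2019, 4, 1)},
    {"query": "old search", "timestamp": datetime(2018, 1, 1)},
]
kept = purge_expired(records, RETENTION_OPTIONS["3 months"], now=now)
print([r["query"] for r in kept])  # only the recent record survives
```

Run on a schedule, a purge like this means data older than the window simply stops existing, rather than waiting for the user to delete it manually.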

“You should always be able to manage your data in a way that works best for you and we’re committed to giving you the best controls to make that happen,” the team said in a blog post.

This is certainly an interesting approach, which could satisfy numerous concerns from all corners of digital society.

Firstly, for the privacy-conscious, Google is offering different options for users to regain control of their personal data. The idea looks simple enough, and relatively transparent. Sceptics will be hunting for a loophole, and quite rightly so; the technology industry has lost its claim to credibility when it comes to privacy matters.

Secondly, retaining data for a short period ensures Google’s products can work better. Although popular opinion is turning against hyper-scale personalisation (the advertising machines are making personalisation a dirty word), it is what makes Google’s search engine and mapping products so successful. If Google wasn’t able to train these products to be individualised, they would be pretty generic and awful.

Finally, it still affords Google the opportunity to make money. Privacy concerns aside for the moment, Google has to be given the opportunity to make money, otherwise the products we have become so reliant on over the last decade will cease to exist. Google is not a registered charity; if it isn’t making money, it will no longer be around.

Perhaps the most important factor in this update is its reasonableness. Google is offering terms for the value exchange: it is placing a time limit on its ability to make money from personal data in exchange for offering free services. Admittedly, time constraints are supposedly included in GDPR, though such is the complex and confusing nature of the rules that there are plenty of loopholes and grey areas to exploit.

As far as we’re concerned, this is a good move for Google and the digital society on the whole. Yes, Google is perhaps making the best of a difficult situation, claiming PR points by appearing to voluntarily promote privacy in the face of regulatory reform, but it would be nice to see such approaches as industry standard.

On the surface, it’s reasonable, transparent and fulfils the promise of the digital economy, where Google offers services in exchange for data. Here’s hoping more of the internet giants follow suit.

Apple is facing complaints from developers for removing competing apps

Apps that help users control screen time have been removed from the App Store, or ordered to curtail their features, after Apple rolled out similar functionality of its own.

Many app makers claim their parental control and screen time alert apps were either removed by Apple or forced to change their features shortly after Apple rolled out similar features in iOS, The New York Times reported. Eleven of the 17 most downloaded apps in this category have been taken down, according to research by the app analytics firm Sensor Tower and the NYT.

Apple announced screen time control tools when iOS 12 was unveiled at WWDC in June last year, and integrated them into the Settings menu when the new OS officially launched. They enable parents to control how much time their kids can spend on iPhones and iPads, and alert users to the time they spend on their iOS devices. But they are not as feature-rich as some specialised third-party apps, the developers told the NYT. Nor were they terribly robust: only a few days after the new iOS was released to the public, many kids had already found ways to bypass the controls, according to parents who shared their experiences on Reddit.

Apple’s official response claimed that these apps were removed to help “protect our children from technologies that could be used to violate their privacy and security.” Its spokesperson also denied that the apps were removed for competition reasons, saying, “we treat all apps the same, including those that compete with our own services.”

However, both the timing and the reasons given by Apple will raise some eyebrows. While its defence of limiting device management features to enterprise use is plausible, as Phil Schiller, Apple’s SVP of Worldwide Marketing, detailed in a response to MacRumors, developers told the newspaper that some key features which have been in place for years, and were repeatedly approved by Apple, are now being ordered removed. For example, these apps support device-level blocking of certain content, while Apple’s tool only blocks content inside the Safari browser.

At least three of the app developers, Kidslox, Qustodio and Kaspersky Lab, have filed complaints with the EU’s competition authorities.

It is unlikely that Apple purged the competing apps for revenue reasons. On one hand, Apple does not directly earn revenue from its own screen time tools, which are included in the phone price. On the other, by taking down these apps Apple loses its 30% share of the payments those apps receive. A more plausible trigger for Apple’s action is that these apps work cross-platform, meaning parents on iPhone can control their kids’ screen time on Android devices. It is not entirely out of the question that Apple may be using feeble excuses to lock in as many users as possible.

This is another example of Apple taking its role as platform owner and curator of apps too far, inadvertently lending support to the rhetoric of Elizabeth Warren, the Democratic presidential candidate for 2020, who said, without naming Apple: “either they run the platform or they play in the store. They don’t get to do both at the same time.” These complaints also echo Spotify’s accusation that Apple is being both referee and player.

Apple boss wants more state intervention in tech business

Tim Cook, CEO of the world’s largest tech company Apple, has once more called for greater regulation of the sector.

Speaking at an event organised by Time magazine, Cook said “We all have to be intellectually honest, and we have to admit that what we’re doing isn’t working. Technology needs to be regulated. There are now too many examples where no rails have resulted in great damage to society.”

Now it must be stressed that Cook was referring to privacy and data protection, which happen to be far greater concerns for Apple competitors such as Google, Facebook and Amazon than Apple itself. On the matter of gadgets he was much less strident, noting only that it isn’t Apple’s aim for people to be glued to their devices all the time, which could be interpreted as another dig at its internet competitors.

Cook seems to consider himself a deeply moral person, saying things like “I’m not sure this is the right thing but I focus on what’s right,” and “At the end of the day we’ll be judged more by did we stand up for what we believed in, not necessarily do they agree with me on everything.” On this basis he seems to reconcile himself to the growing dependence on devices such as those sold by his company by blaming it on the services rather than the devices themselves.

Having called for greater state intervention in the activities of his competitors Cook was quick to stress he doesn’t think companies should get directly involved in politics. “Apple is probably one of the only large companies that doesn’t have a PAC (political action committee). I refuse to have one because it shouldn’t exist. I think the people that should be able to donate are people that can vote.” Those are good points but, as his previous points indicate, there are many ways for tech companies to behave politically.

 

Turns out real people sometimes hear what you say to smart speakers

The revelation that Amazon employs people to listen to voice recordings captured from its Echo devices has apparently surprised some people.

The scoop comes courtesy of Bloomberg and seems to have caught the public imagination, as it has been featured prominently by mainstream publications such as the Guardian and BBC News. Apparently Amazon employs thousands of people globally to help improve the voice recognition and general helpfulness of its smart speakers. That means they have to listen to real exchanges sometimes.

That’s it. Nothing more to see here folks. One extra bit of spice was added by the detail that sometimes workers use internal chatrooms to share funny audio files such as people singing in the shower. On a more serious note some of them reckon they’ve heard crimes being committed but were told it’s not their job to interfere.

Amazon sent Bloomberg a fairly generic response amounting to a justification of the necessity of human involvement in the AI and voice recognition process but stressing that nothing’s more important to it than privacy.

Bloomberg’s main issue seems to be that Amazon doesn’t make it explicit enough that another person may be able to listen in on your private stuff through an Echo device. Surely anyone who knowingly installs and turns on a device that is explicitly designed to listen to your voice at all times must be at least dimly aware that there may be someone else on the other end of the line; but even if they’re not, it’s not obvious how explicit Amazon needs to be.

An underlying fact of life in the artificial intelligence era is that the development of AI relies on the input of as much ‘real life’ material as possible. Only by experiencing loads of real interactions and scenarios can a machine learn to mimic and participate in them. In case there is any remaining doubt: if you introduce a device into your house that is designed to listen at all times, that is exactly what it will do.

FTC launches investigation into privacy practices in US

The Federal Trade Commission (FTC) has issued orders to seven US broadband providers seeking non-public information to assess privacy practices.

Although this investigation is relatively broad, this might be another attempt from the US Government to get a handle on the privacy practices of the fast-evolving digital economy. Several scandals over the last 18 months have demonstrated current rules are not fit for purpose, containing too many loopholes and inadequately governing an industry which has progressed beyond the reach of bureaucracy.

The FTC has been under pressure in recent months to get a better handle on the data machines which power the digital economy, bringing in billions for the likes of Amazon and Google, but increasingly the telcos. While many fingers have been pointed at the residents of Silicon Valley, the telcos have been making money through the transfer of personal information also.

This investigation is an important step forward in creating a better understanding of the data and sharing economy, a foundation for resilient and future-proof regulations. Some might suggest this sort of investigation should have happened years ago, but hindsight is always 20/20; who would have predicted the scale of the scandals we have witnessed recently?

AT&T, AT&T Mobility, Comcast Cable Communications, Google Fiber, T-Mobile US, Verizon, and Cellco Partnership are the firms which have received the demands.

As part of the investigation, the FTC is requesting:

  • The categories of personal information collected about consumers or their devices
  • Purpose of collecting data for each of the categories
  • Methods of collecting the data
  • Policies for employees to access this data
  • Retention policies
  • What information is transferred to third-parties
  • How the information is aggregated, anonymized or deidentified
  • Disclosures to customers about data collection and transfer to third-parties
  • What choices are offered to the customer
  • How accessible personal data is to the customer

As you can see, this is an incredibly broad and in-depth request, with a lot of the information being non-public. Many of the telcos who have been sent the orders will be uncomfortable releasing this information, though they’ll have no choice.

Although this is a good first step for the FTC, we would hope the investigation is broadened further in the future. More information and insight needs to be collected from the OTTs, the masters of manipulating the data-sharing economy. The telcos are small fish in this expedition, but it is progress.

All eyes from the data-sharing community will be keenly directed towards the FTC over the next couple of months. While this investigation is nothing more than a virtual pebble dropped into the digital pond for the moment, there is the potential for those ripples to grow into waves. This could be the first step towards major regulatory reform, an overdue revolution to gain a better handle on the wild-west internet economy.

Nick Clegg defends Facebook’s business model from EU’s privacy regulation

Facebook’s head of PR reportedly held a series of meetings with EU and UK officials aiming to safeguard the social network’s business model, which relies heavily on targeted advertising.

Sir Nick Clegg, the former UK Deputy Prime Minister, now Facebook’s VP for Global Affairs and Communications, met three EU commissioners during the World Economic Forum in Davos and shortly after the event in Brussels, according to a report by the Telegraph. These commissioners’ portfolios include Digital Single Market (Andrus Ansip), Justice, Consumers and Gender Equality (Věra Jourová), and Research, Science and Innovation (Carlos Moedas). Clegg’s mission, according to the Telegraph report, was to present Facebook’s case to defend its ads-based business model in the face of new EU legislation related to consumer privacy.

According to minutes of the Ansip meeting, seen by the Telegraph, “Nick Clegg stated as main Facebook’s concern the fact that the said rules are considered to call into question the Facebook business model, which should not be ‘outlawed’ (e.g. Facebook would like to measure the effectiveness of its ads, which requires data processing). He stated that the General Data Protection Regulation is more flexible (by providing more grounds for processing).”

In response, Ansip defended the proposed ePrivacy Regulation as a complement to GDPR, primarily concerned with protecting the confidentiality of consumers’ communications. In addition, the ePrivacy Regulation will be more up to date, and will provide more clarity and certainty, than the current ePrivacy Directive, which originated in 2002 and was last updated in 2009. Member states could interpret and implement the current Directive more restrictively, Ansip warned.

Facebook’s current security setup makes it possible to access users’ communications and to target them with advertisements based on those communications. Under the proposed Regulation, platforms like Facebook would need explicit consent from account holders to access the content of their communications, whether for serving advertisements or measuring their effectiveness.

There are two issues with Facebook’s case. The first, as Ansip put it, is that companies like Facebook would still be able to monetise data after obtaining users’ consent. They just need to do it in a way more respectful of users’ privacy, which 92% of EU consumers consider important, according to the findings of Eurobarometer, a bi-annual EU-wide survey.

The other is Facebook’s own strategy, recently announced by Zuckerberg. The new plan, with its WhatsApp-like end-to-end encryption, will make it impossible for Facebook to read users’ private communications. This means that even if consumers are asked for and do grant consent, Facebook will not in future be able to access the content for targeted advertising. Zuckerberg repeatedly talked about trade-offs in his message; this would be one of them.

On the other hand, last November the EU member states’ telecom ministers agreed to delay the vote on the ePrivacy Regulation, which means it is highly unlikely the bill will be passed and come into effect before the next European Parliament election in May.

The office of Jeremy Wright, the UK’s Secretary of State for Digital, Culture, Media and Sport, did not release much detail related to the meeting with Clegg, other than claiming “We are at a crucial stage in the formulation of our internet safety strategy and as a result we are engaging with many stakeholders to discuss issues pertinent to the policy. This includes discussions with social media companies such as Facebook. It is in these crucial times that ministers, officials and external parties need space in which to develop their thinking and explore different options in a free and frank manner.”

The Telegraph believed Clegg’s objective was to minimise Facebook’s exposure to risks from the impending government proposals that could “place social media firms under a statutory duty of care, which could see them fined or prosecuted” if they fail to protect users, especially children, from online harms.

It is also highly conceivable that the meeting with UK officials was intended to influence the post-Brexit regulatory setup in the country, when it will no longer be governed by EU laws. Facebook may want its voice heard before the UK starts to write its own privacy and online regulations.

Zuckerberg’s vision for Facebook: as privacy-focused as WhatsApp

The Facebook founder laid out his plan for how Facebook will evolve with a focus on privacy and data security, and promised more openness and transparency during the transition.

In a long post published on Facebook, Mark Zuckerberg first acknowledged that, going forward, users may prefer private communication to socialising publicly. He used the analogy of town squares versus living rooms. To facilitate this, he aims to use WhatsApp’s technologies as the foundation on which to build the Facebook ecosystem.

Zuckerberg laid out principles for the next steps, including:

  • Private interactions: largely about users’ control over who they communicate with, safeguarded by measures like group size controls and limits on public stories being shared.
  • End-to-end encryption: encrypting messages going through Facebook’s platforms. An interesting point here is that Zuckerberg admitted Facebook’s security systems can currently read the content of users’ messages sent over Messenger. WhatsApp already implements end-to-end encryption and does not store encryption keys, which makes it practically impossible to share the content of communications between individuals with any third parties, including the authorities. Zuckerberg recalled the jailing of Facebook’s VP for Latin America in Brazil to illustrate his point.
  • Reducing permanence: mainly about giving users the choice of how long they would like their content (messages, photos, videos, etc.) to be stored, to ensure what they said many years ago does not come back to haunt them.
  • Safety: Facebook will guard data against malicious attacks.
  • Interoperability: Facebook aims to make its platforms interoperable with each other, and perhaps with SMS too.
  • Secure data storage: most importantly, Zuckerberg vowed not to store user data in countries which “have a track record of violating human rights like privacy or freedom of expression”.

To do all these right, Zuckerberg promised, Facebook is committed to “consulting with experts, advocates, industry partners, and governments — including law enforcement and regulators”.

None of these principles are new or surprising; they are an understandable reaction to recent history, in which Facebook has been battered by scandals over both data leaks and the misuse of private data for monetisation purposes. However, there are a couple of questions left unanswered:

  1. What changes does Facebook need to make to its business model? In other words, when Facebook limits its own ability to mine user data, it weakens its value to targeted advertisers. How will it convince investors this is the right step to take, and how will it compensate for the loss?
  2. Is Facebook finally giving up its plan to re-enter markets like China? Zuckerberg has huffed and puffed in recent years without bringing down the Great Wall. While his peers at Apple have happily handed over the keys to iCloud, and Google has been working hard, secretly or not so secretly, to re-enter China, how will the capital market react to Facebook’s public statement that “there’s an important difference between providing a service in a country and storing people’s data there”?