Silicon Valley has often pushed the boundaries in pursuit of progress, but it deserves everything it gets if it continues to try the patience of consumers and regulators over privacy.
‘It is easier to ask forgiveness than to beg for permission’ is a common, if largely unattributable, phrase which seems to describe the ongoing position of Silicon Valley very well. It is certainly easier to act and face the consequences later, but that does not make it right, and it should not be allowed. This is the approach the internet giants are taking on a weekly basis, and someone will have to find the stomach and muscle to stop this abuse of power, influence and trust.
The most recent chapter in this ongoing tale of deceit and betrayal concerns the voice assistants which are becoming increasingly popular with consumers around the world.
Apple is the latest company to test the will of the general public, having now officially ended an internal process known as ‘grading’. In short, humans listened to Siri interactions with customers, transcribing them in certain cases, to help improve the accuracy of the digital assistant.
“We know that customers have been concerned by recent reports of people listening to audio Siri recordings as part of our Siri quality evaluation process — which we call grading,” Apple said in a blog entry. “We heard their concerns, immediately suspended human grading of Siri requests and began a thorough review of our practices and policies. We’ve decided to make some changes to Siri as a result.”
Of course, it is perfectly reasonable for Apple to want to improve the performance of Siri, but it must ask for permission first. That is the vital step Apple decided to leave out.
The new process will seek consent from users through an ‘opt-in’ system, bringing it into compliance, while the default position for all Siri interactions will be to store no information. For those consumers who do opt in to help Apple train Siri, the audio will only be transcribed and reviewed by permanent Apple employees.
This process should have been in place before the ‘grading’ system was implemented. It is inconceivable that Apple did not realise the practice would break privacy regulations or breach the trust customers had placed in it. It decided not to tell consumers or the authorities the practice existed. It muddied the waters to hide it. And it lied to users when it said it respected their privacy principles and rights.
Apple acted irresponsibly, unethically and underhandedly. It is scarcely plausible that it did so without knowledge and understanding of the potential impact of these actions. And if it genuinely did not understand how or why this practice violated privacy principles or regulations, there must be an epidemic of incompetence spreading through the ranks at Cupertino.
It is worth noting that Apple is not alone; Google and Facebook are just as guilty of misleading or lying to users, breaking the trust which has been offered to these undeserving companies.
Google is currently under investigation for the same abuse of trust and privacy principles, in this case over the Google Assistant.
“We have made it clear to Google’s representatives that essential requirements for the operation of the Google Assistant are currently not fulfilled,” said Johannes Caspar, Hamburg Commissioner for Data Protection and Freedom of Information. “This not only applies to the practice of transcribing, but to the overall processing of audio data generated by the operation of the language assistance system.”
The investigation by the Hamburg data protection authority has pressured Google into changing the way it trains its digital assistant. Earlier this month, Belgian news outlet VRT NWS revealed that 0.2% of conversations with Google Assistant were being listened to by external contractors. At least one audio clip leaked to the news outlet included a couple’s address and personal information about their family.
Google has now said it has stopped the practice in the EU, but not necessarily elsewhere, and the Hamburg DPA has said it will have to seek permission from users before beginning anything remotely similar.
The same regulator has also dragged Facebook into the drama.
“In a special way, this also applies to Facebook Inc., where, as part of efforts to improve the transcription function offered in Facebook Messenger, a planned manual evaluation covered not only human-to-machine communication but also human-to-human communication,” said Caspar. “This is currently the subject of a separate investigation.”
Two weeks ago, reports emerged that Facebook had hired external contractors to transcribe audio from calls made across the Messenger platform. Once again, users were not informed and consent was not obtained, but what makes this incident even worse is that there does not appear to be any logical reason for Facebook to need this data.
The only reason we can see for Facebook wanting this data is to feed the insight into its big-data, hyper-targeted advertising machine. That, however, is a massive no-no and a significant (and illegal) breach of trust.
All of these examples focus on the transcription of audio data, though there are many other instances of privacy violations, and they demonstrate the ‘easier to ask forgiveness than permission’ attitude which has engulfed Silicon Valley.
We cannot believe these companies failed to comprehend that these actions and practices were a breach of trust and potentially a breach of privacy rules. These companies are run by incredibly smart and competent people. Recruitment drives are intense, offices and benefits are luxurious, and salaries are sky-high for a very good reason: Silicon Valley wants to attract the best and brightest talent around.
And it works. The likes of Google, Facebook and Apple have the most innovative engineers, data scientists who can see the wood for the trees, the savviest businesspeople, accountants who are hide-and-seek champions and the slipperiest lawyers. They consider and contemplate every potential gain and consequence of any initiative. There is no conceivable explanation for why these incredibly intelligent people did not recognise these initiatives were misleading, opaque or non-compliant.
The days of appearing before a committee, cap in hand, begging for forgiveness with a promise that it will never happen again cannot be allowed to continue. The judges, politicians and consumers who believe these privacy violations are accidental are either incredibly naïve, absurdly short-sighted, woefully ill-informed or, quite frankly, moronic.
Silicon Valley must be forced to act responsibly and ethically, because it clearly won’t do so on its own.