Internet pioneer Tim Berners-Lee is on a hiring spree

Inrupt, the disruptive internet start-up founded by Tim Berners-Lee, has announced it is expanding its operational team as it pursues the redistribution of power in the internet era.

After inventing the world wide web in 1989, Berners-Lee has been on something of a crusade in recent years, heavily criticising the corporate nature of an invention which was intended to empower society. Inrupt is Berners-Lee’s answer to this unsatisfactory position.

“I’ve always believed the web is for everyone,” Berners-Lee said when launching the business. “That’s why I and others fight fiercely to protect it.

“The changes we’ve managed to bring have created a better and more connected world. But for all the good we’ve achieved, the web has evolved into an engine of inequity and division; swayed by powerful forces who use it for their own agendas.”

With the creation of an open-source project known as Solid, the Inrupt team hopes to give users a choice about where their data is stored, who can access it and which applications are used. The objective is to give the user the defining voice in how data is used and, in turn, to erode the power and influence of the corporations who have benefitted so greatly from the rise of the internet.

With Inrupt providing commercial energy and an ecosystem to help develop Solid, Berners-Lee has now announced a string of new hires to help drive the company forward.

Bruce Schneier has joined as Chief of Security Architecture, while Davi Ottenheimer has been appointed as VP of Trust and Digital Ethics. Osmar Olivo and Emmet Townsend will act as the VPs of Product and Engineering respectively, adding significant weight to the operational team.

“Joining Inrupt is one of those rare opportunities to build something that will change the everyday lives of billions of people,” said Olivo. “The world is changing, and existing technologies aren’t designed to solve these kinds of problems. Everyone else is retrofitting for a safer world, Inrupt is building one.”

While the objectives of Inrupt might be considered aggressive by some in the industry, there is certainly some interest in the work. Glasswing Ventures, an early stage venture capital firm based in Massachusetts, has invested in Inrupt, while the Greater Manchester Combined Authority is working alongside Inrupt to create an ‘Early Years’ app that digitises the paper-based assessments currently used to review a child’s development up to the age of 2.5 years.

Inrupt certainly has the cash and the PR potential to make a dent in the technology status quo, and now it seems to have the muscle with these new employees. The issue which remains is whether this project can make the transition from an academic dream through to a commercial reality.

This is where critics have focused their attention to date. Berners-Lee’s criticism of the status quo is of course very timely, with GDPR and California’s privacy laws addressing the same issues, but the question is whether an idea which could be viewed as revolutionary gains traction in the real world. Universities are full of blue-sky innovators with ideas which could change the course of history, but the truth is few are designed to accommodate the nuances of reality. Only time will tell into which column Inrupt falls.

Congress asks Amazon whether it is becoming a police snitch

The Subcommittee on Economic and Consumer Policy has written to Amazon asking the internet giant to explain partnerships between surveillance company Ring and local police departments.

Home security and surveillance products are becoming increasingly popular with consumers, though it appears the subcommittee is asking Amazon to explain the fine print. As with most products and services launched by Silicon Valley residents, Ring seems to be accompanied by legal jargon few will understand, which may well compromise privacy and data protection principles.

“The Subcommittee on Economic and Consumer Policy is writing to request documents and information about Ring’s partnerships with city governments and local police departments, along with the company’s policies governing the data it collects,” the letter states.

“The Subcommittee is examining traditional constitutional protections against surveilling Americans and the balancing of civil liberties and security interests.”

The question which the politicians seem to be asking is how compliant Ring will be in handing over information to law enforcement agencies or local government authorities, as well as the fundamentals of the partnerships themselves. Once again it appears the technology industry is revelling in the grey lands of nuance and half-statements.

Ring currently has partnerships with more than 900 law enforcement and local government agencies, so it is critically important that everything is above board. This isn’t just a quirky product adopted by a few individuals anymore; this is potentially a scaled surveillance programme. The opportunity for abuse is present once again, giving Congress good reason to wade into the situation and start splashing.

Optimists might suggest Ring is being a good corporate citizen, aiding police and security forces where possible. Cynics, on the other hand, would question whether Amazon is attempting to create a private, for-profit surveillance network.

One area on which the Subcommittee would like some clarification is how compliant Ring would be when offering data to government agencies. Ring has said it would not turn over data unless it is “required to do so to comply with a legally valid and binding order”, though the wording of the terms of service seems to undermine this firm stance.

Ring may access, use, preserve and/or disclose your Content to law enforcement authorities, government officials and/or third parties, if legally required to do so or if we have a good faith belief that such access, use, preservation or disclosure is reasonably necessary to: (a) comply with applicable law, regulation, legal process or reasonable governmental request.

The final point of this clause, ‘reasonable governmental request’, is what should be considered worrying. This is unnecessarily vague and flexible language which could be used to justify a wide range of wrongdoing.

More often than not, politicians on such subcommittees are chasing a headline, but this seems to be a case where proper investigation is warranted. Law enforcement agencies and the internet giants have shown themselves on numerous occasions not to be trustworthy under minimal oversight. And when you are talking about a topic as sensitive as data privacy, no blind trust should be afforded at all.

Benign brother has got your back: China launches coronavirus app

China’s government bodies and businesses have jointly launched a mobile app to help detect if people have been in close contact with those suspected of carrying the novel coronavirus.

The app has access to multiple official holders of private data. By registering with his or her name and Chinese ID number, a smartphone user can use the app, called “Close Contact Detector”, to check if he or she has been in proximity to those who are later either confirmed or suspected to be carrying the virus. Such close contacts include travelling in the same train carriage, or sitting within three rows of a carrier on the same flight.

One registered user can check the status of up to three users by inputting their ID numbers and names. One ID number is limited to one check per day. The app will then return an assessment of which category the individual in question falls into: Confirmed case, Suspected case, Close contact, or Normal. Xinhua, one of the major official propaganda outlets, reported that over 105 million checks had been made by users within three days of the app’s launch.

The app’s development was led by the government organisations responsible for health, joined by China Electronics Technology Group, one of the country’s largest state-owned enterprises, as well as the leading smartphone makers Huawei, Xiaomi, OPPO, and Vivo. The backend data comes from the National Health Commission, the Ministry of Transport, China State Railway Group Company, the state-owned enterprise that operates all rail transport in China, and the Civil Aviation Administration, the aviation regulator.

The fact that private travel data is made readily available to business entities without explicit consent from the individuals involved may raise plenty of eyebrows in places like Europe, but the attitude in China is different. “From a Chinese perspective this is a really useful service for people… It’s a really powerful tool that really shows the power of data being used for good,” Carolyn Bigg, a Hong Kong-based lawyer, told the BBC.

“Close Contact Detector” has been pushed out by the smartphone brands as a priority app to their users in China. It is unclear how, or if, the app will be promoted to users of other smartphone brands, iOS users, or non-smartphone users. Nor is it clear if there are plans to extend coverage to residents without a Chinese ID number, such as foreign nationals staying in China.

Telecoms.com has learned that over the last few weeks there have been other online tools to help concerned users check if they had unknowingly come into contact with confirmed victims of the new coronavirus. The key difference from the new contact detector is that, in the earlier attempts, backend data was crowdsourced from publicly available information including the flight and train numbers of the confirmed cases published in the media.

Nor is the contact detector the only use case where user data is playing a role. A recent video clip making the rounds on social media shows a drone flying a blown-up QR code that drivers can scan to register before they enter Shenzhen after the long Chinese New Year break. The method is presumably deployed to prevent cars and drivers registered in the major disease-hit regions from going through, as well as to reduce human-to-human interaction. Xinhua reported that the Shenzhen Police, which is responsible for managing local traffic and owns the automobile and driver data, is behind this measure.

Tinder comes under the scope of Irish GDPR watchdog

Dating apps have forever changed the way millennials find relationships (for however long they last…) but Tinder has found itself under the scrutiny of the Irish regulator.

The dating trailblazer has found itself alongside serial privacy offender Google as the focal point of an investigation by the lead European GDPR regulator, the Irish Data Protection Commission. The question is whether MTCH Technology Services, the parent company of Tinder, complies with GDPR in terms of processing user data.

“The identified issues pertain to MTCH Technology Services Limited’s ongoing processing of users’ personal data with regard to its processing activities in relation to the Tinder platform, the transparency surrounding the ongoing processing, and the company’s compliance with its obligations with regard to data subject right’s requests,” a statement from the regulator said.

Interestingly enough, a recent investigation from the Norwegian Consumer Council (NCC) suggested several dating apps such as Grindr, OkCupid, and Tinder might be breaking GDPR. The investigation suggested nine out of ten of the most popular dating apps were transmitting data to ‘unexpected third-parties’ without seeking consent from users, potentially violating GDPR.

As these applications collect sensitive information, sexual preferences, behavioural data, and location, there could be quite the backlash. The Irish Data Protection Commission will investigate how this information is processed, whether it is then transmitted to third parties and whether the developers are being transparent enough with their users.

Alongside the Tinder investigation, the Irish watchdog is also investigating a regular target of the privacy enforcement community, Google.

Once again, transparency is the key word here, as it so often is when one of the Silicon Valley residents is placed under the microscope. The authority will hope to understand how Google collects and processes location data, while also seeing whether it has been effectively informing users prior to obtaining consent.

Google is seemingly constantly under the scrutiny of one regulator or another due to the complex web that is its operations. No-one outside of Google genuinely understands every aspect of the business, therefore a new potential privacy scandal emerges every so often as the layers of complexity are pulled back. In this investigation, it is not entirely clear what product or service is the focal point.

What is worth bearing in mind is that any new privacy investigations are most likely to focus on events which took place following the introduction of GDPR in 2018. Anything prior to this, for example the Equifax leak or Yahoo hack, would not have been subject to the same financial penalties.

For the Tinder and Google investigations, any wrongdoing could be punished with a fine of up to €20 million or 4% of total annual revenues, whichever is greater. We haven’t seen many of these fines to date because of the timing of the incidents or investigations, but regulators might well be looking for a case to prove there is a bite behind the regulatory bark, a means to scare corporates into action and proactive security measures.

An excellent example of this enforcement concerns Facebook and the Cambridge Analytica scandal. The investigation into potential GDPR violations takes into account several different things: the incident itself, security procedures and features, transparency with the user and assistance with the investigation, to name a few. Facebook did not cover itself in glory and was not exactly helpful during the investigation; CEO Mark Zuckerberg refused to appear in front of a Parliamentary Committee in the UK when called upon.

As this incident occurred prior to the introduction of GDPR, the Information Commissioner’s Office in the UK was only permitted to fine the social media giant £500,000. Facebook’s annual revenue for 2013, when the incident occurred, was $7.87 billion. The maximum penalty which could have been applied under GDPR would have been $314 million.
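As a quick sanity check on the arithmetic above, the GDPR upper tier is simply the greater of a fixed cap and a percentage of annual turnover. A minimal sketch (the function name is illustrative, and the revenue figure is the one quoted above):

```python
# Sketch of the GDPR upper-tier fine ceiling: the greater of a fixed
# cap (EUR 20 million) or 4% of worldwide annual turnover.
def max_gdpr_fine(annual_revenue, fixed_cap=20_000_000, rate=0.04):
    """Return the maximum upper-tier fine for a given annual turnover."""
    return max(fixed_cap, rate * annual_revenue)

# Facebook's 2013 revenue of $7.87bn yields the ~$314m figure cited above
print(max_gdpr_fine(7.87e9))  # 314800000.0

# For smaller turnovers, the fixed cap dominates
print(max_gdpr_fine(100_000_000))  # 20000000
```

This also illustrates why the percentage-based tier only bites for the very largest companies: 4% exceeds the fixed cap only once turnover passes €500 million.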

Although the potential fines have been well-documented, until there is a case to point to most companies will push the boundary between right and wrong. Caution is generally only practised when the threat of punishment is followed through to make an example.

US Government and Big Tech on collision course over backdoor entry

Attorney General William Barr has suggested Apple has not offered ‘material’ assistance as authorities investigate the deadly shooting which took place at a Pensacola naval base last month.

Although Apple disputes the claim from Barr, the conflict between the firm and the Attorney General’s office sets the technology industry on a collision course with the Government. Barr seems to be calling for backdoors to be built into digital products and services, a move which has been robustly opposed by the technology industry.

“We have asked Apple for their help in unlocking the shooter’s iPhones,” Barr said during a press conference. “So far Apple has not given us any substantive assistance.

“This situation perfectly illustrates why it is critical that investigators be able to get access to digital evidence once they have obtained a court order based on probable cause. We call on Apple and other technology companies to help us find a solution so that we can better protect the lives of Americans and prevent future attacks.”

Apple rejects the statement and has claimed it has assisted in the investigation.

“We reject the characterization that Apple has not provided substantive assistance in the Pensacola investigation,” Apple said in a statement.

“Our responses to their many requests since the attack have been timely, thorough and are ongoing. We responded to each request promptly, often within hours, sharing information with FBI offices in Jacksonville, Pensacola and New York. The queries resulted in many gigabytes of information that we turned over to investigators. In every instance, we responded with all of the information that we had.”

Apple has not unlocked the devices, but there are ways and means to access some information without doing so. The firm has assisted authorities in other cases with data taken from iCloud, for example.

Over the first six months of 2019, Apple received numerous requests from the US Government for customer information and data. The table below outlines the requests.

Request type            Requests received    Percentage where data was provided
Device                  4,796                84%
Financial Identifier    918                  81%
Account Identifier      3,619                90%
Emergency               206                  90%

For devices, the Government is requesting device identifiers such as a serial number or IMEI number. Examples of financial identifiers are credit card or gift card information. The account identifier could be the customer’s Apple ID or email address. And ‘Emergency’ describes requests received from a government agency seeking customer data in an emergency.

The Apple statement also reiterated its position on privacy:

“We have always maintained there is no such thing as a backdoor just for the good guys. Backdoors can also be exploited by those who threaten our national security and the data security of our customers.”

This is an argument which has reared its head numerous times, and it does appear the pieces are falling into place for it to do so once again.

Apple has regularly criticised Governments for demanding that police and intelligence agencies be granted access to phones. In 2016, Apple defied a court order to assist the FBI by unlocking an iPhone which belonged to one of the two terrorists who killed 14 people in San Bernardino. The firm has regularly used the argument of privacy in defending its actions, seemingly not wanting to create a precedent for future cases.

And while these two cases have focused on the security measures embedded on devices, the services industry has also found itself in conflict in a very similar fashion.

Over the course of 2017, the then-Home Secretary Amber Rudd launched a sustained attack on the technology industry in an attempt to force the creation of backdoors into messaging services such as WhatsApp. The prevention of terrorism and paedophilia was used as justification to break down the defences offered by end-to-end encryption, but the industry refused demands to create backdoors to circumvent the security features.

Rudd even went as far as to state users do not care about security, but use these messaging applications for simplicity and convenience.

Barr is not taking the same simple-minded and short-sighted approach as Rudd, but this could be viewed as a challenge. What we could see over the coming months is the US Government heading into conflict with the technology industry once again over access to data on secured products and in encrypted services.

What is worth noting is that there are very valid arguments on both sides of the fence. Governments and regulators should be entitled to enlist the assistance of the technology industry in combatting crime, whereas the technology industry should also be able to draw a line through ideas which would create collateral damage.

The creation of backdoors and designed weaknesses in security features is not something which should be considered. Technology companies, whether in software or hardware, have designed security features to be robust enough that not even the manufacturer or developer can circumvent them. This ensures security but also prevents abuse.

If backdoors are inserted, this is vulnerability by design. It is effectively waving a red flag in front of the hacker community, inviting them to find the weakness. Accessing an individual’s phone or WhatsApp account would offer rich rewards for hackers, and whether by accident or design, the vulnerability will eventually be found and exploited.

This is not a viable solution for the sustained health of the digital economy, but this fact puts Big Tech and the US Government on another collision course over access. This is a battle which has been fought before and won by no-one, but it is once again on the horizon.

Privacy International leads revolt over Android ‘bloatware’

Privacy International is leading a coalition of more than 50 organisations demanding that Android owner Google offer users the opportunity to delete any and every app from their device.

On almost every device, there are several apps which are relatively redundant and useless. Unfortunately for the user, these applications are known as ‘bloatware’ and there is no way to get rid of the squatting apps. The open letter spearheaded by Privacy International is calling for Google to end the practice, allowing users complete control over which applications are kept on the device.

“We, the undersigned, agree with you [Google CEO Sundar Pichai]: privacy cannot be a luxury offered only to those people who can afford it,” the letter states.

“And yet, Android Partners – who use the Android trademark and branding – are manufacturing devices that contain pre-installed apps that cannot be deleted (often known as ‘bloatware’), which can leave users vulnerable to their data being collected, shared and exposed without their knowledge or consent.”

‘Bloatware’ applications are largely harmless on the surface. Generally, they sit there not doing much, but the issue being raised by Privacy International and its followers is what is going on in the background.

Quoting a paper written by several academics, the coalition claims these applications collect data in the background, largely without the knowledge of the user, and also have ‘privileged custom’ permissions which would not usually be granted by the Android security framework. These permissions include access to the device’s microphone and camera.

Interestingly enough, the paper also claims that while the devices carry the ‘Google Play Protect’ badge, 91% of these applications do not appear in the Google Play Store. This could be a way to get around the strict privacy protections implemented by Google, undermining the integrity of the ‘Google Play Protect’ credentials.

The letter is calling for several changes to the dynamic, most notably:

  • Users should be able to permanently delete any application
  • Pre-installed apps should face the same scrutiny as other apps
  • Pre-installed apps should have some sort of update mechanism
  • Google should refuse to certify devices unless manufacturers make changes to reinforce privacy credentials and protections

What is worth noting is that Privacy International and other such organisations are lobby groups which often paint an apocalyptic view of the digital economy. Google can never do anything right in the eyes of this community.

That said, Google is often in hot water over privacy concerns.

Numerous executives have penned blog posts and opinion articles to push the importance of privacy both as a concept and an internal company value of Google. However, the odd scandal often emerges to undermine these PR efforts.

In November, Amnesty International suggested Google was implementing strategies to abuse the privacy rights of individuals. Its virtual assistant is under investigation after it emerged humans were reviewing transcripts of conversations recorded by its smart speaker without the consent of the user. In July, International Computer Science Institute (ICSI) researchers said numerous apps could easily circumvent Android’s privacy protections. The Google smart city initiative, Sidewalk, has also come under some intense privacy criticism.

What is clear is that Google’s actions, and the relationships it has in place, are always of benefit to it as an organisation. The presence of ‘bloatware’ is by design, not an oversight, so Google will only begrudgingly back-pedal on the current dynamic. It may well be forced to under the weight of public criticism, but there will be plenty of rolls of the dice before it does.

You don’t need to understand AI to trust it, says German politician

The German government’s minister responsible for artificial intelligence has spoken about the European vision for AI, especially how to grow the technology and gain trust from non-expert users.

Prof. Dr. Ina Schieferdecker, a junior minister in Germany’s Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF), who has artificial intelligence in her portfolio, recently attended an AI Camp in Berlin (or KI-Camp in German, for “künstliche Intelligenz”). She was interviewed there by DW (Deutsche Welle, Germany’s answer to the BBC World Service) on how the German government and the European Union can help alleviate concerns about AI among ordinary users of the internet and information technologies.

When addressing the question that AI is often seen as a “black box”, and the demand for algorithms to be made transparent, Schieferdecker said she saw it differently. “I don’t believe that everyone has to understand AI. Not everyone can understand it,” she said. “Technology should be trustworthy. But we don’t all understand how planes work or how giant tankers float on water. So, we have learn (sic.) to trust digital technology, too.”

Admittedly not all Europeans share this way of looking at AI and non-expert users. Finland, the current holder of the European presidency, believes that as many people as possible should understand what AI is about, not only to alleviate the concerns but also unleash its power more broadly. So it decided to give 1% of its population AI training.

Schieferdecker also called for a communal approach to developing AI, which should involve the science, technology, education, and business sectors. She also demanded that AI developers consider users’ safety concerns and other basic principles from the beginning. This is very much in line with what has been outlined in the EU’s “Ethics guidelines for trustworthy AI” published in April this year, which states, as guideline number one: “AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.” As we subsequently reported, those guidelines are too vague and lack tangible measurements of success.

Schieferdecker was more confident. She believed that when Germany, which has presumably heavily shaped the guidelines, assumes the European presidency in the second half of 2020, it “will try to pool Europe’s strengths in an effort to transform the rules on paper into something real and useable for the people.”

The interview also touched upon how user data, for example shopping or browsing records, is being used by AI in an opaque way, and the privacy concerns this may raise. Schieferdecker believed GDPR has “made a difference” while also admitting there are “issues here and there, but it’s being further developed.” She also claimed the government is working to achieve data sovereignty in some shape and “offer people alternatives to your Amazons, Googles, Instagrams”, without disclosing further details.

The camp took place on 5 December in Berlin as part of the Science Year 2019 programme (Wissenschaftsjahr 2019) and was co-organised by the BMBF and the Society for Information Technology (Gesellschaft für Informatik, GI), an industry organisation. The interview was subjected to a vetting process by the BMBF before it could be published. As DW put it, “the text has been redacted and altered by the BMBF in addition to DW’s normal editorial guidelines. As such, the text does not entirely reflect the audio of the interview as recorded”.

Europe needs better, not more regulation – A1 Telecom CEO

It certainly isn’t unusual for telcos to have a swipe at regulators, and they never miss the opportunity to do so.

The FT-ETNO Conference in Brussels is the perfect environment for passive-aggressive duelling between telcos and regulators. ETNO is of course an industry lobby group, so you have to take statements with a pinch of salt, but occasionally there are some valid points made.

“At the moment, I see quite an imbalance,” said Thomas Arnoldner, CEO of the A1 Telecom Austria Group. “I have full empathy with the complexity of the process, I do not envy you [European Commission] at all. The process is a very complex one.”

This is the conundrum being faced by regulators in certain regions. Technology is moving at a pace which is almost impossible for authorities to track. It is very easy to say regulations should be implemented more rapidly, but the democratic process is a stumbling block.

As Roberto Viola, Director General of DG Connect at the European Commission pointed out, European states operate as democracies. In dictatorships, regulation and legislation can be passed on a whim, but in more reasonable societies, rules need to be evidence-based. Once the building blocks of the rules have been established, the next step is the democratic process, taking the rules through the relevant parliamentary mechanism.

This creates a difficult equation to balance. Creating world-leading regulation takes time (GDPR took six years, for example), but today’s society demands speed and proactivity from the early stages of development.

“We don’t need less or more regulation, but we need better regulation which adapts to the society,” said A1 Telecom’s Arnoldner.

This is a consistent gripe for the telcos. Most regulation which dictates today’s state of play for this industry was written decades ago. It is restrictive in nature and designed for an analogue era. Arnoldner believes some of the current clauses in the ePrivacy regulation, set to be introduced over the coming months, are equally restrictive. They are not designed as open rules, allowing regulation to adapt to the evolution of technology.

Not only does this create the same awkward regulatory landscape we are living in today, where rules are from a bygone era, but it might also inhibit innovation. Flexibility is key if Europe is to compete with the likes of the US, Korea and China in the global digital economy.

“We have to be there from the start to set the standards which have to be followed,” said Petra De Sutter, a Member of the European Parliament and Chair of the Internal Market and Consumer Protection (IMCO) Committee. “We cannot set standards and regulation if the technology is already there.”

This is another perfectly valid point, and it perhaps demonstrates the issue with regulation. The vast majority of regulations are designed for technologies which are already climbing the hype curve. The foundations of these breakthroughs have already been developed, and innovators are focusing on fine-tuning; by this point it is too late to have any material impact on the fundamentals of the technology.

These points present an interesting question: if this dynamic continues, will regulation ever be fit for purpose?

Amnesty calls out Google and Facebook on privacy abuses

Amnesty International has unveiled a new report heavily criticising Google and Facebook, and the alleged strategies employed to abuse privacy rights of individuals.

The report claims the likes of Google and Facebook force the general public into a Faustian bargain. Users are effectively asked to forgo certain human rights in order to access the digital society on which we are now so dependent.

“The internet is vital for people to enjoy many of their rights, yet billions of people have no meaningful choice but to access this public space on terms dictated by Facebook and Google,” said Kumi Naidoo, Secretary General of Amnesty International.

“To make it worse this isn’t the internet people signed up for when these platforms started out. Google and Facebook chipped away at our privacy over time.

“We are now trapped. Either we must submit to this pervasive surveillance machinery – where our data is easily weaponised to manipulate and influence us – or forego the benefits of the digital world. This can never be a legitimate choice.”

The extensive report outlines the business models which allegedly trap the general public into forgoing privacy rights, and calls on governments to create more comprehensive privacy frameworks to prevent the harvesting of data. The key issue is the conditions placed on accessing these prominent services; Amnesty International does not believe Google and Facebook should be able to deny access if a user does not consent to data collection.

What is worth noting is that opting out of certain services is an option. Google's mapping products offer an opt-out, for example, though this only scratches the surface. Not only are these opt-outs limited, we suspect few members of the general public realise the alternative exists at all.

While this is certainly one of the more comprehensive attacks on the global dominance of two of Silicon Valley's most prominent residents, it is of course not the first. Amnesty International is quite late to the party, as various politicians, including presidential hopeful Elizabeth Warren, and non-profits such as the Electronic Frontier Foundation have been protesting Big Tech for some time.

In fairness to the critics, there are some valid points: firstly, on the market dominance of these two technology giants, and secondly, on the way we as a society have sleepwalked into a position where the landscape has been artificially manufactured to compound that dominance.

Some of the more radical critics of Big Tech have been pushing for divestments of certain assets. This would be incredibly difficult, if not impossible, to deliver, though the idea does force regulators to think more proactively about approving acquisitions and mergers in the first place.

For example, if regulators knew then what we know now, would Google have been allowed to acquire Android and YouTube? Equally, would Facebook have been allowed to absorb Instagram and WhatsApp? These six different platforms account for such a monstrous amount of internet traffic, opinion, news and debate, it seems irresponsible for such power to be concentrated into two companies.

Whether this can be fixed is still up for debate, though we are sceptical. Those under threat of divestment are working to integrate the under-fire assets with other areas of the business in such a complex manner that plucking them apart would be an operational and financial nightmare. If these companies can make a break-up look disastrous, politicians will likely back off; no government wants to destroy one of the main drivers of its economy, after all.

The second valid point is the creation of the digital public square. The means by which we share opinion and debate ideas has fundamentally shifted in recent years. People might be afraid of confrontation in real life, but they certainly aren’t online. Some might question whether this is healthy, but it is a reality of today’s society.

However, in accessing the digital public square, Amnesty International argues too many rights are waived. Privacy is the central cog, and Big Tech has been gradually eroding the concept of privacy for years. In 2010 we would never have dreamed of sharing some of the information we do today, but like the boiling frog, we have allowed the environment to change without protest.

The issue at the heart of this ongoing debate is of course the treasure trove of data being hoarded by Big Tech. These are companies whose lifeblood is information, hence why services are offered to the consumer for free. These services have become critical to the way in which we communicate, learn and debate; avoiding the platforms is an impossible task for some.

It is always worth pointing out that while Amnesty International is highly critical of the dominance of Facebook and Google, it also enjoys the benefits. The report has been circulated on the various platforms to draw more eyeballs to the issue, while the organisation runs ads through both companies to attract attention and donations.

Like many of the other critical voices, Amnesty International is calling for greater protections for the consumer. The organisation hasn't gone as far as to call for a break-up of Big Tech, perhaps realising this is an unachievable goal, but argues further restrictions should be placed on companies which so easily influence every aspect of our lives.

Facebook and Google are here to stay, primarily because they make incredibly intelligent and forward-looking investment decisions, but how much influence they will have on the future is open to debate. Today, these companies hold scarily detailed profiles on users; whether the political rhetoric to limit these powers amounts to anything more than campaign promises remains to be seen.

Indian state says it can intercept any communications and hack any device it wants

In response to a question about WhatsApp hacking in parliament, the Indian home affairs Minister revealed the apparently limitless snooping power at his disposal.

The information comes courtesy of TechCrunch, which also helpfully linked to the source material. The Indian government was asked to comment on the following:

  • Whether the Government does tapping of WhatsApp calls and messages and if so, the details thereof;
  • The protocol being followed in getting permissions before tapping WhatsApp calls and messages;
  • Whether it is similar to that of mobile phones/telephones;
  • Whether the Government uses Pegasus software of Israel for this purpose;
  • Whether the Government does tapping of calls and messages of other platforms like Facebook Messenger, Viber, Google and similar platforms and if so, the details thereof.

While it didn’t address each point individually, the Indian home affairs Minister, Kishan Reddy, answered with the following statement:

Section 69 of the Information Technology Act, 2000 empowers the Central Government or a State Government to intercept, monitor or decrypt or cause to be intercepted or monitored or decrypted, any information generated, transmitted, received or stored in any computer resource in the interest of the sovereignty or integrity of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence relating to above or for investigation of any offence.

There followed some vague stuff about government agencies not having blanket permission to hack electronic communications and devices, and about how they would have to ask really nicely before being allowed to do what they want. But the long and short of it is that anything you say or do in India can be viewed by the government whenever it fancies.

Pegasus software refers to spyware made by NSO Group, which WhatsApp has openly accused of hacking its service. The government response didn’t address that question at all but it’s beyond question that there is a growing industry around the production of malware designed to help governments spy on their citizens.

Five years ago the India-based Software Freedom Law Centre said the Indian government was issuing over 100,000 telephone interception orders per year. It seems safe to assume that number has grown considerably since then, and when you factor in all the other agencies that have a piece of this action, you're looking at a lot of state spying.

In India, as elsewhere, claimed interference in the electoral process, be that through misinformation or more sinister means, is being used as the justification for state interference in private matters. Any time a government claims it needs to spy on its citizens in the name of safety, the correct response is to ask whose safety it has in mind.