Expats voted Estonia to the top of their digital life quality list in a new survey.
InterNations, a social network for expats, recently conducted a global survey to gauge how expats perceive the quality of digital life in their host countries. 68 countries were featured. Although most of the findings confirmed the conventional wisdom, the report also threw up a couple of surprises.
Overall, the Nordic countries ranked high, with Finland, Norway, and Denmark all in the top five of the best countries for digital life table. But topping the list was Estonia, which ranked exceptionally high on the e-government index, with 94% of all expats surveyed satisfied with the availability of the country’s administrative services. Estonia also topped the table for unrestricted access to online services. Like the other Baltic and Nordic countries, it adopts a light-touch approach towards the internet. Singapore followed Estonia on the e-government satisfaction list, while Norway came second for unrestricted access to online services.
Unsurprisingly, South Korea, which leads the world in broadband access, also tops the league for high-speed internet at home, followed by Taiwan and Finland. Expats were also asked to rate their experience of cashless payments. The four Nordic countries took the top four positions, with Estonia rounding off the top five. Finland ranked first, with 96% of expats saying they were happy with the experience.
A question that is particularly relevant to expats is how easy it is to get a local mobile number. Here we see a bit of a surprise: Myanmar, which ranked at the bottom of the overall Digital Life table, came out on top of this list, followed by New Zealand and Israel.
At the other end of the tables, China was beaten only by Myanmar to the bottom of the overall Digital Life table and sat comfortably at the bottom of “Unrestricted Access to the Internet”, thanks to the all-powerful Great Firewall. This is particularly pertinent for expats, who have a stronger need than local residents for the global social networks in order to communicate with their home countries. 83% of all expats were unsatisfied with their access to social networks from China; Saudi Arabia came second from bottom, with 46% saying they were unsatisfied.
The ranking may not be a big surprise, but the margin between the bottom two countries may be. The only table in which China was not in the bottom 10 was the one on cashless payments. Yet, perhaps surprisingly, despite all the fanfare about the contactless payment experience enabled by companies like Alibaba and Tencent, expats living in China did not manage to take the country into the top 10 of that table either.
With much of public discussion now taking place on the three main social platforms, the time has come to take editorial control away from their owners.
Even before the Cambridge Analytica scandal it had become apparent what a major role social media was already playing in public life. Politicians use Twitter to communicate directly with the electorate and spend billions on Facebook’s targeted advertising. Meanwhile a new generation of political and social commentators have been given their voice by YouTube and now attract audience numbers mainstream media can only dream of.
But all three of these platforms are commercial operations with obligations to maximise returns to their investors. In all three cases the business model is the classic media one of charging for access to their audience, which means they rely on advertisers for their revenue. This in turn can lead to conflicts of interest.
These have always existed in traditional media too. It is far more common than you might imagine for publications to receive pressure from advertisers to change editorial decisions under threat of advertising revenue being taken away if they don’t. They then face a simple choice: the short-term fix of capitulation to blackmail or the long-term investment in the trust of their audience and the credibility of their title.
The dilemma is different for social media, however, since they don’t produce the content they sell advertising against. Instead their business model has been to make it as easy as possible for anyone in the world to publish on their platforms. That model has been so successful that much of the advertising traditional media used to rely on has now moved to social media, such that digital ad spend is forecast to overtake traditional spend in the US this year.
Inevitably social media are facing the same kind of advertiser pressure traditional media always have, but their response is usually to capitulate. The reason for this is simple: they have no investment whatsoever in the content they host and no specific editorial theme or angle to protect. Because of this they seem much more ready to remove content, and even ban users, if they think it will keep the ad money flowing.
One other by-product of caving in to commercial pressure is that it sets a precedent, with advertisers emboldened to be ever more demanding with their requests. In the case of social media this has resulted in increasing pressure to ban from social media any contributors advertisers fear may harm their brands by association. This capitulation has also emboldened activists to call for bans of anyone they disagree with, sometimes even alerting advertisers to the PR danger to increase the pressure further.
This PR pressure came to a head for Facebook last week when it decided to ban several accounts it had unilaterally deemed ‘dangerous’ and to pre-brief a number of media outlets before even notifying the users themselves. While opponents of the banned people applauded the move, there has been wider concern about the arbitrary nature of the action and the power of Facebook to decide who gets to take part in the public conversation.
A common argument at times like this, often made by people otherwise deeply suspicious of the motives of big corporations, is to insist that private companies like Facebook are free to police their platforms as they see fit. But the fact that those platforms are where most public discussion takes place, and that those companies tend simply to buy competitors when they get too big, means this is a public concern from both a freedom of speech and a competition perspective.
Probably the most famous social media user is also arguably the most significant politician in the world: US President Donald Trump. He was deeply concerned by Facebook’s actions and, appropriately enough, wasted little time tweeting about it.
I am continuing to monitor the censorship of AMERICAN CITIZENS on social media platforms. This is the United States of America — and we have what’s known as FREEDOM OF SPEECH! We are monitoring and watching, closely!!
He went on to refer to one of the banned people – Paul Joseph Watson, a UK citizen – directly in subsequent tweets and retweeted a number of people objecting to the move, noting it appeared to target people on the conservative side of the political spectrum. Watson responded by calling for him to revoke the protection internet platforms have from the consequences of what is posted on them, since Facebook was now acting as a publisher.
Thank you Mr. President. Now hopefully Facebook will be stripped of its immunity under section 230 of the Communications Decency Act because it is clearly acting as a partisan publisher and not a platform. This is election meddling. — Paul Joseph Watson (@PrisonPlanet) May 3, 2019
Elsewhere one of the founders of Facebook published an op-ed calling for the break-up of Facebook on the grounds that it had grown too powerful and too much of that power is held by Mark Zuckerberg alone, who personally holds an overall majority of voting power in the company. Others have argued, however, that since Facebook isn’t a monopoly in any of its markets any attempt to break it up would be illegal, and that better regulation would be much more effective.
Again it’s common for accusations of hypocrisy to be levelled at those who call for regulation to protect freedom of speech, but in this case the position is entirely consistent. If speech is being restricted by a private oligopoly then public intervention may be the only way to combat that. As any telecoms company could tell you, regulation of oligopolies in markets with high barriers to entry is commonplace and vital to ensure consumers aren’t held to ransom.
The father of the free market, Adam Smith, famously wrote the following in his definitive book ‘An Inquiry into the Nature and Causes of the Wealth of Nations’, in reference to the necessity of regulating cartels: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.”
In this case we’re not talking about cartel behaviour, although sometimes their activities can seem suspiciously coordinated, nor is the primary concern a contrivance to raise prices. The commodity at stake is not money but the ability to take part in public discussion. This is arguably no less important a utility than water, electricity or telephony, but to date the companies that control it have faced far less scrutiny than utilities.
Such is this burden of responsibility that even Zuckerberg himself has publicly called for increased regulation. His underlying motive may be self-preservation, but the logic is sound. Nobody thinks access to public conversation should be controlled by private companies, but currently there is no regulation in place to take that decision away from them. Zuckerberg seems to have concluded that if there were, it would take a lot of the heat off him.
Decisive regulation may also pre-empt the litigation that is bound to hit social media companies as they continue to restrict their users. Watson indicated he’s tempted to take legal action, especially since the discovery phase would require Facebook to reveal the rationale behind his ban, possibly exposing the political bias he suspects is behind it. Watson also recently tweeted about a possible legal precedent that may be set in Poland, which would prohibit social media companies from acting against anything legal.
“If upheld by the courts, first in Poland and then on the European Union level, it would force the platforms to leave all lawful speech alone and stop taking down posts, profiles and pages simply because it feels like it.” https://t.co/yuBlmDW9i1
This would appear to be the best solution for everyone. Social media companies would be able to tell pushy advertisers that such decisions have been taken out of their hands, while users would have the law as their sole guide to acceptable public speech. There would still be the matter of different laws in different countries, and of deliberately censorious and ill-defined legal terms such as ‘hate speech’, but things would be a lot clearer than they are now.
Essentially this would mean that, in order to retain the protections afforded to platforms, social media companies would not be able to censor anything legal. Alternatively, if they want to take a more active editorial role, they should be treated as publishers and be liable for all content published on them. Right now they’re somewhere in between, and that’s unsustainable. Here’s independent journalist Tim Pool with a US take on the matter.
Social media giant Facebook has significantly stepped up its censorship efforts by banning seven accounts, six of which are often described as ‘far-right’.
The seven are Alex Jones, his publication Infowars, Milo Yiannopoulos, Paul Joseph Watson, Laura Loomer, Paul Nehlen and Louis Farrakhan. Jones and Infowars, of which Watson is an editor, are known more for general conspiracy theories than for any specific political stance, while Yiannopoulos is a notorious provocateur, Loomer a political activist, Nehlen a fringe US politician and Farrakhan the leader of the Nation of Islam.
“We’ve always banned individuals or organizations that promote or engage in violence and hate, regardless of ideology,” said a Facebook spokesperson in response to our query. “The process for evaluating potential violators is extensive and it is what led us to our decision to remove these accounts.” The ban also applies to Facebook-owned social networking service Instagram.
Further enquiries revealed that all were banned for violating Facebook’s policies against dangerous individuals and organizations. The things Facebook considers to be violations include:
Calling for violence against people based on factors like race, ethnicity or national origin
Following a ‘hateful’ ideology
Use of ‘hate speech’ or ‘slurs’, even on other social media sites
Having had content removed from Facebook or Instagram before
There are also two tiers of ban. Facebook told us it usually censors all other users from even praising banned people and organisations, regardless of context, which implies it does allow criticism of them or indeed neutral commentary. But there’s another tier covering people who haven’t transgressed according to the criteria above but are still considered ‘dangerous’ by Facebook according to unspecified criteria. They get banned, but everyone else is still allowed to say nice things about them if they want. It’s unclear which category each of the banned people falls into, and hence whether or not users should avoid saying nice things about them.
Facebook did indicate to us some of the signals that prompted it to take action in these cases. It looks like many of them are being punished for associating with Gavin McInnes, founder of Vice magazine and a provocateur in the Yiannopoulos mould. Jones recently interviewed him, Loomer ‘appeared with’ him and also praised another banned person, Faith Goldy, while Yiannopoulos praised both McInnes and banned activist Tommy Robinson. Farrakhan was banned for multiple public statements disparaging Jews.
While all of the people banned have doubtless broken the stated rules at some time or other, questions remain about the specificity of those rules and how consistently they’re enforced. According to Wikipedia (not necessarily the most authoritative source, but we have to start somewhere) hate speech is defined as ‘a statement intended to demean and brutalize another’.
On the surface this would seem to apply to the majority of discourse over social media, but the definition is typically narrowed to such statements that are deemed to be influenced by race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity. As the Wikipedia page illustrates, every country has its own hate speech legislation, but Facebook has decided to draft its own.
‘A hate organization is defined as: Any association of three or more people that is organized under a name, sign, or symbol and that has an ideology, statements, or physical actions that attack individuals based on characteristics, including race, religious affiliation, nationality, ethnicity, gender, sex, sexual orientation, serious disease or disability,’ explains the Facebook community standards page.
If we take these guidelines literally, therefore, you can be abusive on Facebook, as people frequently are, so long as you don’t call for violence or make any reference to the person’s identity. This is obviously a very difficult thing to enforce, leading to concerns that there may be a degree of political or other bias in doing so.
A common example cited by those who perceive political bias is the case of Antifa. The name is an abbreviation of ‘anti-fascist’, and the group was set up to counter perceived far-right activity. There are, however, numerous reports of this activity involving violence, especially against the Proud Boys, a group founded by McInnes. Antifa has even been labelled a domestic terrorist group in the US, and yet many of its Facebook pages remain unbanned.
The matter of actively campaigning politicians is another hot-button issue. Nehlen seems to be the only member of this newly-banned group to describe themselves as a politician, but in the UK at least two candidates standing in the imminent European elections have had their campaign accounts banned from Twitter due to the individuals in question having already been banned from that platform.
A common response to concerns about selective banning by social media platforms is that they’re private (although publicly-listed) companies and are thus free to ban whoever they please. The biggest problem with this argument is that they have also become the new public square, and the platform from which political campaigns are now largely based.
The Cambridge Analytica scandal hinged on concerns that Facebook had been used to manipulate elections and US President Donald Trump famously uses Twitter as his primary means of public communication. By selectively banning certain accounts social media companies not only open themselves up to accusations of political bias, they also run the risk of directly undermining the entire political process.
Google holding company Alphabet saw its share price fall by 8% after it announced disappointing Q1 numbers.
The company was fairly elusive about the reasons for the deceleration in its revenue growth on the subsequent earnings call, but many commentators picked up on comments around YouTube as significant.
“YouTube’s top priority is responsibility,” said Google CEO Sundar Pichai. “As one example, earlier this year YouTube announced changes that reduce recommendations of content that comes close to violating our guidelines or that misinforms in harmful ways.”
“…while YouTube clicks continued to grow at a substantial pace in the first quarter, the rate of YouTube click growth decelerated versus what was a strong Q1 last year, reflecting changes that we made in early 2018, which we believe are overall additive to the user and advertiser experience,” said CFO Ruth Porat.
At least one analyst on the call expressed frustration at the lack of further clarity on the reasons why Google’s revenues aren’t what they expected them to be, but the YouTube stuff seems fairly self-explanatory. Pichai made it clear that YouTube is all about reducing perceived harmful content on the platform, while Porat referred to changes made at YouTube in early 2018.
So what were those changes? To YouTubers they were one of many wholesale restrictions on the types of content that are monetised (i.e. have ads served on them), broadly referred to as ‘adpocalypse’. Every now and then a big brand found its ads served against content it disapproved of, resulting in it pulling its ads from YouTube entirely. In the resulting commercial panic YouTube moved to demonetize broad swathes of content.
While this is an understandable immediate reaction to a clear business threat, it also undermines the central concept of YouTube, which is to provide a platform for anyone to publish video. On top of that YouTube increasingly censors its platform, including closing comments, tweaking the recommendation algorithm and sometimes even banning entire channels. These restrictions must surely have reduced the amount of ad revenue coming in, but Google seems to have decided it’s worth it to keep the big brands sweet.
Here’s some further analysis from prominent YouTuber Tim Pool followed by an example of just the kind of indie creativity YouTube has built its success on (which, to be fair, doesn’t seem to have been demonetised this time). A move towards favouring big corporates over independent producers is a much bigger risk than you might imagine for YouTube, as Google’s disappointing Q1 numbers imply.
France has appointed a new minister for digital, while the UK wants to set up a new regulator for the internet. Both governments want to play more active roles in controlling the online world.
Emmanuel Macron, the French President, nominated his political advisor Cédric O to the position of Secretary of State for Digital Affairs (Secrétaire d’État chargé du Numérique), a post vacated when his predecessor quit to prepare for next year’s municipal elections. O has been instrumental in running the president’s agenda of engaging the digital heavyweights, including arranging the president’s meeting with Zuckerberg and organising for senior French civil servants to observe inside Facebook’s headquarters.
O opened the interview with journalists from AFP and L’Express by claiming that he was in “100% agreement” with Zuckerberg regarding the stronger role states should play in regulating the internet. “There is a demand from citizens: ‘please guarantee that when I’m on the internet, my rights are respected’. But those rights should not be defined by the platforms (e.g. Facebook, Google, etc.),” O said. The government will also update laws to give itself the legal foundation to play such a role, including bringing the current regulations on the audio-visual sector into the digital age, O told the interviewers.
The French government has recently revived traditional measures to play a more assertive role in the economy, and has extended the approach into the digital domain in particular. Recently it decided to go ahead with a 3% tax on the internet heavyweights, nicknamed the “GAFA tax”, without waiting for the EU to reach consensus on the common digital market.
On the other side of the Channel, the British government, which already has a department with digital in its portfolio, is mulling over the set-up of a new internet regulator, either as part of an existing government structure or as a new government body altogether.
Such a regulator would address some real problems surrounding the internet giants. On one hand, these companies have not been regulated properly either as platforms or as content publishers. On the other hand, their platforms have been used to facilitate crimes, including terrorist attacks. However, there is also the danger of the government overstepping the line to become a moral arbiter. The first “problem” of the internet identified in the “Online Harms White Paper”, jointly endorsed by the Digital Secretary and the Home Secretary, states that “illegal and unacceptable content and activity is widespread online”. While “illegal” can be properly defined, “unacceptable” is a subjective judgment, and a judgment that should not be made by the government.
Coupling such subjective assessments with the government’s demand that ISPs and ICPs be obliged to block content or face heavy fines smacks of the measures adopted by the censorship regimes of China, Russia, Iran and a few other countries. A side effect of such assertive measures could be to drive some internet users towards evading government monitoring; for example, it could be a boost for the VPN business.
As we said when Zuckerberg asked the governments to share his burden and blame, having governments control internet content, be it French or Chinese, would be a double-edged sword, and one edge would run against the internet’s spirit of liberating access to information and freedom of expression, and against what Sir Tim Berners-Lee demanded that governments should “keep all of the internet available, all of the time; and respect people’s fundamental right to privacy.”
This almost rolls back the years to what the late Christopher Hitchens once called “an all-out confrontation between the ironic and the literal mind: between every kind of commissar and inquisitor and bureaucrat and those who know that, whatever the role of social and political forces, ideas and books have to be formulated and written by individuals.” (“Siding with Rushdie”, 1989) It would be the biggest irony of the internet’s brief history if, after berating the Chinese government for its heavy-handed approach to the internet, the western governments all went down the China route, albeit 20 years later.
The Silicon Valley search giant has decided to dissolve its AI ethical council, one week after it was created, in response to opposition from its own employees. But it’s not always so responsive to their concerns.
A week after the Advanced Technology External Advisory Council (ATEAC) was created, Google told Vox that it had decided to cancel the project. Controversy had been following the project from the start, especially surrounding one council member Google enlisted. This prompted an internal petition that attracted the signatures of more than 2,300 employees, as well as the resignation of another council member. The sole purpose of ATEAC, whose members were unpaid and which had no decision-making power, seems to have been to generate good PR. In that respect it represented a spectacular own-goal, so Google has bravely run away.
“It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.” Google sent this statement to Vox.
This is not the first time that Google has “listened to employees”. In June 2018, Google famously “ditched its contract with the US military” after more than 3,000 employees protested against the company’s AI technology being used for military surveillance in the so-called Project Maven.
But Google has not always respected its employees’ views. Almost exactly a year after disclosing that Google was secretly working on a censored version of its search engine for China, Ryan Gallagher, the reporter for The Intercept, kept interested readers updated with the news that Google was closer to readiness with the so-called Project Dragonfly. Some senior executives were said to be conducting a secret “performance review” of the product, contrary to Google’s normal practice of involving large numbers of employees when assessing upcoming products.
Despite more than 1,400 employees condemning Project Dragonfly, some resigning, and Google’s CEO having to testify before Congress, Google looks rather determined to push forward with its China re-entry strategy. The Financial Times reported that the search and online advertising giant has recently suspended serving ads on two Chinese websites that evaluate VPNs, which would have helped users inside the Great Firewall to bypass the blocking. A local research firm told the FT that, considering the acrimonious nature of Google’s departure from China nine years ago, the company “may feel compelled to make additional efforts to curry favour and get back in the good graces to get approval to re-enter the market.”
So it is not clear whether Google decided not to back down because fewer employees protested against Project Dragonfly or because the resignations were lower-profile, or whether it is simply more convenient to disband a rubber-stamp council or discontinue a contract with the American military than to resist the temptation of the Chinese market and stand up to the censorial demands of the Chinese authorities.
The latest mutiny at Google illustrates what a political game technology has become and it’s only going to get more so.
Last week Google announced the creation of ‘An external advisory council to help advance the responsible development of AI.’ In so doing Google was acknowledging a universal concern about the ethics of artificial intelligence, automation, social media and technology in general. It also seemed to be conceding that the answers to these concerns need to be universal too.
The Silicon Valley tech giants are frequently accused of having a political bias in their thinking that is hostile to conservative perspectives. Normally this wouldn’t matter, but since the likes of Google, Facebook and Twitter have so much control over how everyone gets their information and opinion, any possible bias in the way they do so becomes a matter of public concern.
In an apparent attempt to demonstrate diversity of viewpoints in this new Advanced Technology External Advisory Council (ATEAC), Google included Kay Coles James, who it describes as ‘a public policy expert with extensive experience working at the local, state and federal levels of government. She’s currently President of The Heritage Foundation, focusing on free enterprise, limited government, individual freedom and national defense.’
This decision has upset over a thousand Google employees, however, who made their feelings publicly known yesterday via an article titled Googlers Against Transphobia and Hate. The piece accuses James of being ‘anti-trans, anti-LGBTQ, and anti-immigrant’ and links to three recent tweets of hers as evidence.
Beyond those tweets it’s hard to fully test the veracity of those allegations, but it does seem clear that they are largely political. The Equality Act is a piece of legislation currently being debated in the US House of Representatives, sponsored almost entirely by members of the Democratic Party. The legal status of transgender people is intrinsically political, as is immigration policy, and attitudes towards both tend to be similarly polarised.
The Googlers Against Transphobia certainly seem to fall into that category, but what makes them noteworthy, apart from their numbers, is that they expect their employer to adhere to their political positions. Google has attempted to defend the appointment of James to one of the eight ATEAC positions by stressing the importance of diversity of thought.
Here’s what the Google dissidents think of that argument. “This is a weaponization of the language of diversity,” they wrote. “By appointing James to the ATEAC, Google elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making. This is unacceptable.”
This is just the latest internal insurrection Google has faced from its passionately political workforce. Every time a story emerges about Google working on a special search engine for China there is considerable disquiet among the rank and file, ironically opposed to censorship in this case. And then there was the case of James Damore, sacked by Google for trying to start an internal conversation about gender diversity at the company.
Google’s struggles pale when compared to those of Facebook, however. Every time it seems to have just about recovered from the last crisis it finds itself in a new one. The latest was catalysed by the atrocity committed in New Zealand, in which a gunman killed 50 people praying in two mosques in Christchurch and live-streamed himself on Facebook.
Understandably questions were immediately asked about how Facebook could have allowed that streaming to happen. While it acted quickly to ensure the video and any copies of it were taken down, Facebook was under massive pressure to implement measures to ensure such a thing couldn’t happen again. Its response has been to announce ‘a ban on praise, support and representation of white nationalism and white separatism on Facebook and Instagram.’
These kinds of ideologies are largely rejected by mainstream society for many good reasons, but ideologies they remain. Facebook is also moving against so-called ‘anti-vaxxers’, i.e. people who fear the side-effects of vaccines. They may well be misguided in this fear, but it is nonetheless an opinion and, so far, a legal one.
Finding itself under pressure to police ideologies and opinions on its platforms, Facebook seems to have realised this is an impossible task. For every ‘unacceptable’ position it acts against there are thousands waiting in the wings and an obvious extrapolation reveals a future Facebook in which very few points of view are permitted. In apparent acknowledgment of that dilemma Facebook recently called on governments to make a call on censorship, but it should be careful what it wishes for.
Another type of content facing increasing calls for censorship is claimed ‘conspiracy theories’, with a recent leak revealing how Facebook agonises over such decisions. Google-owned YouTube is also acting against such content, but seems to prefer sanctions short of outright banning, such as the recent removal of videos published by activist Tommy Robinson from all search results.
Again this puts technology companies in the position of censors of content that often has a political nature. How do you define a conspiracy theory anyway, and should all of them be censored? Should, for example, the MSNBC network in the US be sanctioned for aggressively pursuing a narrative of President Trump colluding with Russia to win the 2016 election when a two-year investigation has revealed it to be false? Is that not a conspiracy theory too? Politics and technology collide once more.
The current era of political interference in internet platforms was probably started by the Cambridge Analytica scandal and subsequent allegations that the democratic process had been corrupted. As technology increasingly determines how we view and interact with the world this problem is only going to get bigger and it’s hard to see how technology companies can possibly please all of the people all of the time.
Which brings us back to the start of this piece: AI. The only hope internet platforms have of monitoring the billions of interactions they host per day is through AI-driven automation. But even that has to be programmed by people with their own personal views and ethics, and will need to be responsive to public sentiment as it in turn reacts to events.
As the US President has done so much to demonstrate, technology platforms are now the places much of politics and public discussion take place. At the same time they’re owned by commercial organizations with no legal requirement to serve the public. They have to balance pressure from both professional politicians and the politics of their own employees with the dangers of alienating their users if they’re seen to be biased. Something’s got to give.
This dilemma was illustrated well in a recent Joe Rogan podcast featuring Twitter, which you can see below. In it Twitter CEO Jack Dorsey and his head of content moderation Vijaya Gadde defend themselves from accusations of bias from independent journalist Tim Pool.
Google engineers have found evidence that the search giant is continuing its work on the controversial search engine customised for China.
It looks as though our conclusion that Google had “terminated” its China project may have been premature. After the management bowed to pressure from both inside and outside the company to stop work on the customised search engine for China, codenamed “Dragonfly”, some engineers have told The Intercept that they have seen new code being added to the products meant for this project.
Although the engineers on Dragonfly were promised reassignment to other tasks, and many of them have been moved, Google engineers noticed that around 100 are still assigned to the cost centre created for the Dragonfly project. Moreover, about 500 changes were made to the project’s code repositories in December, and over 400 changes between January and February of this year. The code was being developed for the mobile search apps that would be launched for Android and iOS users in China.
There is the possibility that these may be residuals from the suspended project. One source told The Intercept that the code changes could possibly be attributed to employees who have continued this year to wrap up aspects of the work they were doing to develop the Chinese search platform. But it is also worth noting that the Google leadership never formally sounded the death knell for Dragonfly.
The project, which first surfaced last November, angered quite a few Google employees, who voiced their concerns to the management. It was also a focal point of Sundar Pichai’s Congressional testimony in December. At that time, multiple members of Congress questioned Pichai on this point, including Sheila Jackson Lee (D-TX), Tom Marino (R-PA), David Cicilline (D-RI), Andy Biggs (R-AZ), and Keith Rothfus (R-PA), according to the transcript. Pichai’s answers were carefully worded; he repeatedly stated that “right now there are no plans for us to launch a search product in China”. When challenged by Tom Marino, the Congressman from Pennsylvania, on the company’s future plans for China, Pichai dodged the question by saying “I’m happy to consult back and be transparent should we plan something there.”
On learning that Google has not entirely killed off Dragonfly, Anna Bacciarelli of Amnesty International told The Intercept, “it’s not only failing on its human rights responsibilities but ignoring the hundreds of Google employees, more than 70 human rights organizations, and hundreds of thousands of campaign supporters around the world who have all called on the company to respect human rights and drop Dragonfly.”
While Sergey Brin, who was behind Google’s decision to pull out of China in 2010, was ready to stand up to the censorship and dictatorship he had known too well from his childhood in the former Soviet Union, Pichai has adopted a more mercantile approach towards questionable markets since he took the helm at Google in 2015. In a more recent case, Google (and Apple) refused to take down the app Absher from their app stores in Saudi Arabia, with Google claiming that the app does not violate its policies. The app allows men to control where women travel and offers alerts if and when they leave the country.
This has clearly irritated lawmakers. Fourteen House members wrote to Tim Cook and Sundar Pichai: “Twenty first century innovations should not perpetuate sixteenth century tyranny. Keeping this application in your stores allows your companies and your American employees to be accomplices in the oppression of Saudi Arabian women and migrant workers.”
In recent days another round of restrictions have been imposed across YouTube and Facebook, with social media companies increasingly being used as proxies in a culture war.
Most recently YouTube announced several new measures related to the safety of minors on YouTube. The main driver seems to be the comments people post on videos, which anyone who uses YouTube knows often range from unsavoury to downright deranged. The specific issue regards those comments on videos that feature minors, so YouTube has disabled all comments on tens of millions of such videos.
On top of that millions of existing comments have been deleted, and a bunch of channels judged to have produced content that could be harmful to minors have been banned, which indicates this is not a new issue. YouTube tends to take its most strident action when its advertising revenues are threatened and a recent exposé on this topic prompted major advertisers, including AT&T, to cancel their deals, hence this announcement.
While YouTube has always been quick to protect its ad revenues, it has historically been less keen to censor comments or ban creators outright, so this definitely marks an escalation. The same can’t be said for Facebook, which seems to be the major platform most inclined to censor at the first sign of trouble. An endless stream of scandals over the past year or two has taken its toll on the company, which is now in a constant state of fire-fighting.
Facebook’s most recent piece of censorship concerns Tommy Robinson, a controversial UK public figure who concerns himself largely with investigating the negative effects of mass immigration. He recently published a documentary criticising the BBC on YouTube, and presumably promoted it via Facebook and Facebook-owned Instagram, because the latter two platforms decided that was enough to earn him a permanent ban.
In a press release entitled ‘Removing Tommy Robinson’s Page and Profile for Violating Our Community Standards’, published the day after Robinson released his video, Facebook explained that he had repeatedly violated its Ts and Cs by indulging in ill-defined activities such as ‘organised hate’. This seems to be a neologism for some kind of rabble-rousing combined with perceived bigotry.
“This is not a decision we take lightly, but individuals and organizations that attack others on the basis of who they are have no place on Facebook or Instagram,” concludes the press release. This sets an interesting precedent for Facebook, as a significant proportion of the content generally found on social media seems to match that description. As is so often the case with any censorship decision, one is left wondering why some people are punished and others aren’t, as YouTuber Sargon of Akkad, recently kicked off micro-payments platform Patreon, explores below.
The piece detailed some statistical analysis undertaken by the author to see if there is any solid evidence of bias. Using stated preference for a candidate in the 2016 US presidential election, it concluded that Trump supporters are four times more likely to be banned than Clinton supporters. The piece also highlights some examples of apparently clear breaking of Twitter’s rules that nonetheless went unpunished, once more calling into question the consistency of these censorship decisions.
Investigative group Project Veritas, which had previously claimed to have uncovered evidence of ‘shadowbanning’ – i.e. making content from certain accounts harder to find without banning them entirely – has now moved on to Facebook. From apparently the same inside source comes the allegation that Facebook indulges in ‘deboosting’, which seems to amount to much the same thing. You can watch an analysis of this latest report from independent journalist Tim Pool below.
Nick Monroe, another independent journalist whose preferred platform is Twitter, recently reported that “A UK group called Resisting Hate is trying to target my twitter account.” Resisting Hate apparently compiles lists of people it thinks should be banned from various platforms and then coordinates its members to send complaints to the platforms about them.
It seems likely that this mechanism is a major contributing factor to any imbalance in the censoring process. All social media platforms will have algorithms that identify certain stigmatised words and phrases and automatically censor content that contains them, but as even the UK police have shown, that is a very crude tool without the ability to understand context. The platforms therefore rely heavily on their reporting mechanisms, a process that is intrinsically open to abuse by groups with a clear agenda.
And it looks like calls for censorship are starting to spread beyond just single-issue activist groups into the mainstream media – the one set of people you would previously have imagined would be most opposed to censorship. Tim Pool, once more, flags up a piece published by tech site Wired that, quite rightly, highlights the inconsistency of the censorship process, but then takes the further step of calling out some other ‘far right activists’ it thinks Facebook should ban while it’s at it.
Twitter CEO Jack Dorsey recently did the interview rounds, including with several independent podcasters. While he was generally viewed as being a bit too evasive, he did concede that a censorship process which relies heavily on third-party reports is flawed and open to abuse. The problem is there is now so much commercial, regulatory and political scrutiny on the big social media platforms that they have to be seen to act when ‘problematic’ content is flagged up.
You don’t need to spend much time on social media to realise that it’s the battleground for a culture war between those in favour of (selective) censorship and those who want speech to be as free as possible. There is unlikely to ever be a clear winner, but there is little evidence that censorship ever achieves the outcome it claims to desire: protecting people from harm.
Nobody is forced to consume any content they don’t like and censorship never changes anyone’s mind – it just drives speech and ideas underground and, if anything, entrenches the positions of those who hold them. To sign off we must give a nod to the hugely popular podcaster Joe Rogan, who recently conducted a 4-5 hour live stream with Alex Jones, a polarising figure that has been kicked off pretty much every platform. You can watch it below or not – it’s your choice.