Social media censorship is a public concern and needs a public solution

With much of public discussion now taking place on the three main social platforms the time has come to take editorial control away from their owners.

Even before the Cambridge Analytica scandal it had become apparent what a major role social media was already playing in public life. Politicians use Twitter to communicate directly with the electorate and spend billions on Facebook’s targeted advertising. Meanwhile a new generation of political and social commentators have been given their voice by YouTube and now attract audience numbers mainstream media can only dream of.

But all three of these platforms are commercial operations with obligations to maximise returns to their investors. In all three cases the business model is the classic media one of charging for access to their audience, which means they rely on advertisers for their revenue. This in turn can lead to conflicts of interest.

These have always existed in traditional media too. It is far more common than you might imagine for publications to receive pressure from advertisers to change editorial decisions under threat of advertising revenue being taken away if they don’t. They then face a simple choice: the short-term fix of capitulation to blackmail or the long-term investment in the trust of their audience and the credibility of their title.

The dilemma is different for social media, however, since they don’t produce the content they sell advertising against. Instead their business model has been to make it as easy as possible for anyone in the world to publish on their platforms, a model so successful that much of the advertising traditional media used to rely on has now moved to social media, such that digital ad spend is forecast to overtake traditional spend in the US this year.

Inevitably social media are facing the same kind of advertiser pressure traditional media always have, but their response is usually to capitulate. The reason for this is simple: they have no investment whatsoever in the content they host and no specific editorial theme or angle to protect. Because of this they seem much more ready to remove content and even ban users if they think it will keep the ad money flowing.

One other by-product of caving in to commercial pressure is that it sets a precedent, with advertisers emboldened to be ever more demanding with their requests. In the case of social media this has resulted in increasing pressure to ban any contributors advertisers fear may harm their brands by association. This capitulation has also emboldened activists to call for bans of anyone they disagree with, sometimes even alerting advertisers to the PR danger to increase the pressure further.

This PR pressure came to a head for Facebook last week when it decided to ban several accounts it had unilaterally decided were ‘dangerous’ and to pre-brief a number of media about it before even notifying the users themselves. While opponents of the banned individuals applauded the move, there has been wider concern about the arbitrary nature of the action and the power of Facebook to decide who gets to take part in the public conversation.

A common argument at times like this, often made by people otherwise deeply suspicious of the motives of big corporations, is to insist that private companies like Facebook are free to police their platforms as they see fit. But the fact that those platforms are where most public discussion takes place, and that those companies tend to just buy competitors when they get too big, means this is a public concern from both a freedom of speech and a competition perspective.

Probably the most famous social media user is also arguably the most significant politician in the world: US President Donald Trump. He was deeply concerned by Facebook’s actions and, appropriately enough, wasted little time tweeting about it.

He went on to refer to one of the banned people – Paul Joseph Watson, a UK citizen – directly in subsequent tweets and retweeted a number of people objecting to the move, noting it appeared to target people on the conservative side of the political spectrum. Watson responded by calling for him to revoke the protection internet platforms have from the consequences of what is posted on them, since Facebook was now acting as a publisher.

Elsewhere one of the founders of Facebook published an op-ed calling for the break-up of Facebook on the grounds that it had grown too powerful and too much of that power is held by Mark Zuckerberg alone, who personally holds an overall majority of voting power in the company. Others have argued, however, that since Facebook isn’t a monopoly in any of its markets any attempt to break it up would be illegal, and that better regulation would be much more effective.

Again it’s common for accusations of hypocrisy to be levelled at those who call for regulation to protect freedom of speech, but in this case the position is entirely consistent. If speech is being restricted by a private oligopoly then public intervention may be the only way to combat that. As any telecoms company could tell you, regulation of oligopolies in markets with high barriers to entry is commonplace and vital to ensure consumers aren’t held to ransom.

The father of the free market Adam Smith famously wrote the following in his definitive book ‘An Inquiry into the Nature and Causes of the Wealth of Nations’, in reference to the necessity of regulating cartels: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.”

In this case we’re not talking about cartel behaviour, although sometimes their activities can seem suspiciously coordinated, nor is the primary concern a contrivance to raise prices. The commodity at stake is not money but the ability to take part in public discussion. This is arguably no less important a utility than water, electricity or telephony, but to date the companies that control it have faced far less scrutiny than utilities.

Such is this burden of responsibility that even Zuckerberg himself has publicly called for increased regulation. His underlying motives may be self-preservation, but the logic is sound. Nobody thinks access to public conversation should be controlled by private companies, but currently there is no regulation in place to take that decision away from them. Zuckerberg seems to have concluded that if there were, it would take a lot of the heat off him.

Decisive regulation may also pre-empt the litigation that is bound to hit social media companies as they continue to restrict their users. Watson indicated he’s tempted to take legal action, especially since the discovery phase would require Facebook to reveal the rationale behind his banning, possibly exposing the political bias he suspects is behind it. Watson also recently tweeted about a possible legal precedent that may be set in Poland, which would prohibit social media companies from acting against anything legal.

This would appear to be the best solution for everyone. Social media companies would be able to tell pushy advertisers that such decisions have been taken out of their hands, while users would have the law as their sole guide to acceptable public speech. There would still be the matter of different laws in different countries and deliberately censorious and ill-defined legal terms such as ‘hate speech’, but things would be a lot clearer than they are now.

Essentially this would mean that, in order to retain the protections afforded to platforms, social media would not be able to censor anything legal. Alternatively, if they want to take a more active editorial role, they should be treated as publishers and be liable for all content published on them. Right now they’re somewhere in between, and that’s unsustainable. Here’s independent journalist Tim Pool with a US take on the matter.

 

Facebook selectively bans ‘dangerous’ users

Social media giant Facebook has significantly stepped up its censorship efforts by banning seven accounts, six of which are often described as ‘far-right’.

The banned accounts belong to Alex Jones, his publication Infowars, Milo Yiannopoulos, Paul Joseph Watson, Laura Loomer, Paul Nehlen and Louis Farrakhan. Jones and Infowars, of which Watson is an editor, are known more for general conspiracy theories than any specific political stance, while Yiannopoulos is a notorious provocateur, Loomer a political activist, Nehlen a fringe US politician and Farrakhan the leader of The Nation of Islam.

“We’ve always banned individuals or organizations that promote or engage in violence and hate, regardless of ideology,” said a Facebook spokesperson in response to our query. “The process for evaluating potential violators is extensive and it is what led us to our decision to remove these accounts.” The ban also applies to Facebook-owned social networking service Instagram.

Further enquiries revealed that all were banned for violating Facebook’s policies against dangerous individuals and organizations. The things Facebook considers to be violations include:

  • Calling for violence against people based on factors like race, ethnicity or national origin
  • Following a ‘hateful’ ideology
  • Use of ‘hate speech’ or ‘slurs’, even on other social media sites
  • Having had content removed from Facebook or Instagram before

There are also two tiers of ban. Facebook told us it usually censors all other users from even praising these banned people and organisations, regardless of context, which implies it does allow criticism of them or indeed neutral commentary. But there’s another tier covering people who haven’t transgressed according to the criteria above but are still considered ‘dangerous’ by Facebook according to unspecified criteria. They get banned, but everyone else is still allowed to say nice things about them if they want. It’s unclear which category each of the banned people falls into, hence whether or not users should avoid saying nice things about them.

Facebook did indicate to us some of the signals that prompted it to take action in these cases. It looks like many of them are being punished for associating with Gavin McInnes, founder of Vice magazine and a provocateur in the Yiannopoulos mould. Jones recently interviewed him, Loomer ‘appeared with’ him and also praised another banned person, Faith Goldy, while Yiannopoulos himself also praised McInnes as well as banned activist Tommy Robinson. Farrakhan has been banned for multiple public statements disparaging Jews.

While all of the people banned have doubtless broken the stated rules at some time or other, questions remain about the specificity of those rules and how consistently they’re enforced. According to Wikipedia (not necessarily the most authoritative source but we have to start somewhere) hate speech is defined as ‘a statement intended to demean and brutalize another’.

On the surface this would seem to apply to the majority of discourse over social media, but the definition is typically narrowed to such statements that are deemed to be influenced by race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity. As the Wikipedia page illustrates, every country has its own hate speech legislation, but Facebook has decided to draft its own.

‘A hate organization is defined as: Any association of three or more people that is organized under a name, sign, or symbol and that has an ideology, statements, or physical actions that attack individuals based on characteristics, including race, religious affiliation, nationality, ethnicity, gender, sex, sexual orientation, serious disease or disability,’ explains the Facebook community standards page.

If we take these guidelines literally, therefore, you can be abusive on Facebook, as people frequently are, so long as you don’t call for violence or make any reference to the person’s identity. This is obviously a very difficult thing to enforce, leading to concerns that there may be a degree of political or other bias in doing so.

A common example of this cited by those who perceive political bias is the case of Antifa. The name is an abbreviation of ‘anti-fascist’ and it’s a group set up to counter perceived far-right activity. There are, however, numerous reports of this activity involving violence, especially against a group founded by McInnes called the Proud Boys. Antifa has even been labelled a domestic terrorist group in the US and yet many of its Facebook pages remain unbanned.

The matter of actively campaigning politicians is another hot-button issue. Nehlen seems to be the only member of this newly-banned group to describe themselves as a politician, but in the UK at least two candidates standing in the imminent European elections have had their campaign accounts banned from Twitter due to the individuals in question having already been banned from that platform.

A common response to concerns about selective banning by social media platforms is that they’re private (although publicly-listed) companies and are thus free to ban whoever they please. The biggest problem with this argument is that they have also become the new public square, and the platform from which political campaigns are now largely based.

The Cambridge Analytica scandal hinged on concerns that Facebook had been used to manipulate elections and US President Donald Trump famously uses Twitter as his primary means of public communication. By selectively banning certain accounts social media companies not only open themselves up to accusations of political bias, they also run the risk of directly undermining the entire political process.

The politics of technology

The latest mutiny at Google illustrates what a political game technology has become and it’s only going to get more so.

Last week Google announced the creation of ‘An external advisory council to help advance the responsible development of AI.’ In so doing Google was acknowledging a universal concern about the ethics of artificial intelligence, automation, social media and technology in general. It also seemed to be conceding that the answers to these concerns need to be universal too.

The Silicon Valley tech giants are frequently accused of having a political bias in their thinking that is hostile to conservative perspectives. Normally this wouldn’t matter, but since the likes of Google, Facebook and Twitter have so much control over how everyone gets their information and opinion, any possible bias in the way they do so becomes a matter of public concern.

In an apparent attempt to demonstrate diversity of viewpoints in this new Advanced Technology External Advisory Council (ATEAC), Google included Kay Coles James, who it describes as ‘a public policy expert with extensive experience working at the local, state and federal levels of government. She’s currently President of The Heritage Foundation, focusing on free enterprise, limited government, individual freedom and national defense.’

This decision has upset over a thousand Google employees, however, who made their feelings publicly known yesterday via an article titled Googlers Against Transphobia and Hate. The piece accuses James of being ‘anti-trans, anti-LGBTQ, and anti-immigrant’ and linked to three recent tweets of hers as evidence.

Beyond those tweets it’s hard to fully test the veracity of those allegations, but it does seem clear that they are largely political. The Equality Act is a piece of US legislation currently being debated in the US House of Representatives, sponsored almost entirely by members of the Democratic Party. The legal status of transgender people is intrinsically political, as is immigration policy, and attitudes towards them tend to be similarly polarised.

The Googlers Against Transphobia certainly seem to fall into that category, but what makes them noteworthy, apart from their numbers, is that they expect their employer to adhere to their political positions. Google has attempted to defend the appointment of James to one of the eight ATEAC positions by stressing the importance of diversity of thought.

Here’s what the Google dissidents think of that argument. “This is a weaponization of the language of diversity,” they wrote. “By appointing James to the ATEAC, Google elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making. This is unacceptable.”

This is just the latest internal insurrection Google has faced from its passionately political workforce. Every time a story emerges about Google working on a special search engine for China there is considerable disquiet among the rank and file, ironically opposed to censorship in this case. And then there was the case of James Damore, sacked by Google for trying to start an internal conversation about gender diversity at the company.

Google’s struggles pale when compared to those of Facebook, however. Every time it seems to have just about recovered from the last crisis it finds itself in a new one. The latest was catalysed by the atrocity committed in New Zealand, in which a gunman killed 50 people praying in two mosques in Christchurch and live-streamed himself on Facebook.

Understandably questions were immediately asked about how Facebook could have allowed that streaming to happen. While it acted quickly to ensure the video and any copies of it were taken down, Facebook was under massive pressure to implement measures to ensure such a thing couldn’t happen again. Its response has been to announce ‘a ban on praise, support and representation of white nationalism and white separatism on Facebook and Instagram.’

These kinds of ideologies are largely rejected by mainstream society for many good reasons, but ideologies they remain. Facebook is also moving against claimed ‘anti-vaxxers’, i.e. people who fear the side-effects of vaccines. They may well be misguided in this fear but it is nonetheless an opinion and, so far, a legal one.

Finding itself under pressure to police ideologies and opinions on its platforms, Facebook seems to have realised this is an impossible task. For every ‘unacceptable’ position it acts against there are thousands waiting in the wings and an obvious extrapolation reveals a future Facebook in which very few points of view are permitted. In apparent acknowledgment of that dilemma Facebook recently called on governments to make a call on censorship, but it should be careful what it wishes for.

Another type of content facing increasing calls for censorship is claimed ‘conspiracy theories’, with a recent leak revealing how Facebook agonises over such decisions. Google-owned YouTube is also acting against such content, but seems to prefer sanctions short of outright banning, such as the recent removal of videos published by activist Tommy Robinson from all search results.

Again this puts technology companies in the position of censors of content that often has a political nature. How do you define a conspiracy theory anyway, and should all of them be censored? Should, for example, the MSNBC network in the US be sanctioned for aggressively pursuing a narrative of President Trump colluding with Russia to win the election when a two-year investigation has revealed it to be false? Is that not a conspiracy theory too? Politics and technology collide once more.

The current era of political interference in internet platforms was probably started by the Cambridge Analytica scandal and subsequent allegations that the democratic process had been corrupted. As technology increasingly determines how we view and interact with the world this problem is only going to get bigger and it’s hard to see how technology companies can possibly please all of the people all of the time.

Which brings us back to the start of this piece: AI. The only hope internet platforms have of monitoring the billions of interactions they host per day is through AI-driven automation. But even that has to be programmed by people with their own personal views and ethics, and will need to be responsive to public sentiment as it in turn reacts to events.

As the US President has done so much to demonstrate, technology platforms are now the places much of politics and public discussion take place. At the same time they’re owned by commercial organizations with no legal requirement to serve the public. They have to balance pressure from both professional politicians and the politics of their own employees with the dangers of alienating their users if they’re seen to be biased. Something’s got to give.

This dilemma was illustrated well in a recent Joe Rogan podcast featuring Twitter, which you can see below. In it Twitter CEO Jack Dorsey and his head of content moderation Vijaya Gadde defend themselves from accusations of bias from independent journalist Tim Pool.

 

Social media censorship continues to escalate

In recent days another round of restrictions has been imposed across YouTube and Facebook, with social media companies increasingly being used as proxies in a culture war.

Most recently YouTube announced several new measures related to the safety of minors on YouTube. The main driver seems to be the comments people post on videos, which anyone who uses YouTube knows often range from unsavoury to downright deranged. The specific issue regards those comments on videos that feature minors, so YouTube has disabled all comments on tens of millions of such videos.

On top of that millions of existing comments have been deleted, and a bunch of channels judged to have produced content that could be harmful to minors have been banned, which indicates this is not a new issue. YouTube tends to take its most strident action when its advertising revenues are threatened and a recent exposé on this topic prompted major advertisers, including AT&T, to cancel their deals, hence this announcement.

While YouTube has always been quick to protect its ad revenues, it has historically been less keen to censor comments or ban creators outright, so this definitely marks an escalation. The same can’t be said for Facebook, which seems to be the major platform most inclined to censor at the first sign of trouble. An endless stream of scandals over the past year or two have taken their toll on the company, which is now in a constant state of fire-fighting.

Facebook’s most recent piece of censorship concerns Tommy Robinson, a controversial UK public figure who concerns himself largely with investigating the negative effects of mass immigration. He recently published a documentary criticising the BBC on YouTube, and presumably promoted it via Facebook and Facebook-owned Instagram, because the latter two platforms decided that was enough to earn him a permanent ban.

 

In a press release entitled ‘Removing Tommy Robinson’s Page and Profile for Violating Our Community Standards’, published the day after Robinson released his video, Facebook explained that he had repeatedly violated its Ts and Cs by indulging in ill-defined activities such as ‘organised hate’. This seems to be a neologism for some kind of rabble-rousing combined with perceived bigotry.

“This is not a decision we take lightly, but individuals and organizations that attack others on the basis of who they are have no place on Facebook or Instagram,” concludes the press release. This sets an interesting precedent for Facebook as a significant proportion of the content generally found on social media seems to match that description. As is so often the case with any censorship decision, one is left wondering why some people are punished and others aren’t, as YouTuber Sargon of Akkad, recently kicked off micro-payments platform Patreon, explores below.

 

There is a growing body of research that points to many of these decisions having a political or cultural bias. Quillette, an independent site that publishes analytical essays and research, recently ran with a series entitled ‘Who controls the platform’, which culminated in a piece headlined ‘It Isn’t Your Imagination: Twitter Treats Conservatives More Harshly Than Liberals’.

The piece detailed some statistical analysis undertaken by the author to see if there is any solid evidence of bias. Using stated preference for a candidate in the 2016 US presidential election, it concluded that Trump supporters are four times more likely to be banned than Clinton supporters. The piece also highlights some examples of apparently clear breaking of Twitter’s rules that nonetheless went unpunished, once more calling into question the consistency of these censorship decisions.

Investigative group Project Veritas, which had previously claimed to have uncovered evidence of ‘shadowbanning’ – i.e. making content from certain accounts harder to find without banning them entirely – has now moved on to Facebook. From apparently the same inside source comes the allegation that Facebook indulges in ‘deboosting’, which seems to amount to much the same thing. You can watch an analysis of this latest report from independent journalist Tim Pool below.

 

Nick Monroe, another independent journalist whose preferred platform is Twitter, recently reported that “A UK group called Resisting Hate is trying to target my twitter account.” Resisting Hate apparently compiles lists of people it thinks should be banned from various platforms and then coordinates its members to send complaints to the platforms about them.

 

It seems likely that this mechanism is a major contributing factor to any imbalance in the censoring process. All social media platforms will have algorithms that identify certain stigmatised words and phrases and automatically censor content that contains them but, as even the UK police have shown, that is a very crude tool without the ability to understand context. They therefore rely heavily on their reporting mechanisms, a process that is intrinsically open to abuse by groups with a clear agenda.

And it looks like calls for censorship are starting to spread beyond single-issue activist groups into the mainstream media – the one set of people you would previously have imagined would be most opposed to censorship. Tim Pool, once more, flags up a piece published by tech site Wired that, quite rightly, highlights the inconsistency of the censorship process, but then takes the step of calling out some other ‘far right activists’ it thinks Facebook should ban while it’s at it.

Twitter CEO Jack Dorsey recently did the interview rounds, including with several independent podcasters. While he was generally viewed as being a bit too evasive, he did concede that a censorship process which relies heavily on third parties is flawed and open to abuse. The problem is there is now so much commercial, regulatory and political scrutiny on the big social media platforms that they have to be seen to act when ‘problematic’ content is flagged up.

You don’t need to spend much time on social media to realise that it’s the battleground for a culture war between those in favour of (selective) censorship and those who want speech to be as free as possible. There is unlikely to ever be a clear winner, but there is little evidence that censorship ever achieves the outcome it claims to desire: protecting people from harm.

Nobody is forced to consume any content they don’t like and censorship never changes anyone’s mind – it just drives speech and ideas underground and, if anything, entrenches the positions of those who hold them. To sign off we must give a nod to the hugely popular podcaster Joe Rogan, who recently conducted a 4-5 hour live stream with Alex Jones, a polarising figure who has been kicked off pretty much every platform. You can watch it below or not – it’s your choice.

 

Three UK’s guerrilla marketing strategy backfires

Challenger brands need to try harder to get noticed, but this approach can sometimes backfire, as Three UK found out this week.

Three is hoping to build on its #PhonesAreGood campaign, launched in October last year, that took a tongue-in-cheek look at all the negative press around smartphone addiction by imagining some historical scenarios that would have been changed for the better with the involvement of a smartphone.

One of those scenarios concerned King Henry the Eighth, who notoriously got through six wives by the time he called it a day. The joke is that if he’d had some kind of dating app at the time he might have been able to make up his mind about them prior to marriage and thus a couple of them could have been spared the chop.

In the build up to Valentine’s Day some bright spark in Three UK’s marketing department thought it might be a laugh to promote this part of the campaign with a tweet entitled ‘Shag, marry or behead’. This was presumably a nod to the ‘kiss, marry, kill’ game and maybe even the phrase used by history students to remember what happened to Henry’s wives: ‘Divorced, beheaded, died; Divorced beheaded survived’.

Now you don’t have to be the most committed social justice warrior to know that Twitter is not the place for nuance or humour and anything that can be taken offence to will be. While the tweet was clearly a joke, there was always the possibility that it could be perceived as some kind of trivialising or even endorsement of domestic violence by someone.

That someone was apparently mumsnet member Jeanhatchet, who flagged up the tweet on the site’s forum. “The 3 mobile network are laughing at domestic homicide in this tweet,” Jeanhatchet wrote. “In many of the women killed as a result of intimate partner violence blunt force trauma to or being stabbed in the head is a feature.

“The most worrying thing is …. how did this marketing meeting go? What views were expressed about killing women? How that was funny and would sell more contracts and phones? Imagine those men who sanctioned this and how common their views are that it never registered as a marketing disaster? https://twitter.com/threeuk/status/1095740892919541761?s=21” Three seems to have taken down the offending tweet, but here’s a screenshot of it courtesy of Jeanhatchet. Google is also still acknowledging the original tweet.

3 UK deleted tweet

 

3 UK deleted tweet Google

That was enough for the Manchester Evening News to get involved, which committed not one but two dogged hacks to the job of writing up this mumsnet post and the rest of the claimed ‘flurry of complaints’ around Three’s tweet. Even their combined efforts weren’t enough to ensure the faithful representation of the discussion thread they were copying-and-pasting, however, with one of Jeanhatchet’s early comments wrongly attributed to Lolkittens5, to whom she was responding.

Hot on their heels was Labour MP James Frith, who apparently spent most of yesterday working himself up into an impressive froth of righteous indignation. “This is a disgraceful ad,” he opened. “Misogynistic and violent towards women. The disgrace of it. I hope you’re fined big time for this with proceeds sent to women’s refuges. Utterly shameful @ThreeUK”

Three, of course, tried the standard ‘we are sorry for any offence caused’ defence but, as is nearly always the case, it was too little too late. Apparently emboldened by the 30 likes and four retweets his initial Twitter salvo received, Frith doubled down, including the textbook move of calling for an apology and then rejecting it when it was made. He concluded by vowing to grass Three up to the ASA before wrapping up his busy day by retweeting a story about how great his constituency is.

Unperturbed, Three announced a new marketing initiative today around London Fashion Week. It’s claiming to have launched the world’s first 5G mixed reality catwalk. It ‘uses innovative start up Rewind and its Magic Leap mixed reality technology alongside Three’s 5G network, which will see the designer’s inspirations come to life on the catwalk,’ according to the press release.

Somehow this involves the son of singer Liam Gallagher and actress Patsy Kensit, who is apparently a world-renowned model and, as you can see below, has inherited his father’s petulant resting face. Perhaps this is intended to distract from Three’s rather weak 5G claims, with vague talk of IoT and AR the only substantiation offered.

3 UK Lennon Gallagher

“Today we are turning up the volume on 5G and bringing it to life for the first time in the UK, right here in the heart of the fashion world,” said Shadi Halliwell, chief marketing officer at Three. “By giving students access to the next generation of mobile technology, they will be able to push the boundaries of learning, innovation and sustainability to create in a way that’s never been possible.”

Halliwell, who presumably signed off on the problematic tweet, will be hoping this new initiative will be free from controversy, or maybe not. It’s possible that the tweet was made in the hope of a bit of viral exposure, but that seems unlikely when you consider how quickly it was deleted. One thing, at least, Three will have gained from the experience is the knowledge that guerrilla marketing is a very high-risk strategy these days.

Europe pats US internet giants on the head for being good censors

In just the third year of the EU’s Orwellian online speech purge it looks like the major platforms are largely submitting to its will.

The EU Code of Conduct on countering illegal hatespeech online has been going since 2016 as “an effort to respond to the proliferation of racist and xenophobic hate speech online.” The EU seemed to have decided that if you stop people saying horrid things online then you’ll also stop them having horrid thoughts and doing horrid things.

To implement this theory the EU needed the cooperation of the major platforms run by Facebook, Microsoft, Twitter and Google. It will have done the usual thing of threatening vindictive regulatory action if they didn’t comply, so sensibly they have. They are now assessing 89% of content flagged as hatespeech within 24 hours and removing 72% of it.

Definitions of hatespeech seem to be pretty consistent across the EU, which is presumably no coincidence. Here’s the European Commission’s one:

Certain forms of conduct as outlined below, are punishable as criminal offences:

  • public incitement to violence or hatred directed against a group of persons or a member of such a group defined on the basis of race, colour, descent, religion or belief, or national or ethnic origin;
  • the above-mentioned offence when carried out by the public dissemination or distribution of tracts, pictures or other material;
  • publicly condoning, denying or grossly trivialising crimes of genocide, crimes against humanity and war crimes as defined in the Statute of the International Criminal Court (Articles 6, 7 and 8) and crimes defined in Article 6 of the Charter of the International Military Tribunal, when the conduct is carried out in a manner likely to incite violence or hatred against such a group or a member of such a group.

Instigating, aiding or abetting in the commission of the above offences is also punishable.

With regard to these offences listed, EU countries must ensure that they are punishable by:

  • effective, proportionate and dissuasive penalties;
  • a term of imprisonment of a maximum of at least one year.

With regard to legal persons, the penalties must be effective, proportionate and dissuasive and must consist of criminal or non-criminal fines. In addition, legal persons may be punished by:

  • exclusion from entitlement to public benefits or aid;
  • temporary or permanent disqualification from the practice or commercial activities;
  • being placed under judicial supervision;
  • a judicial winding-up order.

The initiation of investigations or prosecutions of racist and xenophobic offences must not depend on a victim’s report or accusation.

Hate crime

In all cases, racist or xenophobic motivation shall be considered to be an aggravating circumstance or, alternatively, the courts must be empowered to take such motivation into consideration when determining the penalties to be applied.

If you couldn’t be bothered to read all that, the TL;DR is that you can’t say horrid things online if race, nationality, belief, etc. come into it, or even join in if someone else does. If you do, all sorts of punishments will be inflicted on you, including a year in prison (a maximum of at least one year? That doesn’t make sense). The victim of such hatespeech doesn’t even need to have accused you of anything and the court reserves the right to determine your motivation for doing stuff.

“Today’s evaluation shows that cooperation with companies and civil society brings results,” said Andrus Ansip, Vice-President for the Digital Single Market. “Companies are now assessing 89% of flagged content within 24 hours, and promptly act to remove it when necessary. This is more than twice as much as compared to 2016. More importantly, the Code works because it respects freedom of expression. The internet is a place people go to share their views and find out information at the click of a button. Nobody should feel unsafe or threatened due to illegal hateful content remaining online.”

“Illegal hate speech online is not only a crime, it represents a threat to free speech and democratic engagement,” said Vĕra Jourová, Commissioner for Justice, Consumers and Gender Equality. “In May 2016, I initiated the Code of conduct on online hatespeech, because we urgently needed to do something about this phenomenon. Today, after two and a half years, we can say that we found the right approach and established a standard throughout Europe on how to tackle this serious issue, while fully protecting freedom of speech.”

Those statements are perfectly Orwellian, insisting as they do that censorship is free speech. The really chilling thing is that they clearly believe that imposing broad and vague restrictions on online speech is vital to protect the freedom of nice, compliant non-hateful people. The EC even had the gall to berate the platforms for not offering enough feedback to those it censors. This could easily be resolved with a blanket statement along the lines of “We’re just following orders.”

As you can see from the tweet below extracted from the full report, the types of things that qualify as hatespeech have increased since the above definition was written. This kind of mission creep is made all the more inevitable by the complicity of Silicon Valley and complete absence of dissenting media, so there’s every reason to assume the definition of hatespeech will continue to expand indefinitely.


The Silicon Valley inquisition gathers pace

A number of independent online commentators have been blacklisted by technology giants for seemingly arbitrary reasons.

The past few weeks have seen another round of purging of content creators who rely on the internet for a living. The reasons given are varied but usually default to some kind of transgression of the terms and conditions of use. However, these Ts and Cs tend to be vaguely worded and appear to be selectively enforced, leading to fears that these decisions have been driven as much by subjective ideology as by exceptional misbehaviour on the part of creators.

If there is an ideological bias it would appear to be against commentators who advocate freedom of speech and unfettered dialogue. On the other side of the fence are those concerned with concepts such as ‘hate speech’, who seek to ensure that nothing deemed ‘offensive’ is tolerated in the public domain.

Those latter terms are ill-defined and thus subject to a wide range of interpretation, which means rules that rely on them will inevitably be subjectively enforced. In spite of that there is growing evidence that Silicon Valley companies are unanimous in their assessments of who should and shouldn’t be banned from all of their public platforms.

We have previously written about the coordinated banning of InfoWars from pretty much all internet publication channels and a subsequent purge of ‘inauthentic activity’ from social media. Now we can add commentator Gavin McInnes to the list of people apparently banned from all public internet platforms and, most worryingly of all, the removal of popular YouTuber Sargon of Akkad from micro-funding platform Patreon.

The internet, social media and especially YouTube have revolutionised the way in which regular punters get access to information, commentary and discussion. Free from the constraints imposed on broadcast TV, YouTubers have heralded a new era of on-demand, unfettered, user-generated content that has rapidly superseded TV as the viewing platform of choice.

Their primary source of income has traditionally been the core internet model: monetizing traffic via serving ads. But YouTube has been removing ads from any videos that have even the slightest chance of upsetting any of its advertisers for some time, forcing creators to call for direct funding from their audience to compensate.

The best-known micro-funding service is Patreon, which is where many YouTubers send their audience if they want to pay for their content. Any decision by Patreon to ban its users can therefore have massive implications for the career and income of the recipient of the ban. Sargon is thought to have had revenues from Patreon alone in excess of £100,000 per year, a revenue stream that has been unilaterally cut off without even a warning, it seems.

Every time an internet company moves against a popular internet figure there is inevitably outcry on both sides of the matter. Prominent advocates of free speech such as Jordan Peterson and Dave Rubin have tweeted their support for Sargon, while many media are actively celebrating the punishing or outright removal from the internet of people they don’t like.

On the internet, the age-old debate concerning the optimal balance between safety and freedom is being won by those biased in favour of the former. The leaders of those companies are in a difficult position regarding censorship of their platforms but they seem to be basing their decisions on fear of the internet mob rather than rational, objective enforcement of universal rules. This isn’t a new phenomenon but it seems to be rapidly getting worse.

To finish here’s YouTuber and independent journalist Tim Pool giving his perspective while he still can.


Facebook and Twitter coordinate once more over censorship

Facebook recently removed hundreds of accounts for ‘inauthentic’ behaviour and many of those affected have also seen their Twitter accounts suspended.

In a press release entitled ‘Removing Additional Inauthentic Activity from Facebook’, Facebook explained that it doesn’t like inauthentic behaviour, by which it means accounts that seek to mislead people about their real identities and/or objectives. While there is some concern that this could be driven by the desire to influence politics, Facebook reckons it’s mostly ‘clickbait’, designed to drive and then monetise internet traffic.

“And like the politically motivated activity we’ve seen, the ‘news’ stories or opinions these accounts and pages share are often indistinguishable from legitimate political debate,” said the release. “This is why it’s so important we look at these actors’ behaviour – such as whether they’re using fake accounts or repeatedly posting spam – rather than their content when deciding which of these accounts, pages or groups to remove.”

So Facebook is not saying it’s the arbiter of ‘authentic’ speech, which is very wise as that would put it in a highly compromised position. Instead it’s taking action against people posting political content via supposedly fake accounts or who are seen to generate spam. It seems to be hoping this will allow it to remove certain accounts that focus on political content without being accused of political meddling or bias.

All this context and preamble was offered to set up the big reveal, which is that Facebook has removed 559 Pages and 251 accounts that have broken its rules against spam and coordinated inauthentic behaviour. It looks like the timing of this renewed purge is influenced by the imminent US mid-term elections, with Facebook keen to avoid a repetition of claims made during the Cambridge Analytica scandal that it facilitated political meddling by allowing too much of this sort of thing to take place during the last US general election.

Of course Facebook is free to quality control its platform as much as it likes, but if it is seen to lack neutrality and objectivity in so doing, it runs the risk of alienating those of its users that feel discriminated against. In this case the loudest dissent seems to be coming from independent media, some of which feel they have been mistakenly identified as clickbaiters.

The Washington Post spoke to ‘Reasonable People Unite’, which was shut down by Facebook, but which claims to be legitimate (let alone authentic). Meanwhile Reason.com reckons libertarian publishers were targeted and spoke to the founder of The Free Thought Project, who also found himself banned in spite of claimed legitimacy.

Matt Agorist, who writes for The Free Thought Project, tweeted the following, and his subsequent piece indicated that his employer had also been removed from Twitter. This seems to be another manifestation (Alex Jones having been the most high-profile previous case) of coordinated activity between the two sites that, together with YouTube, dominate public debate in the US. A number of other publishers removed by Facebook seem now to have been suspended by Twitter.

Other independent journalists have joined the outcry, including Caitlin Johnstone and Tim Pool in the video below. The latter makes the point that many of those purged seem to be left-leaning, which at least balances the previous impression that right-leaning commentators were being disproportionately targeted, and that many of the accounts taken down may well have been guilty as charged. But the inherent subjectivity involved in determining the relative legitimacy of small publishers is a problem that is only amplified by this latest move.

It seems unlikely that the primary objective of these social media giants is to impose their world view via the censorship of content they disagree with, but this kind of coordinated banning does feel like unilateral speech policing and that should be of concern, regardless of your political position. Twitter doesn’t even seem to have made any public statements on the matter. Meanwhile the range of views considered ‘authentic’ by these private companies seems to be narrowing by the day.


Twitter wants your help with censorship

Social network Twitter continues to agonise over how it should censor its users and thinks getting them involved in the process might help.

While all social media companies, and indeed any involved in the publication of user-generated content, are under great pressure to eradicate horridness from their platforms, Twitter probably has the greatest volume and proportion of it. Content and exchanges can get pretty heated on Facebook and YouTube, but public conversation giant Twitter is where it seems to really kick off.

This puts Twitter in a tricky position: it wants people to use it as much as possible, but would ideally like them to only say nice, inoffensive things. Even the most rose-tinted view of human nature and interaction reveals this to be impossible, so Twitter must therefore decide where on the nice/horrid continuum to draw the line and start censoring.

To date this responsibility has been handled internally, with a degree of rubber-stamping from the Trust and Safety Council – a bunch of individuals and groups that claim to be expert on the matter of online horridness and what to do about it. But this hasn’t been enough to calm suspicions that Twitter, along with the other tech giants, allows its own socio-political views to influence the selective enforcement of its own rules.

So now Twitter has decided to invite everyone to offer feedback every time it decides to implement a new layer of censorship. To date the term ‘hate’ has been a key factor in determining whether or not to censor and possibly ban a user. Twitter has attempted to define the term as any speech that attacks people according to race, gender, etc, but it has been widely accused of selectively enforcing that policy along exactly the same lines it claims to oppose, with members of some groups more likely to be punished than others.

Now Twitter wants to add the term ‘dehumanizing’ to its list of types of speech that aren’t allowed. “With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target,” explained Twitter in a blog post, adding that such language might make violence seem more acceptable.

Even leaving aside Twitter’s surrender to the Slippery Slope Fallacy, which is one of the main drivers behind the insidious spread of censorship into previously blameless areas of speech, this is arguably even more vague than ‘hate’. For example, does it include nicknames? Or, as the BBC asks, is dehumanizing language targeted at middle-aged white men just as hateful as that aimed at other identity groups?

Perhaps because it’s incapable of answering these crucial questions, Twitter wants everyone to tell it what they think of its definitions. A form on that blog post will be open for a couple of weeks and Twitter promises to bear this public feedback in mind when it next updates its rules. What isn’t clear is how transparent Twitter will be about the feedback or how much weight it will carry. What seems more likely is that this is an attempt to abdicate responsibility for its own decisions and deflect criticism of subsequent waves of censorship.


EU set to impose tough new rules on social media companies

The European Commission is reportedly planning to bring in new laws that will punish social media companies if they don’t remove terrorist content within an hour of it being flagged.

The news comes courtesy of the FT, which spoke to the EU commissioner for security, Julian King, on the matter of terrorists spreading their message over social media. “We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon,” he said, after reflecting that he didn’t think enough progress had been made in this area.

Earlier this year the EU took the somewhat self-contradictory step of imposing some voluntary guidelines on social media companies to take down material that promotes terrorism within an hour of it being flagged. In hindsight that move seems to have been made in order to lay the ground for full legislation, with Europe now being able to claim its hand has been reluctantly forced by the failure of social media companies to do the job themselves.

So long as the legal stipulation is for content to be taken down when explicitly flagged as terrorist by police authorities, it should be pretty easy to enforce – indeed it could probably be automated. But legislation such as this does pose broader questions around censorship. How is ‘terrorist’ defined? Will there be a right of appeal? Will other organisations be given the power to demand content be taken down? Will this law be extended to other types of contentious content?
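As a rough illustration of that point, a flag-driven takedown process of the kind described could be automated along these lines. This is a minimal sketch under stated assumptions: the class, method and field names are entirely hypothetical and don’t correspond to any platform’s real API.

```python
from datetime import datetime, timedelta

# Hypothetical one-hour takedown window, per the proposed EU rule.
TAKEDOWN_WINDOW = timedelta(hours=1)

class TakedownQueue:
    """Illustrative queue for content flagged by a recognised authority."""

    def __init__(self):
        self.flags = []       # (content_id, flagged_at) pairs
        self.removed = set()  # content already taken down

    def flag(self, content_id, flagged_at):
        # Only explicit flags from police authorities enter the queue;
        # no judgement call about the content itself is made here.
        self.flags.append((content_id, flagged_at))

    def process(self, now):
        # Remove every flagged item and report any that missed
        # the one-hour deadline, e.g. for compliance auditing.
        overdue = []
        for content_id, flagged_at in self.flags:
            if content_id not in self.removed:
                self.removed.add(content_id)
                if now - flagged_at > TAKEDOWN_WINDOW:
                    overdue.append(content_id)
        return overdue
```

The point of the sketch is that enforcement is mechanical precisely because the flagging authority, not the platform, makes the substantive judgement; the broader questions above arise the moment that assumption breaks down.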

At the end of the FT piece it is noted that, while the EU still allows self-regulation on more subjective areas like ‘hate speech’ and ‘fake news’, Germany is a lot more authoritarian on the matter. Given the considerable influence Germany has over the European bureaucracy it’s not unreasonable to anticipate a time when the EU follows Germany’s lead on this matter.

Meanwhile US President Donald Trump – avid user of Twitter but loather of much of the mainstream media – got involved in the social media censorship debate via his favoured medium. You can see the tweets in question below and, while he appears to be motivated by concern that his own supporters are being selectively censored, his broader point is that censorship is bad, full stop.

Lastly Twitter CEO Jack Dorsey continues to publicly agonise about the topic of censorship and specifically how, if at all, he should apply it to his own platform. In an interview with CNN he conceded Twitter as a company has a left-leaning bias, but stressed the platform is policed according to user behaviour rather than perceived ideology. He also noted that transparency is the only answer to allegations of bias.