Facebook attempts to walk the tightrope on censorship

Having criticized Twitter for poking the bear, Facebook seems to be adopting a more nuanced approach to policing its platform.

Twitter’s decision to censor President Trump was an astounding mistake. Of course nobody, no matter how powerful, should be exempt from its policies, but if you’re going to single out one of the most powerful people in the world, you had better make sure you have all your bases covered. Twitter didn’t.

Facebook boss Mark Zuckerberg recognised Twitter’s mistake immediately and announced during an interview with Fox News that Facebook shouldn’t be the arbiter of truth of everything people say online. Even his choice of news outlet was telling, as Fox seems to be the only one not despised by Trump. Zuckerberg was effectively saying ‘leave us out of this’.

Twitter boss Jack Dorsey responded directly with the following tweet thread, which at first attempted to isolate the decision to censor Trump to him alone, but then proceeded to talk in the first person plural.

Within a couple of days Zuckerberg posted further clarification of his position on, of course, Facebook, in which he noted the current violent public response to a man dying in US police custody served as a further reminder of the importance of getting these decisions right.

“Unlike Twitter, we do not have a policy of putting a warning in front of posts that may incite violence because we believe that if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician,” wrote Zuckerberg. “We have been in touch with the White House today to explain these policies as well.”

From that post we can see that Zuckerberg is still in favour of censorship, but he sets the bar higher than Twitter and doesn’t see the point in half measures. Worryingly for Zuckerberg, many Facebook employees have taken to Twitter to voice their displeasure at this policy, apparently demanding that Facebook censor the President.

It’s worth reflecting on the two forms of censorship Twitter has imposed on Trump. The first was simply to fact-check a claim he made about postal voting, which linked to a statement saying his claim was ‘unsubstantiated’ according to US media consistently hostile to Trump.

The second superimposed a warning label over the top of a Trump tweet warning of repercussions for rioting, which reads: “This Tweet violated the Twitter Rules about glorifying violence. However, Twitter has determined that it may be in the public’s interest for the Tweet to remain accessible.” Clicking on the label reveals Trump’s tweet, which features the phrase “when the looting starts, the shooting starts.”

That was apparently the bit that was interpreted as glorifying violence, and yet a subsequent Trump tweet using exactly the same phrase has not been subject to any censorious action by Twitter. That discrepancy alone illustrates the impossible position Twitter has put itself in (not to mention the fact that the labels don’t survive the embedding process) and there are presumably millions of other examples of borderline threats of violence that it has also let pass. Inconsistent censoring can easily be viewed as simple bias, seeking to tip the scales of public conversation in your favour.

For many people censorship is a simple matter of harm reduction. Why would anyone want to allow speech that could cause harm? The mistake they make is to view harm as an objective, absolute concept on which there is universal consensus. As Zuckerberg’s post shows, the perception of harm is often highly subjective and the threshold at which to censor harmful speech is entirely arbitrary.

There is clearly a lot of demand for extensive policing of internet speech nonetheless, but social media companies have to resist it if they want to be able to claim they’re impartial, because there’s just no way to keep bias out of the censorship process. If they don’t, they risk being designated as publishers and thus legally responsible for every piece of content they host. That would be calamitous for their entire business model, which makes it all the more baffling that Dorsey would so openly risk such an outcome.

Apple belatedly looks to refocus on podcasts

The podcasting industry was shaken up this week with the announcement that JRE is moving exclusively to Spotify and it looks like it has caught Apple’s attention.

Bloomberg reports that Apple is looking to increase its investment in original podcasts, as well as buying existing ones, to augment its nascent Apple TV+ service. While it’s easy to view this as a classic case of shutting the stable door after the horse has bolted, Apple seems to view podcasts as either a by-product of video content or as material that could then be adapted to video.

Apple effectively invented the podcast format, which derives its name from the pioneering iPod digital audio player, but the pre-eminence of iTunes as a podcasting platform is under serious threat thanks to this recent development. You have to assume Joe Rogan (pictured) spoke to Apple before recently committing to Spotify, so it would be fascinating to know what led him to ultimately reject it.

If hearsay from Rogan’s friend Alex Jones is to be believed, the straw that broke the camel’s back was supplied by the podcast’s other main publishing platform, YouTube. In an article that seems to have since been taken down, Summit News claimed Rogan told Jones it was YouTube’s censorship of alternative views on the coronavirus pandemic that pushed him over the edge.

According to the piece, YouTube has been actively excluding popular content from its trending lists, including some of Rogan’s biggest. On top of that, YouTube has been taking down some videos from doctors and other experts that challenge the conventional narrative on things like COVID-19 pathology and the desirability of keeping society locked down. Rogan’s move is characterised in the piece as ‘a direct strike against the culture of censorship’.

We don’t know why that piece is no longer available, but it seems unlikely that Jones would have fabricated his conversation with Rogan, even if he is often inclined towards hyperbole. Our best guess is that Rogan either didn’t intend his views to be made public or regretted it once they were, and therefore asked for the story to be taken down. The publisher, Paul Joseph Watson, has close ties to Alex Jones and both of them were banned by Facebook a year ago for being ‘dangerous’.

Back to Apple, the podcasting industry will be hoping Spotify’s move will lead to the kind of spending arms race and bidding war for talent that has characterised the video streaming industry for some time. Not only do podcasts like JRE attract massive audiences, they cost next to nothing to produce. The only catch is that the best ones are completely uncensored and thus risky for prudish publishers. Perhaps that’s ultimately what pushed Rogan away from Apple.

France imposes 1-hour deadline on some social media censorship on pain of massive fines

A new law has been passed in France that allows it to impose draconian punishments on social media companies that fail to take down some content within 60 minutes.

The news comes courtesy of Reuters, which reports: ‘online content providers will have to remove paedophile and terrorism-related content from their platforms within the hour or face a fine of up to 4% of their global revenue.’ Other content that is deemed ‘manifestly illicit’ by whoever makes these decisions will have to be taken down within 24 hours.

“People will think twice before crossing the red line if they know that there is a high likelihood that they will be held to account,” said Justice Minister Nicole Belloubet, apparently oblivious to the fact that the law targets the platforms, not their users. It’s not clear whether the responsibility for identifying content that crosses this line will also fall on the platforms, but if it does, they will need to be provided with a comprehensive censorship manual if they’re expected to comply.

The matter of social media censorship is a very tricky one and nobody is saying illegal content should be allowed to remain in the public domain, but this looks like a very clumsy approach by the French. There are many alternatives to the imposition of massive fines and this smacks of yet another cash grab by the French state on the US tech sector.

Twitter tries a better alternative to censorship

Public tug-of-war platform Twitter is opting to label, rather than censor, tweets it considers misleading about the COVID-19 situation.

Twitter’s latest tweak was announced in a blog post entitled: Updating our Approach to Misleading Information. “Starting today, we’re introducing new labels and warning messages that will provide additional context and information on some Tweets containing disputed or misleading information related to COVID-19,” it said.

The disputed part is hilarious, since dispute is what characterises Twitter. What they mean is ‘disputed by sources we favour’. Whether or not something is misleading once more depends on which sources you consider to be definitive. For example Facebook has defaulted to the World Health Organisation as the unimpeachable source on all things ‘rona.

Since all decisions on accuracy are subjective, with the exception of ‘settled science’ (itself a hotly disputed concept), those in a position to make them should do so with humility and a soft touch. Sadly they all too often opt for outright censorship in the mistaken belief that it will resolve whatever problems they think the banned speech creates.

Twitter is taking a more sensible approach in this case, by attaching labels to tweets it takes issue with, hyperlinked to either its own curated repository of ‘correct’ information or an external trusted source. Both will be subject to their own biases, of course, but at least outright censorship has been averted and people are being permitted to use their own judgment about what to believe.

Having said that, there is an escalating scale that can still lead to censorship if the tweet is considered harmful enough. Twitter is, of course, free to police its platform as it sees fit, but if it opts to censor too many marginal tweets then this sensible concession will quite rightly be viewed as window dressing and an empty gesture.

Facebook hopes new Oversight Board will resolve censorship dilemma

Facebook’s Oversight Board has announced its first 20 members and will start hearing cases related to content disputes later this year, but the fundamental problems with censorship remain.

Mark Zuckerberg announced the first 20 members of the independent Oversight Board in a Facebook post. “The Oversight Board will have the power to overturn decisions we’ve made on content as long as they comply with local laws. Its decisions will be final — regardless of whether I or anyone else at the company agrees with them,” he wrote. “Facebook won’t have the power to remove any members from the board. This makes the Oversight Board the first of its kind.”

The selection process started with Facebook selecting four “Co-Chairs” of the Board, who then worked with Facebook to select the rest. The Charter decrees that after the formation of the board “a committee of the board will select candidates to serve as board members”. Ultimately the Board will have 40 members. Board members will serve fixed terms of three years, up to a maximum of three terms. The Board’s financial independence is “guaranteed by the establishment of a $130 million trust fund that is completely independent of Facebook, which will fund our operations and cannot be revoked”, it says in a press release.

The first Co-Chairs, Catalina Botero-Marino (Dean of Law School at Universidad de Los Andes from Colombia), Jamal Greene (Law School Professor at Columbia University), Michael W. McConnell (Professor and Director of the Constitutional Law Center at Stanford Law School), and Helle Thorning-Schmidt (Former Prime Minister of Denmark) wrote an opinion piece in The New York Times laying out their tasks.

When the Board starts hearing cases later this year, “Users will be able to appeal to the oversight board if they disagree with Facebook’s initial decision about whether to take down or leave up a given piece of content, and Facebook can also refer cases to the board,” the article said. “In the initial phase users will be able to appeal to the board only in cases where Facebook has removed their content, but over the next months we will add the opportunity to review appeals from users who want Facebook to remove content.”

There is almost an “over-to-you” type of sigh of relief from Zuckerberg. “The Oversight Board will help us protect our community by ensuring that important decisions about content and enforcement are thoughtful, protect free expression, and won’t be made by us alone,” he said in his post. “I know that people will disagree about what should and shouldn’t come down. But I’m confident that the Oversight Board will make these decisions thoughtfully and fairly. I look forward to watching them begin their work.”

The Oversight Board may be able to take some of the trickiest burdens off Zuckerberg’s shoulders, but if he thinks he can wash his hands completely of the troubles with the set-up of this Upper House, he would be wrong. The Oversight Board may find itself facing as many questions it cannot answer as those it can.

The fundamental question remains, as this publication has stressed more than once: who gets to decide what the right answers should be? While there is no dispute that 5G does not spread coronavirus, when it comes to issues to which we genuinely do not yet have a definitive answer, matters can get messy. Facebook has been actively removing COVID-19 related content that does not toe the WHO line, regardless of the WHO’s own dubious communication and conspicuous cosiness with China. Would the Oversight Board have upheld such content’s right to remain standing? Moreover, when it comes to the “truth” about the novel coronavirus that causes COVID-19, if there is anything the world’s scientists can agree on, it is that we do not yet know much about it.

Another often disputed topic is hate speech. The Board expects to see “cases that examine the line between satire and hate speech”, but the definition of hate speech varies from person to person. The Student Union at Oxford University recently passed an “Academic Hate Speech Motion”, demanding materials it deemed harmful or “triggering” be removed and banned from the syllabus, which led Richard Dawkins, an Oxford alumnus, to retort that, by the hate speech definition in the student motion, “history students can’t read up on women’s suffrage, or the rise of Nazism or Apartheid, theology students can’t read Bible or Koran”.

The University immediately rejected the motion and upheld the principle that “free speech is the lifeblood of a university”. Were the Student Union to ask Facebook to remove certain content that falls into its definition of hate speech but which, by the University’s definition, “enables the pursuit of knowledge”, would the Oversight Board side with the students or with the school?

In his essay “On Liberty” (1859), John Stuart Mill gave four reasons why opinions one does not agree with should not be suppressed. These should still be our guiding principles:

“First, if any opinion is compelled to silence, that opinion may, for aught we can certainly know, be true. To deny this is to assume our own infallibility. Secondly, though the silenced opinion be an error, it may, and very commonly does, contain a portion of truth; and since the general or prevailing opinion on any subject is rarely or never the whole truth, it is only by the collision of adverse opinions that the remainder of the truth has any chance of being supplied.

Thirdly, even if the received opinion be not only true, but the whole truth; unless it is suffered to be, and actually is, vigorously and earnestly contested, it will, by most of those who receive it, be held in the manner of a prejudice, with little comprehension or feeling of its rational grounds. And not only this, but, fourthly, the meaning of the doctrine itself will be in danger of being lost, or enfeebled, and deprived of its vital effect on the character and conduct: the dogma becoming a mere formal profession, inefficacious for good, but cumbering the ground, and preventing the growth of any real and heartfelt conviction, from reason or personal experience.” (Chapter II, “Of the liberty of thought and discussion”)

The Facebook Oversight Board does seem like an honest attempt to establish a balanced, independent body for making censorship decisions. But even the most qualified, objective censors are still censors and, by definition, have to make subjective distinctions between ‘good’ and ‘bad’ speech. Merely shrugging and pointing to the board will not absolve Facebook of responsibility for these decisions and won’t resolve the underlying paradox of platforms increasingly behaving as publishers.

Facebook poaches Ofcom gamekeeper

Regulation is coming and Facebook knows it, so it has reportedly persuaded Ofcom’s Director of Content Standards, Licensing and Enforcement to join the team.

The news comes courtesy of the Times, which reports that Tony Close resigned last week and was placed on gardening leave as soon as it became clear where he was headed. Close had been at Ofcom since 2003 and was most recently one of the people heading up Ofcom’s regulatory strategy with regard to social media, a role that became a lot more interesting when it was given new censorship powers earlier this year.

Neither Ofcom nor Facebook seem to have confirmed the move and we hadn’t received a response to our enquiry to Ofcom at time of writing. However there’s no sign of Close on Ofcom’s content board page, which seems to confirm he’s legged it. Facebook seems to have a taste for UK establishment figures, having nabbed former Deputy PM Nick Clegg to head up its government relations in 2018.

Close continues the rich tradition of public servants taking lucrative positions late in their career in the private sector to help navigate their former beat. He will be able to fill Facebook in on the latest thinking when it comes to regulating social media companies, something Facebook insists it welcomes, but presumably also wants to ensure doesn’t get in the way of business.

The regulation of big social media will be a defining issue of the next few years. They are supposed to be neutral platforms that allow public discussion without any editorial involvement of their own. Increasingly, however, pressure from advertisers, politicians and regulators has compelled them to take an active role in censoring their platforms to ensure the ‘wrong’ kinds of content don’t appear on them.

That kind of activity is associated with publishers, not platforms, but the likes of Facebook, Twitter and YouTube still don’t produce their own content. That, along with the practical impossibility of editing every single piece of content before it’s published, means social media companies can’t be classified as publishers for the purposes of regulation.

So it seems clear that a new category needs to be created for services that facilitate publication but don’t produce their own content. Regulators would then need to create a unique set of rules and obligations for that category to abide by, such as parameters of acceptable speech, as well as a proper structure to protect the interests of those who publish on those platforms.

It’s very hard to see where the best place to draw those lines is. This publication would prefer minimal censorship combined with robust public challenges to contentious content, but we’re apparently in a minority. Mainstream sentiment seems to err towards a more censorious approach to ‘preventing harm’ and it will be the job of regulators like Ofcom to define that. Facebook has quite sensibly used some of its abundant funds to get a greater insight into what form that definition may take.

Twitter tries to stop people calling for damage to 5G infrastructure

In the latest episode of social media censorship whack-a-mole, Twitter is going to remove any tweets that might incite people to do silly things.

Perhaps drawing on the ‘shouting fire in a crowded theatre’ justification for censorship, Twitter is worried its platform could be used to spread panic among the populace through the dissemination of incendiary material. On the other hand, it could be responding to pressure from the US state, which seems convinced the Chinese are using social media to stir things up.

Either way, Twitter has added another clause to its already Byzantine list of things people aren’t allowed to say. In the section headed ‘protecting the public conversation’ (from what? On behalf of who?), the recent amendment is headed ‘Broadening our guidance on unverified claims’. Here it is summarised, of course, in a tweet.

Examples of the kinds of things that Twitter has deemed no longer cool are: “The National Guard just announced that no more shipments of food will be arriving for two months — run to the grocery store ASAP and buy everything” or “5G causes coronavirus — go destroy the cell towers in your neighborhood!” What is less clear is whether those same sentiments, but without the specific calls to action, are still allowed. That will probably be covered in the next round.

While they’re thinking about that, the Twitter censors should also decide whether all calls to action are bad. While urging people to panic-buy is hardly the most harmful thing you could urge them to do, some calls to action may be actively benign, or at least ambiguous. Take the tweet below, which concerns the word ‘liberate’.

Where it gets really interesting is when you consider the tweet wasn’t so much about semantics as whether these new rules apply to everyone. You see, US President Trump recently sent the tweet below, as well as a couple of others concerning Michigan and Minnesota, which have been interpreted as a call for the citizens of those states to resist some of the restrictions that have been imposed on them as a result of the coronavirus pandemic.

For those of you unfamiliar with the US Constitution, the 2nd Amendment reads as follows: “A well-regulated militia, being necessary to the security of a free state, the right of the people to keep and bear arms, shall not be infringed.” What, exactly, Trump was calling for with these tweets remains unclear, but indirect references to militia and arms seem like the sort of thing these new Twitter rules would censor if they had been tweeted by a regular punter.

The longer the coronavirus lockdowns continue, the more people will grow restive and express their frustration over social media. Stopping people publishing clear and direct incitements to criminal activity, such as destroying telecoms infrastructure, is one thing. But Twitter is going to struggle to censor every unverified claim or statement that could lead to public unrest.

US cell site vandalized as Chinese agents accused of sowing panic

US intelligence agencies claim Chinese agents are working to amplify messages that cause panic, while the first report of a cell tower being vandalized has emerged.

As arguably the most significant global crisis since World War 2 progresses, the public appetite for crazy theories and salacious gossip has increased, alongside a perfectly reasonable thirst for the latest news. Additionally this is certainly the biggest crisis of the internet and social media era, so every type of information is being spread with unprecedented speed and scope.

Anxiety tends to drive suggestibility as awareness of danger is heightened alongside an increased need for reassurance and, according to a New York Times report, US intelligence agencies are worried that those with an interest in sowing panic among the US population are actively seeking to spread pieces of information that may do so.

Six unnamed US officials told the NYT they reckon Chinese operatives are starting to act in this manner, which is reminiscent of the kind of mischief that has been associated with Russia for some time. Specifically they’re suspected of using bogus phone and social media accounts to get the ball rolling on already existing messages to make sure they gain traction.

A major cause of domestic friction in the US is the lockdown imposed on most of the country in a bid to slow the spread of COVID-19. Civil liberties are a foundation of US culture in a more profound way than much of Europe, let alone the rest of the world, so its citizens tend to be more instinctively opposed to being told what to do by the state. As a result there are already protests against the lockdown in some parts of the country, which are presumably being fuelled by rhetoric from the President, urging some states to open up.

The US President doesn’t seem to have commented on these allegations, perhaps conscious of his own prolific use of social media to further his interests, and the Chinese Foreign Ministry dismissed them out of hand. However, Trump has been keen to point the finger of blame for the crisis at China and tensions between the two countries have undoubtedly been heightened by it.

What’s not clear is whether deliberate misinformation played a role in the vandalization of a Verizon cell site in New Jersey, as spotted by Light Reading. The vandals haven’t been caught, so we can’t be sure what specific grievances they have against the cell site, but it’s hard to believe the lunacy regarding COVID-19 and 5G that has gripped much of Europe didn’t play a part in their thinking.

The longer these unprecedented lockdowns continue, the greater the probability there is of significant civil disobedience. While hostile foreign powers may be seeking to stoke panic, attempts to censor such messaging are not only futile, they may well serve to augment existing paranoia. The anonymous US officials who briefed the NYT may have been hoping to unite the country in the face of a common enemy, but they are paradoxically also adding to the flow of unsubstantiated gossip they claim to be fighting against.

Facebook doubles down on COVID-19 censorship

Under mounting pressure to counter misinformation around the COVID-19 pandemic, Facebook is increasingly dictating what its users should see and think.

Facebook already downgrades any posts it doesn’t like the look of regarding the virus, but it’s apparently concerned that some of its users might still interact with the wrong content. It’s not their fault, you see, they’re just hapless plebs with no critical faculties of their own. Thankfully Facebook is on the case.

The social media giant’s VP of Integrity (an Orwellian job title if there ever was one), Guy Rosen, recently provided An Update on Our Work to Keep People Informed and Limit Misinformation About COVID-19. “We’re going to start showing messages in News Feed to people who have liked, reacted or commented on harmful misinformation about COVID-19 that we have since removed,” said Rosen.

“These messages will connect people to COVID-19 myths debunked by the WHO including ones we’ve removed from our platform for leading to imminent physical harm. We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources in case they see or hear these claims again off of Facebook.”

As ever with censorship initiatives, it all comes down to who decides what is the truth, what is harmful, and so on. Facebook seems determined to position the World Health Organisation as the ultimate authority on such matters, despite mounting accusations of its bias in favour of China, which contributed to the recent decision by US President Trump to halt its funding.

One of the single biggest reasons Facebook has decided to double down on censorship may have been a report by online activist organization Avaaz, criticising Facebook for not censoring more. Sadly many journalists are also jumping on the censorship bandwagon, apparently unaware of how utterly self-defeating such a desire is.

The above tweet does, presumably inadvertently, serve one very useful purpose. If Facebook is being so proactive about censorship of COVID-19 talk, why isn’t it applying that level of rigour to other topics organizations like Avaaz disapprove of? Surely the censorship should never stop until bad speech is entirely eradicated from the internet and, ideally, people’s minds.

As we have said many times, the genie is out of the bottle when it comes to free speech on the internet. People will always find somewhere to say what they want and attempts to stop them often do more harm (which we’re supposed to be against, right?) than good. Conversely, if Facebook is serious about telling its users what to think, even its latest increase in censorship doesn’t go nearly far enough.