The Silicon Valley inquisition gathers pace

A number of independent online commentators have been blacklisted by technology giants for seemingly arbitrary reasons.

The past few weeks have seen another round of purging of content creators who rely on the internet for a living. The reasons for doing so are varied but usually default to some kind of transgression of their terms and conditions of use. However, these Ts and Cs tend to be vaguely worded and appear to be selectively enforced, leading to fears that these decisions have been driven as much by subjective ideology as by exceptional misbehaviour on the part of creators.

If there is an ideological bias it would appear to be against those commentators who advocate freedom of speech and unfettered dialogue. On the other side of the fence are those concerned with concepts such as ‘hate speech’, who seek to ensure nothing deemed ‘offensive’ is tolerated in the public domain.

Those latter terms are ill-defined and thus subject to a wide range of interpretation, which means rules that rely on them will, by definition, be subjectively enforced. In spite of that there is growing evidence that Silicon Valley companies are unanimous in their assessments of who should and shouldn’t be banned from all of their public platforms.

We have previously written about the coordinated banning of InfoWars from pretty much all internet publication channels and a subsequent purge of ‘inauthentic activity’ from social media. Now we can add commentator Gavin McInnes to the list of people apparently banned from all public internet platforms and, most worryingly of all, the removal of popular YouTuber Sargon of Akkad from micro-funding platform Patreon.

The internet, social media and especially YouTube have revolutionised the way in which regular punters get access to information, commentary and discussion. Free from the constraints imposed on broadcast TV, YouTubers have heralded a new era of on-demand, unfettered, user-generated content that has rapidly superseded TV as the viewing platform of choice.

Their primary source of income has traditionally been the core internet model: monetizing traffic via serving ads. But YouTube has been removing ads from any videos that have even the slightest chance of upsetting any of its advertisers for some time, forcing creators to call for direct funding from their audience to compensate.

The best-known micro-funding service is Patreon, which is where many YouTubers send their audience if they want to pay for their content. Any decision by Patreon to ban its users can therefore have massive implications for the career and income of the recipient of the ban. Sargon is thought to have had revenues from Patreon alone in excess of £100,000 per year, a revenue stream that has been unilaterally cut off without even a warning, it seems.

Every time an internet company moves against a popular internet figure there is inevitably outcry on both sides of the matter. Prominent advocates of free speech such as Jordan Peterson and Dave Rubin have tweeted their support for Sargon, while many media outlets are actively celebrating the punishment or outright removal from the internet of people they don’t like.

The age-old debate concerning the optimal balance between safety and freedom is being won by those biased in favour of the former on the internet. The leaders of those companies are in a difficult position regarding censorship of their platforms but they seem to be basing their decisions on fear of the internet mob rather than rational, objective enforcement of universal rules. This isn’t a new phenomenon but it seems to be rapidly getting worse.

To finish, here’s YouTuber and independent journalist Tim Pool giving his perspective while he still can.


The Children Act: US lawmakers demand to know how YouTube collects data on children

US Congressmen have demanded that Google’s CEO answer questions on how YouTube tracks the data of minors.

Anyone who has been a parent to toddlers or pre-schoolers in the last dozen years must have felt that, like it or not, YouTube is a wonderful thing. It not only provides occasional surrogate parenting but also delivers much genuine pleasure to the kids, from entertainment to education, with sheer silly laughter in between.

Meanwhile we have also recognised that YouTube can be a pain as much as a pleasure. The pre-roll and interstitial ads on such content are all clearly pushed at kids, in particular for games and toys; recommendations are based on what has been played, thereby encouraging binge watching; not to mention the disturbing Peppa Pig or Mickey Mouse spoof parodies that keep creeping through – a clear sign that, while you are watching YouTube, “YouTube is watching you”.

But neither the pleasure nor the pain should have been there in the first place because, though not many of us have paid attention, “YouTube is not for children”, as the video service officially puts it. In its terms of service YouTube does require users to be 13 or above. But, unlike Facebook, which locks out anyone without an account, YouTube can be watched without one; an account is only needed to upload a clip or make a comment. Even then, children can get past the age limit by entering a fake date of birth, or simply by using someone else’s account. And YouTube has known that all along; it even teaches users how to make “family-friendly videos”. Whether it admits it or not, YouTube is for children.

Following complaints from 23 child and privacy advocacy groups to the Federal Trade Commission (FTC), two congressmen, David Cicilline (D) of Rhode Island and Jeff Fortenberry (R) of Nebraska, sent a letter to Google’s CEO Sundar Pichai on September 17, demanding information on YouTube’s practices related to the collection and usage of data on underage users. The lawmakers invoked the Children’s Online Privacy Protection Act 1998 (COPPA), which forbids the collection, use or disclosure of children’s online data without explicit parental consent, and contrasted it with Google’s terms of service, which give Google (and its subsidiaries) permission to collect user data including geolocation, device ID and phone number. The congressmen asked Google to address eight questions by October 17, which essentially relate to:

  • What quantity and type of data YouTube has collected on children;
  • How YouTube determines if the user is a child, what safeguard measures are in place to prevent children from using the service;
  • How children’s content is tagged, and how this is used for targeted advertising;
  • How YouTube is positioning YouTube Kids, and why content for children is still retained on the main YouTube site after being ported to the Kids version.

Google would not be the first to fall foul of COPPA. In a recent high-profile case the FTC, which has the mandate to enforce the law, fined the mobile advertising network InMobi close to $1 million for tracking users’ – including children’s – location information without consent.

This certainly is a headache that Google can do without. It has just been humiliated by the revelation that users’ location data was still being tracked after the feature had been turned off, not to mention the never-ending lawsuits in Europe and the US over its alleged anti-trust practices. It also, once again, highlights the privacy minefield the internet giants find themselves in. Facebook is still being haunted by the Cambridge Analytica scandal, while Amazon’s staff were selling consumer data outright.

Nine years before COPPA came into force, an all-encompassing Children Act was passed in the UK in 1989. In one of its opening lines the Act states “the child’s welfare shall be the court’s paramount consideration.” This line was later quoted by the author Ian McEwan in his novel, titled simply “The Children Act” (which was recently made into a film of the same title). In that spirit we laud the congressmen for taking action against YouTube’s profiteering behaviours. To borrow from McEwan, sometimes children should be protected from their pleasure and from themselves.

Ofcom ponders how to rid the internet of horridness

Nobody’s in favour of censorship but freedom comes at a price, right? This is essentially the premise for a new Ofcom discussion paper on online content.

Entitled ‘Addressing harmful online content’, the document has been published alongside some specially commissioned research into people’s concerns about online content. Its stated aim is to initiate a public conversation about this stuff but it gives the impression of trying to set the course of that debate in the direction of greater regulation and censorship.

The tension between security and freedom is as old as civilization and there are some standard techniques for persuading people to surrender their autonomy. A word currently in vogue is ‘harm’, which has the benefits of being emotive, universally disapproved of and yet broad enough to be subject to constant redefinition. If you can get people to agree that harm must be opposed then you can establish consensus on anything else so long as you position it in opposition to harm.

If harm alone isn’t emotive enough then it’s a simple matter of asserting that the freedom you want to take away is harmful to children, and this is where many arguments in favour of regulating the internet start. Also likely to make an early appearance are ‘but’ phrases such as ‘the internet offers many benefits but…’ or ‘freedom of speech is important but it comes with responsibilities.’

The Ofcom discussion document is no different, but (you see, it’s easily done) it does appear to aspire to some degree of balance and thoroughness. After the standard preamble about how children need to be protected from horridness online (which is hard to disagree with, the question is how), it notes that regular broadcast TV is a lot more regulated than online platforms like YouTube.

Ofcom broadcast regulation table

The stuff about the BBC is irrelevant as it’s tax-funded and thus subject to a unique level of state interference. Ofcom gets to keep an eye on other broadcasters’ catch-up services but has no role in any other online content. It reckons the rest of the online press is regulated by IPSO, which is not even approved as a regulator, and IMPRESS, which is not supported by the press. We can assure you this publication has had no contact with either, but perhaps we will after this.

Ofcom also laments the different amounts of regulation a piece of video is subject to depending on how it’s consumed. Live TV gets a lot of oversight, catch-up less so and YouTube none right now. The growing impression is that a lot of this is targeted very specifically at YouTube, which also happens to be where children increasingly go for video.

Ofcom broadcast regulation table 2

The document then moves on to a cherry-picked selection of anecdotes describing positive outcomes of the kind of regulation it seems Ofcom would like to see more of. In the absence of equivalent negative anecdotes, how can Ofcom expect this to be considered evidence in any honest and rigorous sense of the word?

More balance is shown when the document moves on to the challenges of trying to regulate a platform such as YouTube. They include the scale of the platform, the variety of content on it, the fact that much of it is user-generated and the fact that it’s global. There is also a genuine attempt to explore the freedom-versus-security dichotomy we flagged up at the start.

“Another relevant principle is the safeguarding of freedom of expression,” says that passage. “This means that people are able to share and receive ideas and information without unnecessary interference, such as excessive regulations or restrictions. When the need to protect audiences from harm comes into tension with the need to preserve free expression, the weight that a regulator places on the two aims reflects the priorities set by Parliament, as well as audiences’ evolving attitudes.”

“Depending on the weight attributed in an online context, there is a risk that regulation might inadvertently incentivise the excessive or unnecessary removal of content that limits freedom of speech and audience choice. Such concerns have been raised in the context of the new German law.”

“Our experience in regulating broadcasting shows that while balancing audience protection and freedom of expression is not straightforward, it can be done in a way that is transparent, principles based and fair. Applying this to an online world might translate into greater attention to the processes that platforms employ to identify, assess and address harmful content – as well as to how they handle subsequent appeals.”

That passage makes a lot of very good points, but the concluding one would seem to fundamentally undermine calls for greater regulation. These platforms (as well as the press) already have loads of processes in place to tackle exactly the harmful stuff Ofcom seems to be worried about. Many would argue YouTube has already gone too far in this respect (albeit mainly to placate advertisers), but in any case doesn’t that render calls for regulation redundant?

Another part of this discussion document that seems somewhat self-defeating is the research. You can see the headline data points below, which seem to indicate the majority of the UK is pretty worried about a bunch of stuff online. This in turn would appear to be a clear call to action for Ofcom and the government to intervene in order to reassure and protect the anxious electorate.

Ofcom survey findings

But when you navigate through the supporting documents you can see a very big gulf between spontaneous and ‘prompted’ responses. It is to Ofcom’s credit that it has been so transparent about the findings, but it must also know that the vast majority of media will simply report the headline figures without nuance or caveat. That sounds dangerously close to the kind of ‘fake news’ that everyone claims to be so worried about online.

Ofcom survey findings prompted

Ofcom survey findings prompted 2

It’s right that Ofcom should take the lead in initiating a (somewhat belated) public discussion on how the wild west of the internet should be approached. But it’s hard to escape the impression that its desired outcome will be new powers and laws that restrict online activity in the name of shielding our delicate, innocent eyes and ears from ‘harm’. If the internet is all about greater choice then shouldn’t we be trusted to decide for ourselves what’s in our own best interests?

YouTube mobile app now tells you how much of your life you’re spending on it

Google seems to be concerned about pathological use of its YouTube video platform, so it has made some new tools to help manage addiction.

The wonderfully euphemistic premise for this move is the neologism ‘digital wellbeing’. Since we don’t actually exist digitally this can only refer to the health (largely mental but in extreme cases possibly physical too) implications of spending too much time online. “Our goal is to provide a better understanding of time spent on YouTube, so you can make informed decisions about how you want YouTube to best fit into your life,” says the blog post announcing these new tools.

Firstly, you can now easily see how much time you’ve spent watching YouTube via your account page. ‘Time watched’ is now prominently displayed when you navigate to your account page via the mobile app (although not via the desktop route) and prodding it reveals stats on how much time you’ve spent watching YouTube vids, including your daily average for the past week. We imagine this could present a pretty brutal wake-up call for some people.
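The arithmetic behind that stats panel is simple enough to sketch. Here’s a minimal illustration of the kind of calculation involved (the function name and figures are our own invention, not YouTube’s):

```python
# Hypothetical sketch of a 'Time watched' summary: given per-day watch
# times in minutes for the past week, report the weekly total and the
# daily average, as the mobile app is described as doing.

def watch_time_summary(daily_minutes):
    """Return (weekly total, daily average) for a list of per-day watch times."""
    total = sum(daily_minutes)
    return total, total / len(daily_minutes)

# Illustrative figures only -- not real YouTube data.
past_week = [95, 42, 130, 67, 88, 151, 73]
total, average = watch_time_summary(past_week)
print(f"Watched {total} minutes this week, about {average:.0f} per day")
```

With those made-up figures the daily average comes out at over an hour and a half, which gives a flavour of just how brutal that wake-up call could be.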

On that note, within the same set of tools is the ability to aggregate all your YouTube notifications (reminders that someone you follow has uploaded new material) into a daily digest, rather than be constantly bombarded by enticements to watch another vid. There’s even the capacity to set yourself a cap on the amount of time you spend on YouTube, which will allow the app to act as your conscience and urge you to expand your horizons once that threshold is reached. You can even disable notifications to allow you to get some rest before the next binge.

“We’re dedicated to making sure that you have the information you need to better understand how you use YouTube and develop your own sense of digital wellbeing,” concludes the blog, written by Brian Marquardt, Director of Product Management, who felt the need to tell us he recently watched a YouTube clip of James Corden hanging out with The Backstreet Boys.

Google makes money every time someone watches a YouTube video it serves ads onto, so why would it be trying to help people spend less time doing so? The likely answer is that some people find it so addictive they’ve taken to abandoning the platform entirely in order to do other things like leaving the house and talking to people in real life. Google presumably wants to help them maintain their addictions at levels just short of pathological, to maximise YouTube traffic.

Youtube mobile tools 1

Youtube mobile tools 2

Youtube mobile tools 3

Internet giants approach the censorship point of no return

The apparently coordinated banning of conspiracy site InfoWars has brought to a head the role of social media companies in censoring public discussion.

InfoWars is headed by Alex Jones, a polemicist who likes to shout his often highly questionable views and theories at the camera. He has a large following and frequently says things that are offensive to many but, to date, he seems to have been accepted as part of the public debate mix, albeit a relatively extreme one.

However last week YouTube, Facebook, Apple and Spotify all took down content and channels from InfoWars on the grounds that it had broken their rules. This move was celebrated by many opponents of InfoWars but also called into question the grounds for taking the action and whether those rules are being applied equally to all.

Unsurprisingly Jones thinks it’s a conspiracy, but a number of other commentators are asking whether social media censorship rules are stricter for right-leaning commentators than for those on the left. Conservative publication Breitbart noted that groups such as Antifa – a militant far-left organisation – seem to escape unpunished for public statements that are at least as questionable.

And then there’s the matter of free speech in general, and more specifically censorship. Most people accept there has to be some limit on what can be said in public, such as shouting “fire” in a crowded theatre or explicitly calling for a crime to be committed, and the useful debate focuses on where that limit should be, as implemented by law.

But social media platforms are private companies and are thus free to implement their own policies independent of laws and apparent public will. For some time they have been censoring speech that would be allowed by law, which is their prerogative, but since most Western public discussion now takes place via this oligopoly of platforms, apparent coordinated action by them becomes a matter of public concern.

This concern is amplified when there is a perception of political bias behind the censorship decisions. Silicon Valley is generally considered to tend very much to the left of the political spectrum and social media seems to be especially twitchy about commentary deemed to be from the ‘far right’.

As the location for the most heated public debate, Twitter is the social media platform at the front line of the censorship issue. Intriguingly it has so far declined to ban InfoWars, despite evidence that it has violated Twitter rules. Of all the platforms Twitter seems to be having the most nuanced and sophisticated internal discussion on censorship, as evidenced by this NYT piece and the tweet below from CEO Jack Dorsey.

On top of the ethical and philosophical questions raised by the perception of selective censorship by social media companies there are also commercial ones. When YouTube started demonetizing videos it was in response to complaints from advertisers about having their brands placed alongside content ‘incompatible with their values’ or something like that. But there is a real danger, thanks to phenomena like the Streisand effect, that high profile censorship such as this will permanently drive traffic away from their platforms and create the demand for fresh competitors.

Sooner or later the big internet companies surely have to explicitly detail the cut-off point for what speech they consider ‘unacceptable’ and clearly demonstrate they are enforcing it even-handedly, or face an increasing backlash. It seems appropriate to conclude by referring the discussion to a couple of prominent YouTubers who, while no fans of InfoWars, are very concerned about selective censorship.


YouTube strikes back in increasingly important mobile video battle

No sooner does Instagram make its mobile video move than YouTube and Snapchat counter-attack in an area of growing commercial significance to telcos too.

Facebook subsidiary Instagram launched IGTV yesterday in a bid to wrest back some of the initiative in a mobile video space largely dominated by YouTube. In hindsight the announcement may have been timed to steal some of YouTube’s thunder, because just a few hours later the Google-owned giant announced a bunch of initiatives designed to keep its ‘creators’ loyal.

It’s no coincidence that we’re getting so many online video-related announcements right now because we’re in the middle of VidCon – a big event devoted entirely to just that. Traditionally it has been a convention of YouTubers, i.e. people who devote much of their time to creating video content and sticking it up on YouTube. Since its acquisition by Viacom earlier this year it seems to have embraced the corporate world more closely and this is reflected in all these announcements.

Monetization is a critical issue when it comes to user-generated video as kids increasingly aspire to make a living that way. The most successful YouTubers make millions, but traffic doesn’t always map directly onto revenue, with YouTube reserving the right not to serve ads on content it thinks advertisers might not want to be associated with.

The result of this approach is that creators are increasingly finding their videos ‘demonetized’, with no prospect of traffic being converted into money. YouTube seems to be aware how alienating this process is to its creators and has belatedly moved to appease them with some new tools to help them pay the bills beyond taking a cut of ad revenue.

In a blog post Neal Mohan, Chief Product Officer at YouTube, announced that its creators are earning more money than ever from advertising, but conceded the need to create other revenue channels, building on the Super Chat service introduced last year, which enables viewers of a live stream to pay money in order to make their comments more prominent.

So now we have Channel Memberships, a premium subscription service that offers special access to the creator for five dollars per month. YouTube has also partnered with a merchandise specialist to assist creators with flogging branded tat to their viewers. Lastly there is Premieres, which aims to turn a pre-recorded video into a live event, thus unlocking the potential of things like Super Chat.


All this stuff is as much a response to alternative revenue-generation mechanisms such as Patreon – an easy way for anyone to pledge small regular donations to someone they want to support, thus bypassing the advertising channel – as it is to Facebook. There’s also Amazon-owned Twitch, which live-streams games and allows viewers to pay for premium virtual tat such as emojis, if that’s what floats their boat.

The other big player in mobile video is Snapchat, which has been offering portrait-aligned video suspiciously similar to IGTV for some time. With much less fanfare it has just announced that its Shows video format, which was previously only available to corporate producers, has now been extended to regular creators.

The only other major social media platform we haven’t mentioned yet is Twitter, but BuzzFeed reckons mobile video has been a key reason for the recent turnaround in its fortunes. If you had bought Twitter stock in August of last year you would have tripled your money by now and, alongside a focus on news, a general rethink and a healthy dollop of luck, BuzzFeed puts that down to an aggressive push into premium live video.

A visit to your Twitter stream typically finds sponsored video clips interspersed within the usual bile, virtue-signalling and twitch hunts. These could be ads, news clips, sports coverage. “Video is really really important to us,” Matt Derella, Twitter’s head of revenue and content partnerships, told BuzzFeed. “It’s our largest format in terms of revenue.”

All this is directly relevant to the telecoms industry as video continues to put enormous strain on networks and operators increasingly look to content to boost their ARPUs and become less dependent on traditional contracts for their revenues. Internet companies are becoming increasingly reliant on mobile video for their business models, which could create a host of new opportunities for telcos able to move quickly enough to exploit them.

Facebook takes fight to YouTube on mobile with IGTV

Facebook subsidiary Instagram has launched a new app dedicated to long-form video on mobile devices that seems designed to compete with dominant incumbent YouTube.

If you want to publish video longer than a few minutes on the internet right now (outside of China) YouTube is by far the best place to get traffic and maybe even monetise your efforts. There are alternative specialist services, such as Vimeo, but they’re much smaller, and other social media platforms tend to be used for mini clips.

Instagram has traditionally been all about photos and while some producers, such as comedian Kyle Dunnigan, have adapted their video content to it, to date Instagram and Facebook have left the longer video market to their great competitor Google.

Not any more, it seems. IGTV is a dedicated service within Instagram as well as a standalone app that increases the maximum length of uploaded videos from one minute to one hour. Additionally it displays the video in portrait (or vertical, as Instagram puts it) and full-screen, while YouTube requires you to rotate your phone by 90 degrees to view full-screen video in landscape. Oh, the first-world problems we have to endure.

“IGTV is different in a few ways,” said Kevin Systrom, Co-Founder & CEO of Instagram, in a blog that rather embarrassingly seems to feature a broken link to a video. “First, it’s built for how you actually use your phone, so videos are full screen and vertical. Also, unlike on Instagram, videos aren’t limited to one minute. Instead, each video can be up to an hour long.”

IGTV screens

This launch seems designed to address several important issues for Facebook. It has been agonizing over user engagement and seems to want people to use the main Facebook platform for ‘engaging with each other’ somehow, instead of just monging out at cat video compilations, so it seems to be hoping to ring-fence the video stuff on Instagram. But even this strategy seems confused, as we saw with the recent announcement of a tool apparently designed to limit the time spent on Instagram.

The bigger play seems to be to take on YouTube as the place for user-generated content. YouTube has been spending most of this year trying to alienate many of its producers by refusing to serve ads against their content, thus depriving them of the main means of being paid for their work. The market is desperate for a viable alternative and this could be it, so we imagine YouTube execs will be watching this situation very closely.

Having said that, they don’t need to panic just yet, because right now there’s no way of directly monetising videos on IGTV. As reported by Variety, Systrom said he wants to build ‘engagement’ first but tentatively conceded that monetising is “obviously a very reasonable place to end up.”

If and when that does happen Facebook has the opportunity to steal a lot of video business from Google, but only if it does a better job of looking after its producers than YouTube has. Advertisers are very sensitive about having their brand positioned next to the ‘wrong’ kind of content, but accurately identifying that content is tricky. YouTube is currently erring on the side of caution, leading to innocuous videos being demonetised. If even Google can’t get that algorithm right, what hope does Facebook have?

Teens prefer cat videos to online stalking – Pew Research

The Pew Research Center has released new research which questions whether we can continue to call Facebook the king of social media.

The statistics show that only 51% of US teens aged 13-17 use Facebook, with only 10% saying it is their preferred social media platform. In comparison, YouTube was the most popular, with 85% of teens stating they use the platform and 32% saying it is the one they visit most frequently, while Snapchat attracts the attention of 69%, with 35% of respondents stating they use it the most. Interestingly, Facebook-owned Instagram was more popular than Facebook itself, with 72% of respondents using the platform.

“This shift in teens’ social media use is just one example of how the technology landscape for young people has evolved since the Center’s last survey of teens and technology use in 2014-2015,” Jingjing Jiang and Monica Anderson said in a blog post.

“The social media landscape in which teens reside looks markedly different than it did as recently as three years ago. In the Center’s 2014-2015 survey of teen social media use, 71% of teens reported being Facebook users. No other platform was used by a clear majority of teens at the time: Around half (52%) of teens said they used Instagram, while 41% reported using Snapchat.”
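Putting the two surveys’ headline figures side by side makes the shift stark. A quick sketch using the percentages quoted above (YouTube wasn’t broken out as a platform in the 2014-15 survey, so it is omitted):

```python
# Teen usage of each platform in Pew's 2014-15 survey versus the 2018
# survey, expressed as a percentage-point change.

teen_usage = {
    # platform: (2014-15 %, 2018 %)
    "Facebook":  (71, 51),
    "Instagram": (52, 72),
    "Snapchat":  (41, 69),
}

for platform, (then, now) in teen_usage.items():
    print(f"{platform}: {then}% -> {now}% ({now - then:+d} points)")
```

Facebook is the only one of the three to have gone backwards, shedding 20 points while its own subsidiary Instagram gained the same amount.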

Perhaps one of the reasons is the increase in smartphone penetration. This edition of the research found 95% of respondents own or have access to a smartphone, up from 73% when the research was conducted in 2014-15. Instagram and Snapchat are mobile-first platforms, meaning that in the days of lower smartphone penetration many teens simply couldn’t use them. YouTube is of course popular on both desktop and mobile, but the explosion in video content over the last couple of years, as well as cheaper data tariffs, may go some way to explaining its dominance.

A final interesting statistic is the amount of time teens are spending online. 45% of teens state they are online almost constantly, compared to 24% in the 2014-15 edition of the research, though only a small proportion feel it is having a negative impact. When asked what impact the online world was having on their lives, 31% believed it to be mostly positive, 45% were neutral, while only 24% said it was negative.

The internet has been an important aspect of our lives since its inception, but looking at research focused on the future generations, it is starting to appear more dominant than important. Perhaps Facebook is one of those brands that peaked too early.

Facebook and YouTube reveal their offensiveness offensive and intolerance of intolerance

Facebook and Google-owned YouTube have both attempted to offer some transparency when it comes to one of today’s haziest trends: abusive, offensive or ‘inappropriate’ online content.

While these are firms which have hardly helped themselves over the last couple of years, given their almost allergic reaction to transparency, you do have to have a bit of sympathy; being a mediator of appropriate content is a tightrope walk. Come down too hard and you will be seen as overly sensitive and limiting freedom of speech, while leaning too far the other way can lead to loss of advertisers, a PR disaster and more ammunition for self-righteous politicians.

Starting with Facebook, the team has decided to unveil its policies towards offensive content as part of its continued damage-limitation quest following the Cambridge Analytica scandal. Employing a content policy team, which Facebook describes as comprising subject matter experts in numerous fields including hate speech, child safety and terrorism, is a good starting point and sets the standards. These individuals will consult external experts to understand how social norms and language develop, adapting the policies over time and hopefully finding the sweet spot which keeps everyone happy.

This is the big issue we see with the policy: there is no such thing as a policy which will keep everyone happy. Some countries like guns, some don't. Some treat women as equals, some are stuck in the stone age when it comes to enlightenment. Some embrace national stereotypes, while others find nicknames offensive. The world is regionalised, as are the people who live in it. Facebook has been forced by rule makers and politicians into finding a solution where none exists, unless you want social media to turn into a beige exchange of people discussing the weather.

But as the social media giant states, a policy is only as good as its enforcement. Reports of offensive content will be reviewed by the Community Operations team (currently 7,500 content reviewers, with additional help from artificial intelligence) before action is taken. That said, should a post be removed, the user now has the right to appeal. This certainly sounds like an incredibly democratic way to approach the situation, but again, it leads to nuances which could send the team further down the rabbit hole.

The person who posted the content deemed offensive may not have found it offensive in the first place. This is where it becomes tricky. Obviously there are examples which are clearly offensive and inappropriate, but there will be examples where content is offensive to some and not others. Take guns for instance. In the US, posing with an instrument of death would not be deemed offensive, but in the UK, parents might worry that if their children see the image, weapons and violence become normalised in their minds. These are two seemingly similar countries with quite different opinions on such content. Who decides who is right and who is wrong?

While Facebook has tried to demonstrate its newly found principles of transparency, YouTube has been less nuanced, simply posting figures from the last quarter on how many videos were removed from the platform due to policy violations. The report was released at the same time as the Google financials, so we can presumably expect it once a quarter.

Over the last quarter, 8 million videos were removed from the platform, the majority of which were spam or attempts to upload adult content. 6.7 million of these videos were first flagged for review by machines rather than humans, with 76% being removed before receiving a single view. Removing a video before it has been viewed is proving a successful exercise for YouTube, as the chart below shows for videos depicting violent extremism.


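As a rough sanity check on those headline figures, the arithmetic below works through what they imply. One assumption is ours rather than the report's: that the 76% "removed before a single view" applies to the machine-flagged videos.

```python
# Rough arithmetic on YouTube's reported quarterly removal figures.
# Assumption (ours, not stated explicitly in the report): the 76%
# "removed before a single view" applies to the machine-flagged videos.
total_removed = 8_000_000
machine_flagged = 6_700_000
removed_before_view = int(machine_flagged * 0.76)

# Share of all removals that machines caught first.
machine_share = machine_flagged / total_removed

print(f"Machine-flagged share of all removals: {machine_share:.0%}")
print(f"Removed before a single view: ~{removed_before_view:,}")
```

On those assumptions, machines flagged roughly 84% of everything removed, and around 5.1 million videos never reached a human viewer at all.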
The promise of artificial intelligence in this space is probably the only way the problem is going to be dealt with, and YouTube does seem to be getting a good handle on the situation. You do have to bear in mind there are far fewer posts to review on YouTube compared to Facebook, though being able to lean on the expertise of the Google AI team will certainly help here.

While the machine learning technologies developed by YouTube are able to remove videos automatically, it is worth noting that the majority still make their way to human content reviewers. First the video itself is reviewed, but the team also looks at metadata such as the title, description and tags. The reviewer will also assess context, identifying whether the purpose of the content is educational, documentary, scientific or artistic.

This is where YouTube is taking a stance against the PC army. Should the video fall into one of these categories, it will generally be left up. People have the right to be offended, but that doesn't mean what they are offended by is offensive. Some of these videos might be hidden behind an age restriction, while in other cases nothing is done at all. A good example would be those who do not believe in Darwin's theory of evolution: just because some schools of thought favour creationism and are offended by these videos does not mean everyone is. This element of YouTube's policy is something we like; the team seems to realise there is no such thing as a perfect solution for everyone, but is taking appropriate steps to make sensible and logical decisions.

We should never give the internet giants too much credit, as most have shown themselves to be untrustworthy or misleading in one area or another, but tackling offensive content online is a very difficult task. There is no such thing as a perfect policy which covers everyone around the world, and we wonder whether Facebook's search for this silver bullet could land it in a worse position than it is in now.

The social media censorship debate intensifies

Twitter has been accused of ‘shadowbanning’ certain types of user, once more calling into question the role of private companies in public censorship.

The accusation comes in the form of a report from Project Veritas, which used hidden cameras to record current and former Twitter employees appearing to confirm and endorse the practice of 'downranking' or even 'shadowbanning' specific Twitter users if employees disapprove of what those users are posting.

In essence Twitter stands accused by this report of deprioritising posts from certain users without notifying them. This results in those users getting less prominence on the platform and thus, it's alleged, effectively having their posts buried. The criteria for this deprioritisation are unclear, thus leaving Twitter open to accusations of potential bias along ideological, commercial or other grounds.

Twitter has responded via Fox News with the following statement, which also addresses accusations that it’s too quick to divulge private communications such as direct messages to state agencies.

“Twitter only responds to valid legal requests, and does not share any user information with law enforcement without such a request,” said the Twitter spokesperson. “We deplore the deceptive and underhanded tactics by which this footage was obtained and selectively edited to fit a pre-determined narrative.

“Twitter is committed to enforcing our rules without bias and empowering every voice on our platform, in accordance with the Twitter Rules. Twitter does not shadowban accounts. We do take actions to downrank accounts that are abusive, and mark them accordingly so people can still click through and see these Tweets if they so choose.”

The question raised by Twitter's response revolves around how it defines 'accounts that are abusive'. It's probably not hyperbole to suggest that the majority of Twitter accounts produce comments that could be considered abusive from time to time, yet we assume the majority of accounts are not being downranked.

It seems unlikely that political ideology forms a central part of the internet giants’ business strategies – certainly not above growth, profit, etc. But as the Damore vs Google case shows there is concern that certain viewpoints may be so intrinsic to some organisations that a degree of bias is built into their entire corporate culture. If this is the case then it’s not unreasonable to assume that employees charged with policing their public platforms may also be influenced in that direction.

Another concern over private organisations censoring public discourse is derived directly from the profit motive. Many mainstream media outlets routinely focus on a core set of narratives designed to appeal to their audience – just look at the Guardian versus the Daily Mail – and it's not inconceivable that social media might be tempted to do the same.

It is therefore to Facebook’s credit that it has recently announced a major change to how it prioritises what appears on a given individual’s Facebook feed to give preference to personal stuff over commercially driven content. “I’m changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions,” said Facebook CEO Mark Zuckerberg.

On the flip side, Google-owned YouTube is in the middle of an ongoing controversy over the practice of 'demonetization'. Earlier this year YouTube faced a lot of pressure to be more careful about which videos it serves ads on, and it has since been a lot more proactive in this area. Since professional YouTubers largely rely on ad revenue for their business model, having that stream removed is very significant and potentially a form of censorship.

The original controversy revolved around ads being served against what was considered to be ‘extremist’ content and it’s hard to argue that putting your brand next to such stuff is a good look. But just as with other pivotal terms such as ‘hate’ and ‘abusive’, we lack a broad consensus for the definition of ‘extremist’.

One example of a popular YouTuber who is currently seeing some of his content demonetized by YouTube, but who it would be difficult to describe as 'extremist', is US interviewer Dave Rubin. His M.O. is to film extended dialogues with interesting people, and his stated socio-political position is 'classical liberal' with an emphasis on freedom of speech. Here's his latest tweet on being demonetized by YouTube.

We seem to be at a critical juncture regarding public discussion on the internet, with the almost impossible task of balancing a variety of conflicting and increasingly entrenched interests being passed around like a hot potato. This topic was recently discussed at the CES tech show, which you can watch below (warning: there’s about a minute of pointless noise at the beginning). The point made by Eric Weinstein at 17:00 about the different forms of fake news touches on part of the challenge and you can see him expand on them to one Dave Rubin below that.