The Silicon Valley inquisition gathers pace

A number of independent online commentators have been blacklisted by technology giants for seemingly arbitrary reasons.

The past few weeks have seen another round of purging of content creators who rely on the internet for a living. The reasons given vary but usually default to some kind of transgression of the platforms’ terms and conditions of use. However, these Ts and Cs tend to be vaguely worded and appear to be selectively enforced, leading to fears that these decisions are driven as much by subjective ideology as by exceptional misbehaviour on the part of creators.

If there is an ideological bias it would appear to be against commentators who advocate freedom of speech and unfettered dialogue. On the other side of the fence are those concerned with concepts such as ‘hate speech’, who seek to ensure nothing deemed ‘offensive’ is tolerated in the public domain.

Those latter terms are ill-defined and thus subject to a wide range of interpretation, which means rules that rely on them will, by definition, be subjectively enforced. In spite of that there is growing evidence that Silicon Valley companies are unanimous in their assessments of who should and shouldn’t be banned from all of their public platforms.

We have previously written about the coordinated banning of InfoWars from pretty much all internet publication channels and a subsequent purge of ‘inauthentic activity’ from social media. Now we can add commentator Gavin McInnes to the list of people apparently banned from all public internet platforms and, most worryingly of all, the removal of popular YouTuber Sargon of Akkad from micro-funding platform Patreon.

The internet, social media and especially YouTube have revolutionised the way in which regular punters get access to information, commentary and discussion. Free from the constraints imposed on broadcast TV, YouTubers have heralded a new era of on-demand, unfettered, user-generated content that has rapidly superseded TV as the viewing platform of choice.

Their primary source of income has traditionally been the core internet model: monetizing traffic via serving ads. But YouTube has been removing ads from any videos that have even the slightest chance of upsetting any of its advertisers for some time, forcing creators to call for direct funding from their audience to compensate.

The best-known micro-funding service is Patreon, which is where many YouTubers send their audience if they want to pay for their content. Any decision by Patreon to ban its users can therefore have massive implications for the career and income of the recipient of the ban. Sargon is thought to have had revenues from Patreon alone in excess of £100,000 per year, a revenue stream that has been unilaterally cut off without even a warning, it seems.

Every time an internet company moves against a popular internet figure there is inevitably an outcry on both sides of the matter. Prominent advocates of free speech such as Jordan Peterson and Dave Rubin have tweeted their support for Sargon, while many media outlets are actively celebrating the punishment, or outright removal from the internet, of people they don’t like.

The age-old debate concerning the optimal balance between safety and freedom is being won by those biased in favour of the former on the internet. The leaders of those companies are in a difficult position regarding censorship of their platforms but they seem to be basing their decisions on fear of the internet mob rather than rational, objective enforcement of universal rules. This isn’t a new phenomenon but it seems to be rapidly getting worse.

To finish here’s YouTuber and independent journalist Tim Pool giving his perspective while he still can.


Facebook and Twitter coordinate once more over censorship

Facebook recently removed hundreds of accounts for ‘inauthentic’ behaviour and many of those affected have also seen their Twitter accounts suspended.

In a press release entitled ‘Removing Additional Inauthentic Activity from Facebook’, Facebook explained that it doesn’t like inauthentic behaviour, by which it means accounts that seek to mislead people about their real identities and/or objectives. While there is some concern that this could be driven by the desire to influence politics, Facebook reckons it’s mostly ‘clickbait’, designed to drive and then monetise internet traffic.

“And like the politically motivated activity we’ve seen, the ‘news’ stories or opinions these accounts and pages share are often indistinguishable from legitimate political debate,” said the release. “This is why it’s so important we look at these actors’ behaviour – such as whether they’re using fake accounts or repeatedly posting spam – rather than their content when deciding which of these accounts, pages or groups to remove.”

So Facebook is not saying it’s the arbiter of ‘authentic’ speech, which is very wise as that would put it in a highly compromised position. Instead it’s taking action against people posting political content via supposedly fake accounts or who are seen to generate spam. It seems to be hoping this will allow it to remove certain accounts that focus on political content without being accused of political meddling or bias.

All this context and preamble was offered to set up the big reveal, which is that Facebook has removed 559 Pages and 251 accounts that have broken its rules against spam and coordinated inauthentic behaviour. It looks like the timing of this renewed purge is influenced by the imminent US mid-term elections, with Facebook keen to avoid a repetition of claims made during the Cambridge Analytica scandal that it facilitated political meddling by allowing too much of this sort of thing to take place during the last US general election.

Of course Facebook is free to quality control its platform as much as it likes, but if it is seen to lack neutrality and objectivity in doing so, it runs the risk of alienating those of its users who feel discriminated against. In this case the loudest dissent seems to be coming from independent media, some of which feel they have been mistakenly identified as clickbaiters.

The Washington Post spoke to ‘Reasonable People Unite’, which was shut down by Facebook but which claims to be legitimate, not to mention authentic. Meanwhile another publication reckons libertarian publishers were targeted and spoke to the founder of The Free Thought Project, who also found himself banned in spite of claimed legitimacy.

Matt Agorist, who writes for The Free Thought Project, tweeted the following, and his subsequent piece indicated that his employer had also been removed from Twitter. This seems to be another manifestation (Alex Jones having been the most high-profile previous case) of coordinated activity between the two sites that, together with YouTube, dominate public debate in the US. A number of other publishers removed by Facebook seem now to have been suspended by Twitter.

Other independent journalists have joined the outcry, including Caitlin Johnstone and Tim Pool in the video below. The latter makes the point that many of those purged seem to be left-leaning, which at least balances the previous impression that right-leaning commentators were being disproportionately targeted, and that many of the accounts taken down may well have been guilty as charged. But the inherent subjectivity involved in determining the relative legitimacy of small publishers is a problem that is only amplified by this latest move.

It seems unlikely that the primary objective of these social media giants is to impose their world view via the censorship of content they disagree with, but this kind of coordinated banning does feel like unilateral speech policing and that should be of concern, regardless of your political position. Twitter doesn’t even seem to have made any public statements on the matter. Meanwhile the range of views considered ‘authentic’ by these private companies seems to be narrowing by the day.


Twitter wants your help with censorship

Social network Twitter continues to agonise over how it should censor its users and thinks getting them involved in the process might help.

While all social media companies, and indeed any involved in the publication of user-generated content, are under great pressure to eradicate horridness from their platforms, Twitter probably has the greatest volume and proportion of it. Content and exchanges can get pretty heated on Facebook and YouTube, but public conversation giant Twitter is where things seem to really kick off.

This puts Twitter in a tricky position: it wants people to use it as much as possible, but would ideally like them to only say nice, inoffensive things. Even the most rose-tinted view of human nature and interaction reveals this to be impossible, so Twitter must therefore decide where on the nice/horrid continuum to draw the line and start censoring.

To date this responsibility has been handled internally, with a degree of rubber-stamping from the Trust and Safety Council – a bunch of individuals and groups that claim to be expert on the matter of online horridness and what to do about it. But this hasn’t been enough to calm suspicions that Twitter, along with the other tech giants, allows its own socio-political views to influence the selective enforcement of its own rules.

So now Twitter has decided to invite everyone to offer feedback every time it decides to implement a new layer of censorship. To date the term ‘hate’ has been a key factor in determining whether or not to censor and possibly ban a user. Twitter has attempted to define the term as any speech that attacks people according to race, gender, etc, but it has been widely accused of selectively enforcing that policy along exactly the same lines it claims to oppose, with members of some groups more likely to be punished than others.

Now Twitter wants to add the term ‘dehumanizing’ to its list of types of speech that aren’t allowed. “With this change, we want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target,” explained Twitter in a blog post, adding that such language might make violence seem more acceptable.

Even leaving aside Twitter’s surrender to the slippery slope fallacy, which is one of the main drivers behind the insidious spread of censorship into previously blameless areas of speech, this is arguably even more vague than ‘hate’. For example, does it include nicknames? Or, as the BBC asks, is dehumanizing language targeted at middle-aged white men just as hateful as that aimed at other identity groups?

Perhaps because it’s incapable of answering these crucial questions, Twitter wants everyone to tell it what they think of its definitions. A form on that blog post will be open for a couple of weeks and Twitter promises to bear this public feedback in mind when it next updates its rules. What isn’t clear is how transparent Twitter will be about the feedback or how much weight it will carry. What seems more likely is that this is an attempt to abdicate responsibility for its own decisions and deflect criticism of subsequent waves of censorship.


EU set to impose tough new rules on social media companies

The European Commission is reportedly planning to bring in new laws that will punish social media companies if they don’t remove terrorist content within an hour of it being flagged.

The news comes courtesy of the FT, which spoke to the EU commissioner for security, Julian King, on the matter of terrorists spreading their message over social media. “We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon,” he said, after reflecting that he doesn’t think enough progress has been made in this area.

Earlier this year the EU took the somewhat self-contradictory step of imposing some voluntary guidelines on social media companies to take down material that promotes terrorism within an hour of it being flagged. In hindsight that move seems to have been made in order to lay the ground for full legislation, with Europe now being able to claim its hand has been reluctantly forced by the failure of social media companies to do the job themselves.

So long as the legal stipulation is for content to be taken down when explicitly flagged as terrorist by police authorities, it should be pretty easy to enforce – indeed it could probably be automated. But legislation such as this does pose broader questions around censorship. How is ‘terrorist’ defined? Will there be a right of appeal? Will other organisations be given the power to demand content be taken down? Will this law be extended to other types of contentious content?

At the end of the FT piece it is noted that, while the EU still allows self-regulation on more subjective areas like ‘hate speech’ and ‘fake news’, Germany is a lot more authoritarian on the matter. Given the considerable influence Germany has over the European bureaucracy it’s not unreasonable to anticipate a time when the EU follows Germany’s lead on this matter.

Meanwhile US President Donald Trump – avid user of Twitter but loather of much of the mainstream media – got involved in the social media censorship debate via his favoured medium. You can see the tweets in question below and, while he appears to be motivated by concern that his own supporters are being selectively censored, his broader point is that censorship is bad, full stop.

Lastly Twitter CEO Jack Dorsey continues to publicly agonise about the topic of censorship and specifically how, if at all, he should apply it to his own platform. In an interview with CNN he conceded Twitter as a company has a left-leaning bias, but stressed the platform is policed according to user behaviour rather than perceived ideology. He also noted that transparency is the only answer to allegations of bias.

Internet giants approach the censorship point of no return

The apparently coordinated banning of conspiracy site InfoWars has brought to a head the role of social media companies in censoring public discussion.

InfoWars is headed by Alex Jones, a polemicist who likes to shout his often highly questionable views and theories at the camera. He has a large following and frequently says things that are offensive to many but, to date, he seems to have been accepted as part of the public debate mix, albeit a relatively extreme one.

However last week YouTube, Facebook, Apple and Spotify all took down content and channels from InfoWars on the grounds that it had broken their rules. This move was celebrated by many opponents of InfoWars but it also called into question the grounds for taking the action and whether those rules are being applied equally to all.

Unsurprisingly Jones thinks it’s a conspiracy, but a number of other commentators are asking whether social media censorship rules are stricter for right-leaning commentators than for those on the left. Conservative publication Breitbart noted that groups such as Antifa – a militant far-left organisation – seem to escape unpunished for public statements that are at least as questionable.

And then there’s the matter of free speech in general, and more specifically censorship. Most people accept there has to be some limit on what can be said in public, such as shouting “fire” in a crowded theatre or explicitly calling for a crime to be committed, and the useful debate focuses on where that limit should be, as implemented by law.

But social media platforms are private companies and are thus free to implement their own policies independent of laws and apparent public will. For some time they have been censoring speech that would be allowed by law, which is their prerogative, but since most Western public discussion now takes place via this oligopoly of platforms, apparent coordinated action by them becomes a matter of public concern.

This concern is amplified when there is a perception of political bias behind the censorship decisions. Silicon Valley is generally considered to tend very much to the left of the political spectrum and social media seems to be especially twitchy about commentary deemed to be from the ‘far right’.

As the location for the most heated public debate, Twitter is the social media platform at the front line of the censorship issue. Intriguingly it has so far declined to ban InfoWars, despite evidence that it has violated Twitter rules. Of all the platforms Twitter seems to be having the most nuanced and sophisticated internal discussion on censorship, as evidenced by this NYT piece and the tweet below from CEO Jack Dorsey.

On top of the ethical and philosophical questions raised by the perception of selective censorship by social media companies there are also commercial ones. When YouTube started demonetizing videos it was in response to complaints from advertisers about having their brands placed alongside content ‘incompatible with their values’ or something like that. But there is a real danger, thanks to phenomena like the Streisand effect, that high profile censorship such as this will permanently drive traffic away from their platforms and create the demand for fresh competitors.

Sooner or later the big internet companies surely have to explicitly detail the cut-off point for what speech they consider ‘unacceptable’ and clearly demonstrate they are enforcing it even-handedly, or face an increasing backlash. It seems appropriate to conclude by referring the discussion to a couple of prominent YouTubers who, while no fans of InfoWars, are very concerned about selective censorship.


Silicon Valley’s ugly duckling starting to blossom

Despite being one of the first social media networks to disrupt the way we communicate, Twitter has never really reaped the rewards of the connected economy, but perhaps this is changing.

In February, Twitter posted its first ever quarterly profit, and the latest financial report perhaps indicates this was not a fluke. The numbers are certainly heading in the right direction, and Twitter could yet prove to be a platform that collects its share of the digital bounty.

“Our second quarter results reflect the work we’re doing to ensure more people get value from Twitter every day,” said CEO Jack Dorsey. “We want people to feel safe freely expressing themselves and have launched new tools to address problem behaviours that distort and distract from the public conversation.

“We’re also continuing to make it easier for people to find and follow breaking news and events, and have introduced machine learning algorithms that organize the conversation around events, beginning with the World Cup. These efforts contributed to healthy year-over-year daily active usage growth of 11% and demonstrate why we’re investing in the long-term health of Twitter.”

Looking specifically at the numbers, daily active users increased 11% year-on-year, while monthly active users increased to 335 million. Total revenues across the period grew 24% to $771 million, while the business remained profitable with net income of $100 million. The US accounted for $367 million, a 10% year-on-year increase, while international markets grew a very impressive 44% to $344 million. Asia was the big growth driver here, with Japan the standout, remaining the second largest territory in the Twitter footprint.

As we pointed out earlier in the year, Twitter just seems to be getting better at working with advertisers. Back in February, the team introduced a number of new features, including a new Promoted Tweet composer and a subscription advertising service for small businesses, which essentially made it easier for advertisers to use the platform as a means to engage potential customers.

Over the last three months, some of the more successful features included Video Website Cards, Video App Cards, In-Stream Video Ads, and Website Click Cards, with video ad revenue accounting for more than half of the total. These initiatives seem to have had a continued positive impact, as ad engagements increased 81% year-over-year and cost per engagement decreased 32% year-over-year.

Of course, helping advertisers is only half the story, as the platform has to remain engaging. It does help that some of the world’s more controversial (and terrifying) figures are using the platform as a primary means to communicate with the planet, but the team is also introducing more partnerships. With deals with ESPN, Viacom and NBCUniversal, the platform becomes more engaging, while the negative side of Twitter is being addressed with some success.

During the quarter, the team introduced new measures to handle spam, malicious automation, and platform manipulation. As of May 2018, systems identified and challenged more than 9 million potentially spammy or automated accounts per week, up from 6.4 million in December 2017. With the introduction of machine learning and automation processes, twice the number of accounts are being removed for violating spam conditions. The number of spam complaints from users has dropped from an average of approximately 25k per day in March, to approximately 17k per day in May.

Twitter seems to be doing what everyone sort of expected in the first place: delivering a platform which people like and a means for advertisers to communicate with them. It might have taken a few years to get there, but it seems Twitter is finally putting the ugly duckling image behind it.

Would you believe it, Twitter posts its first ever profit

Now this is not something many people were expecting, but the ugly duckling of the social media giants posted its first ever profit in the last quarter.

Over the course of the last three months Twitter reported revenues of $732 million, a 2% year-on-year increase, while profit stood at $91 million. We’ve said it before, but we’ll say it again as it is not something we expected to write: this is Twitter’s first ever quarter of profitability.

Advertising revenue totalled $644 million for the quarter, an increase of 1% year-over-year, while data licensing and other revenue was $87 million, up 10%. US advertising revenue shrank 8%, though international was up 17% year-on-year. Total ad engagements were up 75% year-over-year, which has been put down to engagement growth, improved products and better ad relevance to the user.

“Q4 was a strong finish to the year,” said Twitter CEO Jack Dorsey. This comment alone is a possible early entrant to understatement of the year.

“We returned to revenue growth, achieved our goal of GAAP profitability, increased our shipping cadence, and reached five consecutive quarters of double digit DAU growth. I’m proud of the steady progress we made in 2017, and confident in our path ahead.”

The final quarter also saw the team introduce a new Promoted Tweet composer that simplified the process of creating new Promoted Tweets. Advertisers who had access to this feature created 26% more Promoted Tweets and launched 13% more campaigns, spending 23% more on Twitter. Alongside this product, the team launched a new agency resource centre for mid-sized digital agencies and Twitter Promote Mode (TPM), a subscription advertising service that helps small businesses reach more people without the work of having to create ads or manage campaigns.

In short, the team made it easier for advertisers to spend money. A simple idea, but the best ones are. If this is the solution, you have to wonder how difficult the team were making it for advertisers to part with their cash over the last 11 years.

At the time of writing, Twitter’s share price was up an impressive 19%, though it is worth noting that overnight trading took the price higher. Nailing down a reason for the impressive performance is tricky, though the team has been purging fake accounts, which could explain advertisers’ greater faith in the platform.

Considering the rise of fake news and abuse on social media, all platform owners are attempting to create a more attractive environment for both consumers and advertisers. Another area to consider is user engagement. While the total number of users was flat, engagement was up year-on-year for both daily and monthly active users, with 12% and 4% increases respectively. Both of these factors will give advertisers greater confidence to part with cash.

Over the next couple of months there will be a few new changes to the platform as well. The main aim will be to improve core ad offerings through better performance and measurement, including ad platform improvements, self-serve measurement studies and third-party accreditation. The team will also explore new channels of demand, such as online video, and introduce new ways to buy ads on Twitter, including alpha testing of programmatic buying. Data and enterprise solutions are two other revenue areas where the team foresees more success.

Perhaps this is a corner turned for Twitter; we’ll see. Profitability will certainly be welcomed over the next three months, but the team could use a couple of ideas on how to increase the total number of users if it wants to make a real dent in the connected economy. Increasing the limit to 280 characters per tweet is a good start, but there will need to be more ideas like this in the pipeline.

The social media censorship debate intensifies

Twitter has been accused of ‘shadowbanning’ certain types of user, once more calling into question the role of private companies in public censorship.

The accusation comes in the form of a report from Project Veritas, which used hidden cameras to record current and former Twitter employees appearing to confirm and endorse the practice of ‘downranking’ or even ‘shadowbanning’ specific Twitter users if they disapprove of what they’re posting.

In essence Twitter stands accused by this report of deprioritising posts from certain users without notifying them. This results in those users getting less prominence on the platform and thus, it’s alleged, effectively having their posts buried. The criteria for this deprioritisation are unclear, thus leaving Twitter open to accusation of potential bias along ideological, commercial or other grounds.

Twitter has responded via Fox News with the following statement, which also addresses accusations that it’s too quick to divulge private communications such as direct messages to state agencies.

“Twitter only responds to valid legal requests, and does not share any user information with law enforcement without such a request,” said the Twitter spokesperson. “We deplore the deceptive and underhanded tactics by which this footage was obtained and selectively edited to fit a pre-determined narrative.

“Twitter is committed to enforcing our rules without bias and empowering every voice on our platform, in accordance with the Twitter Rules. Twitter does not shadowban accounts. We do take actions to downrank accounts that are abusive, and mark them accordingly so people can still click through and see these Tweets if they so choose.”

The question begged by Twitter’s response revolves around how it defines ‘accounts that are abusive’. It’s probably not hyperbole to suggest that the majority of Twitter accounts yield comments that could be considered abusive from time to time, but we assume the majority of accounts are not being downranked.

It seems unlikely that political ideology forms a central part of the internet giants’ business strategies – certainly not above growth, profit, etc. But as the Damore vs Google case shows there is concern that certain viewpoints may be so intrinsic to some organisations that a degree of bias is built into their entire corporate culture. If this is the case then it’s not unreasonable to assume that employees charged with policing their public platforms may also be influenced in that direction.

Another concern over private organisations censoring public discourse is derived directly from the profit motive. Many mainstream media routinely focus on a core set of narratives designed to appeal to their audience – just look at the Guardian vs the Daily Mail – and it’s not inconceivable that social media might be tempted to do the same.

It is therefore to Facebook’s credit that it has recently announced a major change to how it prioritises what appears on a given individual’s Facebook feed to give preference to personal stuff over commercially driven content. “I’m changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions,” said Facebook CEO Mark Zuckerberg.

On the flip side Google-owned YouTube is in the middle of an ongoing controversy over the practice of ‘demonetization’. YouTube faced a lot of pressure to be more careful about which videos it serves ads on earlier this year and has since been a lot more proactive in this area. Since professional YouTubers largely rely on ad revenue for their business model, having that stream removed is very significant and potentially a form of censorship.

The original controversy revolved around ads being served against what was considered to be ‘extremist’ content and it’s hard to argue that putting your brand next to such stuff is a good look. But just as with other pivotal terms such as ‘hate’ and ‘abusive’, we lack a broad consensus for the definition of ‘extremist’.

One example of a popular YouTuber who is currently seeing some of his content demonetized by YouTube, but who it would be difficult to describe as ‘extremist’, is US interviewer Dave Rubin. His M.O. is to film extended dialogues with interesting people and his stated socio-political position is ‘classic liberal’ with an emphasis on freedom of speech. Here’s his latest tweet on being demonetized by YouTube.

We seem to be at a critical juncture regarding public discussion on the internet, with the almost impossible task of balancing a variety of conflicting and increasingly entrenched interests being passed around like a hot potato. This topic was recently discussed at the CES tech show, which you can watch below (warning: there’s about a minute of pointless noise at the beginning). The point made by Eric Weinstein at 17:00 about the different forms of fake news touches on part of the challenge and you can see him expand on them to one Dave Rubin below that.