Social media censorship is a public concern and needs a public solution

With much of public discussion now taking place on the three main social platforms, the time has come to take editorial control away from their owners.

Even before the Cambridge Analytica scandal it had become apparent what a major role social media was already playing in public life. Politicians use Twitter to communicate directly with the electorate and spend billions on Facebook’s targeted advertising. Meanwhile a new generation of political and social commentators have been given their voice by YouTube and now attract audience numbers mainstream media can only dream of.

But all three of these platforms are commercial operations with obligations to maximise returns to their investors. In all three cases the business model is the classic media one of charging for access to their audience, which means they rely on advertisers for their revenue. This in turn can lead to conflicts of interest.

These have always existed in traditional media too. It is far more common than you might imagine for publications to receive pressure from advertisers to change editorial decisions under threat of advertising revenue being taken away if they don’t. They then face a simple choice: the short-term fix of capitulation to blackmail or the long-term investment in the trust of their audience and the credibility of their title.

The dilemma is different for social media, however, since they don’t produce the content they sell advertising against. Instead their business model has been to make it as easy as possible for anyone in the world to publish on their platforms, a model so successful that much of the advertising traditional media used to rely on has now moved to social media, such that digital ad spend is forecast to overtake traditional spend in the US this year.

Inevitably social media are facing the same kind of advertiser pressure traditional media always have, but their response is usually to capitulate. The reason for this is simple: they have no investment whatsoever in the content they host and no specific editorial theme or angle to protect. Because of this they seem much more ready to remove content and even ban users if they think it will keep the ad money flowing.

One other by-product of caving in to commercial pressure is that it sets a precedent, with advertisers emboldened to be ever more demanding with their requests. In the case of social media this has resulted in increasing pressure to ban any contributors advertisers fear may harm their brands by association. This capitulation has also emboldened activists to call for bans of anyone they disagree with, sometimes even alerting advertisers to the PR danger to increase the pressure further.

This PR pressure came to a head for Facebook last week when it decided to ban several accounts it had unilaterally decided were ‘dangerous’ and to pre-brief a number of media about it before even notifying the users themselves. While opponents of the banned people applauded the move, there has been wider concern about the arbitrary nature of the action and the power of Facebook to decide who gets to take part in the public conversation.

A common argument at times like this, often made by people otherwise deeply suspicious of the motives of big corporations, is to insist that private companies like Facebook are free to police their platforms as they see fit. But the fact that those platforms are where most public discussion takes place, and that those companies tend to simply buy competitors when they get too big, means this is a public concern from both a freedom of speech and a competition perspective.

Probably the most famous social media user is also arguably the most significant politician in the world: US President Donald Trump. He was deeply concerned by Facebook’s actions and, appropriately enough, wasted little time tweeting about it.

He went on to refer to one of the banned people – Paul Joseph Watson, a UK citizen – directly in subsequent tweets and retweeted a number of people objecting to the move, noting it appeared to target people on the conservative side of the political spectrum. Watson responded by calling for him to revoke the protection internet platforms have from the consequences of what is posted on them, since Facebook was now acting as a publisher.

Elsewhere one of the founders of Facebook published an op-ed calling for the break-up of Facebook on the grounds that it had grown too powerful and too much of that power is held by Mark Zuckerberg alone, who personally holds an overall majority of voting power in the company. Others have argued, however, that since Facebook isn’t a monopoly in any of its markets any attempt to break it up would be illegal and that better regulation would be much more effective.

Again it’s common for accusations of hypocrisy to be levelled at those who call for regulation to protect freedom of speech, but in this case the position is entirely consistent. If speech is being restricted by a private oligopoly then public intervention may be the only way to combat that. As any telecoms company could tell you, regulation of oligopolies in markets with high barriers to entry is commonplace and vital to ensure consumers aren’t held to ransom.

The father of the free market, Adam Smith, famously wrote the following in his definitive book ‘An Inquiry into the Nature and Causes of the Wealth of Nations’, in reference to the necessity of regulating cartels: “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.”

In this case we’re not talking about cartel behaviour, although sometimes their activities can seem suspiciously coordinated, nor is the primary concern a contrivance to raise prices. The commodity at stake is not money but the ability to take part in public discussion. This is arguably no less important a utility than water, electricity or telephony, but to date the companies that control it have faced far less scrutiny than utilities.

Such is this burden of responsibility that even Zuckerberg himself has publicly called for increased regulation. His underlying motive may be self-preservation, but the logic is sound. Nobody thinks access to public conversation should be controlled by private companies, but currently there is no regulation in place to take that decision away from them. Zuckerberg seems to have concluded that if there were, that would take a lot of the heat off him.

Decisive regulation may also pre-empt the litigation that is bound to hit social media companies as they continue to restrict their users. Watson indicated he’s tempted to take legal action, especially since the discovery phase would require Facebook to reveal the rationale behind his banning, possibly exposing the political bias he suspects is behind it. Watson also recently tweeted about a possible legal precedent that may be set in Poland, which would prohibit social media companies from acting against anything legal.

This would appear to be the best solution for everyone. Social media companies would be able to tell pushy advertisers that such decisions have been taken out of their hands, while users would have the law as their sole guide to acceptable public speech. There would still be the matter of different laws in different countries and deliberately censorious and ill-defined legal terms such as ‘hate speech’, but things would be a lot clearer than they are now.

Essentially this would mean that, in order to retain the protections afforded to platforms, social media would not be able to censor anything legal. Alternatively, if they want to take a more active editorial role they should be treated as publishers and be liable for all content published on them. Right now they’re somewhere in between and that’s unsustainable. Here’s independent journalist Tim Pool with a US take on the matter.

 

Facebook selectively bans ‘dangerous’ users

Social media giant Facebook has significantly stepped up its censorship efforts by banning seven accounts, six of which are often described as ‘far-right’.

The banned accounts belong to Alex Jones, his publication Infowars, Milo Yiannopoulos, Paul Joseph Watson, Laura Loomer, Paul Nehlen and Louis Farrakhan. Jones and Infowars, of which Watson is an editor, are known more for general conspiracy theories than any specific political stance, while Yiannopoulos is a notorious provocateur, Loomer a political activist, Nehlen a fringe US politician and Farrakhan the leader of The Nation of Islam.

“We’ve always banned individuals or organizations that promote or engage in violence and hate, regardless of ideology,” said a Facebook spokesperson in response to our query. “The process for evaluating potential violators is extensive and it is what led us to our decision to remove these accounts.” The ban also applies to Facebook-owned social networking service Instagram.

Further enquiries revealed that all were banned for violating Facebook’s policies against dangerous individuals and organizations. The things Facebook considers to be violations include:

  • Calling for violence against people based on factors like race, ethnicity or national origin
  • Following a ‘hateful’ ideology
  • Use of ‘hate speech’ or ‘slurs’, even on other social media sites
  • Whether they’ve had stuff removed from Facebook or Instagram before

There are also two tiers of ban. Facebook told us it usually censors all other users from even praising these banned people and organisations, regardless of context, which implies it does allow criticism of them or indeed neutral commentary. But there’s another tier that covers people who haven’t transgressed according to the criteria above but are still considered ‘dangerous’ by Facebook according to unspecified criteria. They get banned, but everyone else is still allowed to say nice things about them if they want. It’s unclear which category each of the banned people falls into, hence whether or not users should avoid saying nice things about them.

Facebook did indicate to us some of the signals that prompted it to take action in these cases. It looks like many of them are being punished for associating with Gavin McInnes, co-founder of Vice magazine and a provocateur in the Yiannopoulos mould. Jones recently interviewed him, Loomer ‘appeared with’ him and also praised another banned person, Faith Goldy, while Yiannopoulos himself also praised McInnes as well as banned activist Tommy Robinson. Farrakhan has been banned for multiple public statements disparaging Jews.

While all of the people banned have doubtless broken the stated rules at some time or other, questions remain about the specificity of those rules and how indiscriminately they’re enforced. According to Wikipedia (not necessarily the most authoritative source, but we have to start somewhere) hate speech is defined as ‘a statement intended to demean and brutalize another’.

On the surface this would seem to apply to the majority of discourse over social media, but the definition is typically narrowed to such statements that are deemed to be influenced by race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity. As the Wikipedia page illustrates, every country has its own hate speech legislation, but Facebook has decided to draft its own.

‘A hate organization is defined as: Any association of three or more people that is organized under a name, sign, or symbol and that has an ideology, statements, or physical actions that attack individuals based on characteristics, including race, religious affiliation, nationality, ethnicity, gender, sex, sexual orientation, serious disease or disability,’ explains the Facebook community standards page.

If we take these guidelines literally, therefore, you can be abusive on Facebook, as people frequently are, so long as you don’t call for violence or make any reference to the person’s identity. This is obviously a very difficult thing to enforce, leading to concerns that there may be a degree of political or other bias in doing so.

A common example of this cited by those who perceive political bias is the case of Antifa. The name is an abbreviation of ‘anti-fascist’ and it’s a group set up to counter perceived far-right activity. There are, however, numerous reports of this activity involving violence, especially against a group founded by McInnes called the Proud Boys. Antifa has even been labelled a domestic terrorist group in the US and yet many of its Facebook pages remain unbanned.

The matter of actively campaigning politicians is another hot-button issue. Nehlen seems to be the only member of this newly-banned group to describe himself as a politician, but in the UK at least two candidates standing in the imminent European elections have had their campaign accounts banned from Twitter due to the individuals in question having already been banned from that platform.

A common response to concerns about selective banning by social media platforms is that they’re private (although publicly-listed) companies and are thus free to ban whoever they please. The biggest problem with this argument is that they have also become the new public square, and the platform from which political campaigns are now largely based.

The Cambridge Analytica scandal hinged on concerns that Facebook had been used to manipulate elections and US President Donald Trump famously uses Twitter as his primary means of public communication. By selectively banning certain accounts social media companies not only open themselves up to accusations of political bias, they also run the risk of directly undermining the entire political process.

UK wants to force internet companies to think of the children

A UK regulator has drafted 16 things internet companies need to do to help protect children online or else.

To be precise it has launched a consultation on a document called ‘Age appropriate design: a code of practice for online services’, but there is little precedent for these consultations resulting in anything other than plan A being fully implemented. It lays down a bunch of rules that anyone providing online services that could be accessed by children – i.e. nearly all of them – needs to follow.

“This is the connected generation,” explained Information Commissioner Elizabeth Denham. “The internet and all its wonders are hardwired into their everyday lives. We shouldn’t have to prevent our children from being able to use it, but we must demand that they are protected when they do. This code does that.

“The ICO’s Code of Practice is a significant step, but it’s just part of the solution to online harms. We see our work as complementary to the current focus on online harms and look forward to participating in discussions regarding the Government’s white paper.”

There are many conceits and Orwellian aspirations implied in those two short statements, not least the implication that the government could prevent children from being able to access the internet if it wanted to. But then nobody’s in favour of harm, are they, so surely this is all for the best. Here’s a summary of the 16 commandments.

  1. Best interests of the child

Protect them from any conceivable harm but you’re still allowed to make money so long as you do that.

  2. Age-appropriate application

If you can stop kids accessing your stuff then don’t worry about all these rules.

  3. Transparency

Provide clear privacy information, including ‘bite sized’ explanations at the point at which use of personal data is activated that kids can understand.

  4. Detrimental use of data

Don’t use kids’ data in a way that might be detrimental to them.

  5. Policies and community standards

Implement your own policies.

  6. Default settings

Privacy settings must be ‘high’ by default and difficult to change. Reset existing user settings accordingly.

  7. Data minimisation

Only collect the minimum amount of data you need to provide your service.

  8. Data sharing

Don’t share kids’ personal data unless you’ve got a really good reason to do so.

  9. Geolocation

Switch it off by default unless you’ve got a really good reason not to, and even then make it clear that it’s on.

  10. Parental controls

Let kids know when their parents are keeping an eye on them.

  11. Profiling

Turn it off by default unless you’ve got a really good reason not to and even then think of the children.

  12. Nudge techniques

Don’t try to persuade kids to lower their privacy protections and don’t use things like reward loops to keep kids engaged. This could even include ‘likes’.

  13. Connected toys and devices

All this applies to them too.

  14. Online tools

Give kids tools to protect themselves online and make them prominent.

  15. Data protection impact assessments

A bureaucratic process to demonstrate you’ve complied with these rules.

  16. Governance and accountability

More bureaucracy to show you’ve done what you’re told.

“If you don’t comply with the code, you are likely to find it difficult to demonstrate that your processing is fair and complies with the GDPR and PECR,” warns the consultation document. “If you process a child’s personal data in breach of this code and the GDPR or PECR, we can take action against you.

“Tools at our disposal include assessment notices, warnings, reprimands, enforcement notices and penalty notices (administrative fines). For serious breaches of the data protection principles, we have the power to issue fines of up to €20 million or 4% of your annual worldwide turnover, whichever is higher.”
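The ‘whichever is higher’ fine cap quoted above is simple arithmetic; here’s a rough sketch of how it works (figures as quoted in the consultation document, and purely illustrative):

```python
def max_gdpr_fine(annual_worldwide_turnover_eur: float) -> float:
    """Upper bound of a fine for serious breaches, as quoted:
    the higher of EUR 20 million or 4% of annual worldwide turnover."""
    return max(20_000_000.0, 0.04 * annual_worldwide_turnover_eur)

# The flat EUR 20m figure dominates until turnover passes EUR 500m,
# after which the 4% component takes over.
print(max_gdpr_fine(100_000_000))    # smaller company: flat EUR 20m cap
print(max_gdpr_fine(2_000_000_000))  # larger company: 4% of turnover
```

In other words, the 4% component only bites for companies turning over more than €500 million a year; for everyone else the flat €20 million figure is the ceiling.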

Some of the above points, such as 3, 5 and 14, seem perfectly sensible, but taken all together this initiative seems designed to massively increase the bureaucratic burden on nearly all internet companies. As ever the largest ones can just call on their compliance departments to mitigate the restrictions and keep the companies out of trouble. Small ones, however, may have to just impose age restrictions.

In that respect this seems like an extension of the UK porn block law, which Wired does a good job of picking holes in below. At the very least this sort of thing is great news for VPN providers. The announcement coincides with the European Copyright Directive clearing its final hurdle, so before long everyone will be able to access the internet secure in the knowledge that nothing bad will ever happen to them.

 

As Facebook fails once more Zuck faces rebellion from activist investors

All Facebook sites were down once more yesterday, which coincided with Facebook shareholders calling for its founder to have less control over the company.

According to Bloomberg this marked the third time this year the social media giant has suffered a major outage. Not just Facebook, but Instagram, WhatsApp and Messenger were all affected by the outage, reminding everyone just how much social goodness is controlled by just one company. Facebook doesn’t seem to have said anything other than a brief, generic apology.

It has been widely observed that this increased incidence of outages coincides with Facebook’s decision to merge its various messaging apps onto one platform and put a greater emphasis on privacy a month or so ago. There is definitely some merit in that revised strategy and it wouldn’t be surprising if it caused some service disruption, but if so why not just come out and admit it?

One reason may be Facebook’s increasingly restive shareholders. In a recent filing ahead of its annual shareholders meeting Facebook listed a proposal calling for all stock to have equal voting power. The central issue is that Class B stock, which isn’t publicly traded, has ten times more voting power than regular Class A stock. By bizarre coincidence founder Mark Zuckerberg owns enough, apparently, to have a majority in any shareholder vote.

“Since July 2018, Facebook value dropped as much as 40% due to management and Board decisions that have not protected shareholder value,” opened the supporting statement. “By allowing certain stock more voting power, our company takes public shareholder money but does not provide us an equal voice in our company’s governance. Founder Mark Zuckerberg controls over 51% of the vote, though he owns only 13% of the economic value of the firm.”

Facebook’s share price went down the toilet after it reported rubbish numbers in the middle of last year. Having peaked at $217 just before those earnings it plunged to a nadir of $124 by Christmas, but has since recovered to $179 – an 18% drop from the peak – which is close to the pre-Cambridge Analytica peak. So the claim that bad management decisions have diminished shareholder value seems weak.
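The percentage moves involved are easy to verify; a quick sketch using the share prices cited above:

```python
def pct_change(old_price: float, new_price: float) -> float:
    """Percentage change from old_price to new_price."""
    return (new_price - old_price) / old_price * 100

# Peak of $217 to the Christmas nadir of $124, then recovery to $179
print(round(pct_change(217, 124)))  # peak-to-trough: -43
print(round(pct_change(217, 179)))  # peak-to-current: -18
```

So the stock fell roughly 43% peak-to-trough but sits only about 18% below its all-time high at the time of writing, which is why the shareholder complaint about destroyed value looks overstated.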

And while the disproportionate influence of these Class B shares does seem unfair, they have been in place since the IPO and anyone buying Class A shares will have been aware of them, so it seems somewhat disingenuous for such stockholders to suddenly start crying now. Having said that if Facebook keeps dropping the ball we can expect to see such calls increase in frequency and intensity, however futile they may be.

Facebook placates Europe for now

In a bid to keep the European Commission off its back social media giant Facebook is admitting to its users that they’re the product.

Despite this being the media business model since the first newspapers were printed, the EC seems to think making Facebook spell out its business model represents some kind of progress. Those few users that even care will now be able to find some kind of ‘digital media for dummies’ guide buried somewhere in their Facebook details. This is probably a product of all the faux outrage expressed when it was revealed that politicians can use Facebook for targeted advertising before elections.

This thrilling new section of Facebook will also clarify the nature of the implicit contract users enter into with Facebook when they post stuff, as well as clarify the rules for removing posts and suspending accounts. Facebook has vowed to be a bit more reasonable when it comes to unilaterally changing its Ts and Cs, and to admit its liabilities when it comes to things like Cambridge Analytica.

“Today Facebook finally shows commitment to more transparency and straight forward language in its terms of use,” said Commissioner Vera Jourová. “A company that wants to restore consumers trust after the Facebook/ Cambridge Analytica scandal should not hide behind complicated, legalistic jargon on how it is making billions on people’s data. Now, users will clearly understand that their data is used by the social network to sell targeted ads. By joining forces, the consumer authorities and the European Commission, stand up for the rights of EU consumers.”

If this is all Facebook has to do to get the EC off its back then Mark Zuckerberg must be laughing himself sick right now, pausing only to sign off a massive pay rise for Nick Clegg. Companies like Google and Microsoft have probably already written to the EC, asking why they weren’t given the ‘publish some clarifications’ option before getting fined into next week. While this seems to have temporarily placated the EC, Facebook’s minimal gesture seems useless to its users.

UK finally runs out of patience with internet players

The UK Government has unveiled a public consultation which may well see stricter rules placed on the digital giants as the era of the wild-west internet draws to a close.

To date the internet giants have largely been unregulated. This was fine when everyone admired the likes of Google, Facebook and Amazon, though numerous scandals have exposed the darker side of the internet economy. Many might have seen the likes of Mark Zuckerberg and Jeff Bezos as friendly folk, out to democratize technology, though the underlying business model has shocked many, forcing slumbering politicians into action.

Today, the Department for Digital, Culture, Media and Sport and the Home Office have jointly launched a new public consultation on proposals which will aim to introduce a new regulatory body to govern the internet economy and create a ‘duty of care’ mandate to ensure online safety and tackle illegal and harmful activity, as well as creating a rulebook suitable for the digital economy. The new body will have the power to hand out fines should any of the internet players fall short of expectations.

“The era of self-regulation for online companies is over,” said Digital Secretary Jeremy Wright. “Voluntary actions from industry to tackle online harms have not been applied consistently or gone far enough. Tech can be an incredible force for good and we want the sector to be part of the solution in protecting their users. However, those that fail to do this will face tough action.”

The new regulator, and the soon to be created rules, will apply to any company that allows users to share or discover user generated content or interact with each other online. Those platforms that do not meet expectations will either have sites blocked or face significant fines. One of the questions which the consultation will look to answer is whether the new regulatory body should be part of an existing department, or a newly formed body which would be funded through a levy placed on the sales of the internet companies in the UK.

While this is a necessary step forward, this is going to be a very complicated process. Creating a mechanism which protects users but also maintains the principles of free speech is a delicate equation to balance. We suspect there will never be a situation where all parties are satisfied, such are the complications when dealing with opinions; who should be the judge on what is offensive and what is acceptable?

Ultimately, this is a move which is long overdue. Social media companies, such as Facebook, have slipped between the regulatory red tape for years, fitting nicely into the grey area between ‘platform’ and ‘publisher’. While these services might have started as a platform, they have certainly evolved beyond that, however, it would not be just to designate them as publishers; they are not content creators as such. A new definition is needed and only then can rules fit for the digital era be created.

Change is on the horizon, and it is unavoidable for the internet economy. That said, the lobby machine will soon be taken up a gear to attempt to minimize the impact of any new rules. Numerous European nations are swiftly moving forward to create more accountability, and the internet players will have to change their game-plan from offensive to damage limitation.

That said, the internet players have been given enough chances to clean up their act over the last few years. These are companies which have operated without the severity of the red-tape restraints of other segments, partly because governments have not understood how these businesses operate, but this era of self-regulation is drawing to a close.

“Despite our repeated calls to action, harmful and illegal content – including child abuse and terrorism – is still too readily available online,” said Home Secretary Sajid Javid.

“That is why we are forcing these firms to clean up their act once and for all.”

If 52% don’t understand data-sharing economy, is opt-in redundant?

Nieman Lab has unveiled the results of research suggesting more than half of adults do not realise Google is collecting and storing personal data through usage of its platforms.

The research itself is quite shocking and outlines a serious issue as we stride deeper into the digital economy. If the general population does not understand the basic principles behind the data-sharing economy, how are they possibly going to protect themselves against the nefarious intentions from the darker corners of the virtual world?

You also have to question whether there is any point in the internet players seeking consent if the user does not understand what he/she is signing up for.

According to the research, 52% of the survey respondents do not expect Google to collect data about a person’s activities when using its platforms, such as search engines or YouTube, while 57% do not believe Google is tracking their web activity in order to create more tailored advertisements.

While most working in the TMT industry would assume the business models of Google and the other internet players are common knowledge, the data here suggests otherwise.

66% also do not realise Google will have access to personal data when using non-Google apps, while 64% are unaware third-party information will be used to enhance the accuracy of adverts served on the Google platforms. Surprisingly, only 57% of the survey respondents realise Google will merge the data collected on each of its own platforms to create profiles of users.

Although this survey has been focused on Google, it would be fair to assume the same respondents do not appreciate this is how many newly emerging companies are fuelling their spreadsheets. The data-sharing economy is the very reason many of the services we enjoy today are free, though if users are not aware of how this segment functions, you have to question whether Google and the other internet giants are doing their jobs.

The ideas of opt-in and consent are critically important nowadays. New rules in the European Union, the GDPR, set out significant changes dictating how service providers collect, store and use personal information. These rules were supposed to enforce transparency and encourage the user to be in control of their personal information, though this research does not offer much encouragement.

If the research suggests more than half of adults do not understand how Google collects personal information or uses it to enhance its own advertising capabilities, what is the point of the opt-in process in the first place?

Reports like this suggest the opt-in process is largely meaningless as users do not understand what they are giving the likes of Google permission to do. The blame for this lack of education is split between the internet giants, who have become experts at muddying the waters, and the users themselves.

Those who use the services for free but do not question the continued existence of ‘free’ platforms should forgo the right to be annoyed when scandals emerge. Not taking the time to understand, or at least attempt to understand, the intricacies of the data-sharing economy is the reason many of these scandals emerge in the first place; users have been blindly handing power to the internet giants.

The internet players need to do more to educate the world on their business models; however, the user does have to take some of the responsibility. We’re not suggesting everyone becomes an internet economy expert, but gaining a basic understanding is not incredibly difficult. However, it does seem ignorance is bliss.

Austria and Australia join the march against Silicon Valley

The days of the wild-west internet seem to be coming to a close with Austria and Australia becoming the latest nations to update the rules governing the business activities of the internet giants.

At the foot of the Alps, the Austrians are proposing a new 5% sales tax on digital revenues realised in the country, making Austria another European state to tackle the ‘creative’ accounting practices of Silicon Valley. Down under, the Australian Government plans on introducing tougher rules which will place greater accountability on social media platforms for extreme and offensive content.

For years the world watched in amazement as the likes of Google, Facebook and Amazon climbed higher up the ladder of influence. We gazed in wonderment as Silicon Valley seemed to pluck profits out of thin air and their CEOs hit celebrity status. But then the scandals started to roll in and we all realised these companies had abused the privilege of self-regulation.

The Cambridge Analytica scandal was the watershed moment, a saga which dominated headlines around the world for months and hauled politicians away from free lunches and back into the debating chambers. All of a sudden everyone realised the likes of Zuckerberg, Bezos and Page were not our friends, but incredibly intelligent businessmen who were exploiting the grey areas sitting idly between the mass of criss-crossing red tape.

What followed this scandal was a more forensic look at the business models of the internet giants. Those looking close enough found trickily worded terms and conditions, confusing processes, ransom opt-ins and abused freedoms. Users were being tracked without their knowledge, personal information was being traded as a commodity and tax havens were being exploited. Opinion on Silicon Valley turned sour.

On the other side of the coin, it wasn’t just the craft and cunning of Silicon Valley lawyers to blame, but also rules inadequate for today’s digital era. Politicians and regulators woke up to the fact that rules and legislation needed to be updated to create a fair and reasonable policy landscape to hold the internet giants accountable. Experts were brought in to close the vast gulf in competence, and the march against Silicon Valley began.

A perfect storm has been brewing around the internet giants, and with each passing week more countries are taking a stringent approach to the business of the internet. Australia has been trundling along with incremental progress, and now Austria has entered the fray.

“Through the digital tax package, we are closing tax loopholes and thereby ensuring that large digital corporations, agency platforms and retail platforms are called to account,” said Austrian Finance Minister Hartwig Löger. “Through fair taxation of the digital economy, we are establishing equity in taxation.”

Moving forward, a digital tax of 5% will be introduced for large digital corporations: those with global sales of €750 million, of which €25 million originates in Austria. The new rules will also remove VAT exemptions for deliveries from foreign countries; previously, orders valued below €22 were exempt from the tax.
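The two-threshold structure described above can be sketched in a few lines. This is purely an illustration of the stated thresholds, not an official calculation; the function name and the treatment of the thresholds as minimums are our own assumptions.

```python
# Hypothetical illustration of the Austrian digital tax thresholds
# described above. Names and structure are our own; this is not an
# official or legally precise calculation.

def austrian_digital_tax(global_sales_eur: float,
                         austrian_sales_eur: float,
                         rate: float = 0.05) -> float:
    """Return the tax due on Austrian digital revenues, or 0.0 if the
    company falls below either threshold (assumed to be minimums)."""
    GLOBAL_THRESHOLD = 750_000_000    # €750 million global sales
    AUSTRIAN_THRESHOLD = 25_000_000   # €25 million originating in Austria

    if global_sales_eur < GLOBAL_THRESHOLD:
        return 0.0
    if austrian_sales_eur < AUSTRIAN_THRESHOLD:
        return 0.0
    return austrian_sales_eur * rate

# A company with €1bn global sales and €40m Austrian digital revenue
# would owe 5% of the Austrian portion: €2m.
print(austrian_digital_tax(1_000_000_000, 40_000_000))  # 2000000.0
```

Note that only the revenue realised in Austria is taxed; the global figure merely determines whether a company is in scope at all.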

“Through this measure, we are taking digital agency platforms to task,” said State Secretary of Finance Hubert Fuchs. “No one is entitled to evade the obligation to pay tax.”

Austria is of course not alone in this tax assault. As the member states of the European Union could not agree on a bloc-wide tax mechanism (plans were blocked by nations that benefit from the status quo, such as Ireland), individual states have gone it alone. France and the UK have already set plans in motion, and we expect such proposals to start snowballing before too long.

Australia, however, is targeting a different area of contention. Following events in Christchurch, New Zealand, and the live-streaming of the incident as it happened, the Australian Government has introduced new rules which will hold social media and other content-hosting platforms accountable for the dissemination of offensive material.

The Sharing of Abhorrent Violent Material bill creates new offences for content service providers and hosting services that fail to act expeditiously to remove videos containing “abhorrent violent conduct”. Such conduct is defined as terrorist acts, murder, attempted murder, torture, rape or kidnapping.

The technology community and legal experts have slammed the new rules, and while they make some valid points, the social media and hosting platforms might have to be forced forward. Identifying these videos is an incredibly difficult task, such is the complexity of spotting them in the vast swarm of content uploaded nowadays, but without the threat of penalty there is a risk progress will not move at the desired pace.

Following the incident, Facebook pointed out that it did take the video down quickly, though it was not able to use AI to identify the content. This is where it becomes incredibly difficult for the technology industry: these applications need abhorrent content in order to be trained to identify abhorrent content. It’s a bit of a catch-22, but harsh penalties for non-compliance will force the industry to find a solution.

“We have heard feedback that we must do more – and we agree,” said Facebook COO Sheryl Sandberg in a letter to the New Zealand Herald. “In the wake of the terror attack, we are taking three steps: strengthening the rules for using Facebook Live, taking further steps to address hate on our platforms, and supporting the New Zealand community.”

Sandberg has promised new restrictions on how live videos can be uploaded and streamed to the platform, though details were incredibly thin. Facebook will not want to introduce too many restrictions, as making the process too convoluted and tiresome will hurt the user experience, but it clearly has to do something. The opportunity to broadcast horrific acts has become too accessible.

This is the problem which Facebook and everyone else in the digital economy is facing. The promise is to open up the gates and allow people to express themselves, but unfortunately there are people who will take advantage of this situation. It is an incredibly difficult equation to balance.

Technology will eventually help the internet companies get to a suitable position, with AI potentially grinding through the millions of uploads, but the training period is going to be a difficult process. Make the filters too sensitive and they risk restricting free speech; with content from shows such as Game of Thrones being uploaded constantly, there is plenty of room for error.

The internet giants will want to resist change, despite giving the impression of encouraging more regulation and government intervention, but they won’t be able to hold back the tide forever. With privacy concerns, fake news, tax evasion, political influence, anti-trust accusations and the unknown power of data analytics, the internet giants are simply fighting on too many fronts.

These are companies with incredible financial power and immense armies of lobbyists, but Silicon Valley is the bad guy right now. Politicians have spotted an opportunity to score PR points by unloading on the punching bag, and you can guarantee there will be many lining up to take a swing.

The politics of technology

The latest mutiny at Google illustrates what a political game technology has become and it’s only going to get more so.

Last week Google announced the creation of ‘An external advisory council to help advance the responsible development of AI.’ In so doing Google was acknowledging a universal concern about the ethics of artificial intelligence, automation, social media and technology in general. It also seemed to be conceding that the answers to these concerns need to be universal too.

The Silicon Valley tech giants are frequently accused of having a political bias in their thinking that is hostile to conservative perspectives. Normally this wouldn’t matter, but since the likes of Google, Facebook and Twitter have so much control over how everyone gets their information and opinion, any possible bias in the way they do so becomes a matter of public concern.

In an apparent attempt to demonstrate diversity of viewpoints in this new Advanced Technology External Advisory Council (ATEAC), Google included Kay Coles James, who it describes as ‘a public policy expert with extensive experience working at the local, state and federal levels of government. She’s currently President of The Heritage Foundation, focusing on free enterprise, limited government, individual freedom and national defense.’

This decision has upset over a thousand Google employees, however, who made their feelings publicly known yesterday via an article titled Googlers Against Transphobia and Hate. The piece accuses James of being ‘anti-trans, anti-LGBTQ, and anti-immigrant’ and links to three recent tweets of hers as evidence.

Beyond those tweets it’s hard to fully test the veracity of those allegations, but it does seem clear that they are largely political. The Equality Act is a piece of legislation currently being debated in the US House of Representatives, sponsored almost entirely by members of the Democratic Party. The legal status of transgender people is intrinsically political, as is immigration policy, and attitudes towards both tend to be similarly polarised.

The Googlers Against Transphobia certainly seem to fall into that category, but what makes them noteworthy, apart from their numbers, is that they expect their employer to adhere to their political positions. Google has attempted to defend the appointment of James to one of the eight ATEAC positions by stressing the importance of diversity of thought.

Here’s what the Google dissidents think of that argument. “This is a weaponization of the language of diversity,” they wrote. “By appointing James to the ATEAC, Google elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making. This is unacceptable.”

This is just the latest internal insurrection Google has faced from its passionately political workforce. Every time a story emerges about Google working on a special search engine for China there is considerable disquiet among the rank and file, ironically opposed to censorship in this case. And then there was the case of James Damore, sacked by Google for trying to start an internal conversation about gender diversity at the company.

Google’s struggles pale when compared to those of Facebook, however. Every time it seems to have just about recovered from the last crisis it finds itself in a new one. The latest was catalysed by the atrocity committed in New Zealand, in which a gunman killed 50 people praying in two mosques in Christchurch and live-streamed himself on Facebook.

Understandably questions were immediately asked about how Facebook could have allowed that streaming to happen. While it acted quickly to ensure the video and any copies of it were taken down, Facebook was under massive pressure to implement measures to ensure such a thing couldn’t happen again. Its response has been to announce ‘a ban on praise, support and representation of white nationalism and white separatism on Facebook and Instagram.’

These kinds of ideologies are largely rejected by mainstream society for many good reasons, but ideologies they remain. Facebook is also moving against claimed ‘anti-vaxxers’, i.e. people who fear the side-effects of vaccines. They may well be misguided in this fear, but it is nonetheless an opinion and, so far, a legal one.

Finding itself under pressure to police ideologies and opinions on its platforms, Facebook seems to have realised this is an impossible task. For every ‘unacceptable’ position it acts against there are thousands waiting in the wings and an obvious extrapolation reveals a future Facebook in which very few points of view are permitted. In apparent acknowledgment of that dilemma Facebook recently called on governments to make a call on censorship, but it should be careful what it wishes for.

Another type of content facing increasing calls for censorship is claimed ‘conspiracy theories’, with a recent leak revealing how Facebook agonises over such decisions. Google-owned YouTube is also acting against such content, but seems to prefer sanctions short of outright banning, such as the recent removal of videos published by activist Tommy Robinson from all search results.

Again this puts technology companies in the position of censors of content that often has a political nature. How do you define a conspiracy theory anyway, and should all of them be censored? Should, for example, the MSNBC network in the US be sanctioned for aggressively pursuing a narrative of President Trump colluding with Russia to win the 2016 election, when a two-year investigation failed to substantiate it? Is that not a conspiracy theory too? Politics and technology collide once more.

The current era of political interference in internet platforms was probably started by the Cambridge Analytica scandal and subsequent allegations that the democratic process had been corrupted. As technology increasingly determines how we view and interact with the world this problem is only going to get bigger, and it’s hard to see how technology companies can possibly please all of the people all of the time.

Which brings us back to the start of this piece: AI. The only hope the internet giants have of monitoring the billions of interactions they host per day is AI-driven automation. But even that has to be programmed by people with their own personal views and ethics, and will need to be responsive to public sentiment as it in turn reacts to events.

As the US President has done so much to demonstrate, technology platforms are now the places much of politics and public discussion take place. At the same time they’re owned by commercial organizations with no legal requirement to serve the public. They have to balance pressure from both professional politicians and the politics of their own employees with the dangers of alienating their users if they’re seen to be biased. Something’s got to give.

This dilemma was illustrated well in a recent Joe Rogan podcast featuring Twitter, in which Twitter CEO Jack Dorsey and his head of content moderation Vijaya Gadde defend themselves from accusations of bias from independent journalist Tim Pool.

Facebook faces hyper-targeted advertising lawsuit

The US Department of Housing and Urban Development (HUD) has lodged a lawsuit against Facebook, challenging the hyper-targeted big data model which has made OTTs billions over the years.

Citing the Fair Housing Act, the HUD has claimed Facebook is breaking the law by encouraging, enabling and causing housing discrimination. The Fair Housing Act prohibits discrimination in housing and housing-related services, including online advertisements. Facebook’s advertising platform is said to discriminate against individuals based on race, colour, national origin, religion, sex, disability and familial status, in violation of the Act.

“Even as we confront new technologies, the fair housing laws enacted over half a century ago remain clear – discrimination in housing-related advertising is against the law,” said HUD General Counsel Paul Compton.

“Just because a process to deliver advertising is opaque and complex doesn’t mean that it exempts Facebook and others from our scrutiny and the law of the land. Fashioning appropriate remedies and the rules of the road for today’s technology as it impacts housing are a priority for HUD.”

Complaints were originally raised by the HUD last summer, though the two parties have been in discussions to reach some sort of settlement and avoid legal action. Reading between the lines, either talks have broken down or the HUD leadership team wants to give the impression it is taking a harder stance against the social media segment.

Although it should come as little surprise that Facebook is facing a lawsuit, given Mark Zuckerberg’s ability to stumble from one blunder to the next, this one effectively challenges the foundations of the business model. Hyper-targeted advertising is the core not only of Facebook’s business, but of numerous other companies which have emerged as the data-sharing economy blossoms.

What is worth noting is that this is not the first time Facebook has faced such criticisms. The American Civil Liberties Union (ACLU) has also challenged the social media giant, and earlier this month Facebook stated it was changing the way its advertising platform is set up to prevent abuse of the targeting features.

“One of our top priorities is protecting people from discrimination on Facebook,” said Facebook COO Sheryl Sandberg. “Today, we’re announcing changes in how we manage housing, employment and credit ads on our platform. These changes are the result of historic settlement agreements with leading civil rights organizations and ongoing input from civil rights experts.”

As a result of the clash with the ACLU and other parties, Facebook agreed to remove any gender, age and race-based targeting from housing and employment adverts, creating a one-stop portal instead.

According to the HUD, Facebook allows advertisers to exclude individuals from seeing messages based on where they live and their societal status; categories such as whether someone is a parent or non-American have been deemed discriminatory. Facebook also allows advertisers to effectively zone off neighbourhoods for campaigns, which is also deemed a violation of the Act. By bringing together data from the digital platform and other insight from non-digital means, the HUD is effectively challenging the legitimacy of digital and targeted advertising.
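The mechanism the HUD objects to can be illustrated with a toy sketch: an advertiser supplies exclusion attributes, and users matching any of them never see the ad. All field names here are hypothetical, and no real ad-platform API is shown; this is only meant to show why attribute-based exclusion maps so directly onto the discrimination the Act prohibits.

```python
# Toy illustration of attribute-based audience exclusion, the mechanism
# the HUD argues can amount to housing discrimination. Field names are
# hypothetical; this does not reflect any real ad platform's API.

users = [
    {"name": "A", "is_parent": True,  "neighbourhood": "north"},
    {"name": "B", "is_parent": False, "neighbourhood": "south"},
    {"name": "C", "is_parent": False, "neighbourhood": "north"},
]

def eligible_audience(users, excluded_neighbourhoods=(), exclude_parents=False):
    """Return the users an ad would be shown to after exclusions are applied."""
    audience = []
    for user in users:
        if exclude_parents and user["is_parent"]:
            continue  # excluded by societal status
        if user["neighbourhood"] in excluded_neighbourhoods:
            continue  # excluded by geography ("zoning off" a neighbourhood)
        audience.append(user)
    return audience

# Excluding parents and an entire neighbourhood leaves only user B.
print([u["name"] for u in eligible_audience(
    users, excluded_neighbourhoods=("north",), exclude_parents=True)])  # ['B']
```

The point of the sketch is that the exclusion is invisible to those filtered out: users A and C never know an ad existed, which is what makes this form of discrimination so hard to detect from the outside.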

As with other similar cases, the HUD is bringing attention to the light-touch regulatory landscape for the internet economy. While traditional advertising is held accountable by strict rules, the internet operates with relative freedom. This is partly down to the comparative youth of mass-market media online, and partly to the fact that few bureaucrats understand how the data machines work.

What is worth noting is that this is an incredibly narrow focus for the HUD, though should it be successful, the same concepts could be applied more broadly and other elements of Facebook’s hyper-targeted advertising model could be challenged.

Facebook might be the target here, but many companies will be watching this case with interest. Precedent is a powerful tool in the legal and regulatory world, and should the HUD win, the same business model being applied elsewhere would also be compromised.