Zuckerberg admits he’s nowhere near defining hate speech

Facebook CEO Mark Zuckerberg faced the Senate’s Commerce and Judiciary committees yesterday and, while he did a decent job, he struggled to give straight answers to questions about the policing of hate speech.

Zuck has a reputation for avoiding adversarial public encounters but he pretty much had to front up following the Cambridge Analytica scandal. While he didn’t exactly set the world on fire with his cautious answers, he didn’t seem to make any serious errors either and Facebook’s share price went up a bit as a result. The Washington Post has the full transcript and, if you want, you can watch the whole thing here.

One issue Zuck was unable to give a straight answer on was the matter of ‘hate speech’. Facebook has committed to being more proactive in tackling things like terrorism recruitment, political interference and incitement to violence, but also hate speech. In common with everyone else who makes strident comments on the matter, however, he seems worryingly unsure about what it actually is and, as we discussed previously, failure to get on top of this matter could ultimately present an even greater threat to Facebook.

The first person to explore this matter (they’re all Senators) was Chuck Grassley. In response to his initial line of questioning Zuck expanded on some of his prepared statement. “It’s not enough to just build tools, we need to make sure that they’re used for good,” he said. “And that means that we need to now take a more active view in policing the ecosystem and in watching and kind of looking out and making sure that all of the members in our community are using these tools in a way that’s going to be good and healthy.”

This is clearly a very worrying statement. The operative terms – good, policing, healthy – are so subjective as to render it meaningless in any useful sense. All it really conveys is a desire to do the right thing, but as a basis for policy it’s at best useless and at worst a commitment to censorship based entirely on the arbitrary whims of Zuck and his employees.

In response to further questioning from John Thune, Zuck said: “By the end of this year, by the way, we’re going to have more than 20,000 people working on security and content review, working across all these things. So, when content gets flagged to us, we have those people look at it. And, if it violates our policies, then we take it down. Some problems lend themselves more easily to A.I. solutions than others. So hate speech is one of the hardest, because determining if something is hate speech is very linguistically nuanced, right?

“Hate speech – I am optimistic that, over a 5 to 10-year period, we will have A.I. tools that can get into some of the nuances – the linguistic nuances of different types of content to be more accurate in flagging things for our systems.”

So Facebook is committing to a system of flagging stuff with AI and then using human moderators to assess it, but that AI is at least five years away from being useful when it comes to identifying hate speech, regardless of how it’s defined. It’s also easy to imagine Facebook increasingly moving the burden of moderation towards AI once the heat is off because that will save a lot of overhead.
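To make that division of labour concrete, the workflow Zuck describes boils down to something like the sketch below. To be clear, this is our own illustration rather than anything Facebook has published; the classifier, thresholds and category names are all hypothetical.

    # Illustrative sketch of the flag-then-review workflow described above.
    # Nothing here reflects Facebook's actual systems; the model, thresholds
    # and policy categories are hypothetical stand-ins.

    REVIEW_THRESHOLD = 0.5   # hypothetical: anything above this goes to a human
    REMOVE_THRESHOLD = 0.98  # hypothetical: auto-remove only near-certain cases

    def classify(post_text: str) -> dict:
        """Stand-in for an ML model scoring a post against each policy area."""
        # A real system would call a trained classifier here. Hate speech is
        # the category where, by Zuck's own account, such scores will stay
        # unreliable for another five to ten years.
        return {"terrorist_propaganda": 0.01, "hate_speech": 0.02}

    def moderate(post_text: str) -> str:
        scores = classify(post_text)
        category, score = max(scores.items(), key=lambda kv: kv[1])
        if score >= REMOVE_THRESHOLD:
            return f"removed automatically ({category})"
        if score >= REVIEW_THRESHOLD:
            # The 20,000-strong security and content review team Zuck
            # mentions sits behind this queue.
            return f"queued for human review ({category})"
        return "published"

    print(moderate("example post"))  # -> published

The thresholds are the easy part; the hard part is the ‘classify’ step, because cut-offs only mean something if the underlying scores do, and for hate speech that is exactly what is missing.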

The most acute questioning on this matter came from Ted Cruz, who simply asked “Does Facebook consider itself a neutral public forum?” Zuck avoided giving a yes or no answer to that and subsequent follow-ups until Cruz said: “Let me try this, because the time is constrained. It’s just a simple question. The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum. Do you consider yourself a neutral public forum, or are you engaged in political speech, which is your right under the First Amendment?”

“Well, Senator, our goal is certainly not to engage in political speech,” said Zuck. “I am not that familiar with the specific legal language of the law that you speak to. So I would need to follow up with you on that. I’m just trying to lay out how broadly I think about this.”

This answer seems slippery and evasive. We weren’t aware of that law, but then we’re not the head of the world’s biggest social media company. Wikipedia is aware of it, however, and reveals the following clause: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This would appear to be a pretty key piece of US legislation in determining the legal status of social media companies, so it’s hard to believe Zuck is ‘unfamiliar’ with it. Maybe he anticipated being backed into a corner by the Republican Senator, who went on to express concern about Facebook being biased against conservative opinions, organisations and content.

“Senator, let me say a few things about this,” said Zuck. “First, I understand where that concern is coming from, because Facebook and the tech industry are located in Silicon Valley, which is an extremely left-leaning place, and this is actually a concern that I have and that I try to root out in the company, is making sure that we do not have any bias in the work that we do, and I think it is a fair concern that people would at least wonder about.”

That was a much better and more candid answer. The Damore vs Google case has served to highlight how antagonistic to conservative views some parts of Silicon Valley are, and there are broader concerns that ‘hate speech’ can often be conflated with certain views on things like immigration, sexuality, abortion, gun control, etc.

Cruz cut to the chase in his final sequence of questioning. “Mr. Zuckerberg, do you feel it’s your responsibility to assess users, whether they are good and positive connections or ones that those 15 to 20,000 people deem unacceptable or deplorable?”

Previously we removed verbal tics and pauses when copying over the WP’s transcript but it’s worth leaving them in this time to illustrate what a tricky topic this is for Zuck. “Well, I — I think that you would probably agree that we should remove terrorist propaganda from the service. So that, I agree. I think it is — is clearly bad activity that we want to get down. And we’re generally proud of — of how well we — we do with that.

“Now what I can say — and I — and I do want to get this in before the end, here — is that I am — I am very committed to making sure that Facebook is a platform for all ideas. That is a — a very important founding principle of — of what we do.

“We’re proud of the discourse and the different ideas that people can share on the service, and that is something that, as long as I’m running the company, I’m going to be committed to making sure is the case.”

In other words he took a long time to not answer the question at all. He clearly didn’t want to admit that all content on Facebook will be censored according to the subjective whims of his moderators but at the same time needed to commit to tackling relatively easy stuff like terrorist propaganda.

Further probing on this topic from Mike Lee and Ben Sasse revealed a reassuring level of concern about the definition of hate speech and the implications of using ill-defined terms as the basis for censorship. Again Zuck attempted to balance the various conflicts but it’s an impossible challenge.

There clearly needs to be broad consensus on the rules of public discourse – ideally involving international legal coordination. And, just as importantly, those rules need to be simple and unambiguous, and prominently displayed for every user of social media. Only then can users feel confident about which topics they can discuss without censure, or worse.

UK Government unveils its own AI to detect terrorist content online

The UK Government has announced the development of new software which promises to automatically detect terrorist content on any online platform.

The tool was developed by the Home Office and ASI Data Science, and the Government boasted the piece of kit ‘detected 94% of Daesh propaganda with 99.995% accuracy’ during tests, with a small number of videos being referred to human reviewers if the software gets confused. The tool can be used by any platform where videos are reviewed during the uploading process, the aim being to catch the content before it hits the internet.
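It’s worth unpacking those numbers. The announcement doesn’t define ‘accuracy’, but if we assume it means the rate at which non-extremist videos are correctly left alone, the back-of-envelope arithmetic looks like this (the upload volume is our invention):

    # Back-of-envelope reading of the quoted figures. We are assuming
    # '99.995% accuracy' means 0.005% of non-extremist videos get wrongly
    # flagged; the announcement does not define the term precisely.
    uploads = 1_000_000                # hypothetical daily upload volume
    false_positive_rate = 1 - 0.99995  # 0.005% of clean videos wrongly flagged
    detection_rate = 0.94              # '94% of Daesh propaganda' caught

    print(f"Clean videos sent to reviewers: ~{uploads * false_positive_rate:.0f}")
    print(f"Propaganda missed per 1,000 videos: ~{1_000 * (1 - detection_rate):.0f}")
    # ~50 false flags per million uploads and ~60 misses per 1,000 propaganda
    # videos - the gap those human reviewers are there to close.

On that reading the false positive burden is tiny for a small platform, though it scales with volume, which may be one reason the tool is pitched at smaller sites rather than the giants.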

“Over the last year we have been engaging with internet companies to make sure that their platforms are not being abused by terrorists and their supporters,” said Home Secretary Amber Rudd. “I have been impressed with their work so far following the launch of the Global Internet Forum to Counter-Terrorism, although there is still more to do, and I hope this new technology the Home Office has helped develop can support others to go further and faster.”

It certainly is a different approach to working with the digital community, and one which will almost certainly be more successful. The last 12-18 months have seen various governments around the world try to force the technology industry to bend to their will, even when government ideas are to the detriment of users. A collaborative approach such as this, aiding the technology firms rather than bitterly arguing with them, is an idea we weren’t expecting.

What we aren’t too sure about is the effectiveness of the technology. Yes, tests suggest it works, but we don’t think it will be long before the nefarious characters find a way around it. The success of the technology will almost certainly depend on how it is adopted and enhanced by the internet giants. The UK Government might well have produced a good piece of technology, but the engineers working for the likes of Facebook, Google and Twitter will be smarter. They could take a good idea and turn it into an incredibly resilient and proactive piece of artificial intelligence.
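One reason for guarded optimism: the tool is described as using machine learning to analyse the content of videos, rather than matching known files against a blacklist. Fingerprint blacklists are trivially evaded, as the toy example below shows; a model that generalises across re-encodes sets a much higher bar, though presumably not an insurmountable one.

    # Why simple blacklisting fails: any trivial edit to a video file
    # changes its hash, so a hash-based blocklist misses the altered copy.
    # A model that scores the content itself is harder to evade.
    import hashlib

    original = b"...the raw bytes of a propaganda video..."
    tweaked = original + b"\x00"  # one appended byte; content looks identical

    print(hashlib.sha256(original).hexdigest()[:16])
    print(hashlib.sha256(tweaked).hexdigest()[:16])
    # The two digests share nothing, so a blocklist of known hashes
    # misses the tweaked copy entirely.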

Perhaps these are the discussions which will happen over the next couple of days as Rudd visits Silicon Valley. Rudd has traditionally been combative, aggressive and self-righteous when it comes to dealing with the technology industry, so let’s hope she is able to put her ego aside for these talks. The UK Government will not be able to address the rise in hate speech and terrorist propaganda on its own; working with the technology industry is critical.

What might be an interesting little twist to the story would be if Rudd forces the technology giants to adopt the technology by law. Technologists and innovators are usually quite protective of their platforms, and being forced into something by lethargic legislators might irritate the technology firms. In the first instance, the technology will be used by smaller companies such as Vimeo, Telegra.ph and pCloud, but it would surprise very few if the Government looked to force the bigger players into adoption as well.

While it has been used for political point scoring and scaremongering in the past, the threat of online extremist propaganda is quite real. The UK Government has identified 400 unique online platforms that were used to push out poisonous material in 2017, 145 of which were used for the first time between July and the end of the year.

The success of this technology will be in how it is enhanced in the future. At some point, someone will figure out a way to beat it, trick it or circumvent it, and the Government will need help keeping it effective. We hope Rudd has this mentality when sitting down with the technology firms. Take a collaborative, open source approach and it could turn out to be a very good idea; force them into doing something they are not happy with and the technology firms could turn into stubborn teenagers.