Facebook CEO Mark Zuckerberg faced the Senate’s Commerce and Judiciary committees yesterday and, while he did a decent job, he struggled to give straight answers to questions about the policing of hate speech.
Zuck has a reputation for avoiding adversarial public encounters but he pretty much had to front up following the Cambridge Analytica scandal. While he didn’t exactly set the world on fire with his cautious answers, he didn’t seem to make any serious errors either and Facebook’s share price went up a bit as a result. The Washington Post has the full transcript and, if you want, you can watch the whole thing here.
One issue Zuck was unable to give a straight answer on was the matter of ‘hate speech’. Facebook has committed to being more proactive in tackling things like terrorism recruitment, political interference and incitement to violence, but also to tackling hate speech. In common with everyone else who makes strident comments on the matter, however, he seems worryingly unsure about what it actually is and, as we discussed previously, failure to get on top of this matter could ultimately present an even greater threat to Facebook.
The first person to explore this matter (they’re all Senators) was Chuck Grassley. In response to his initial line of questioning Zuck expanded on some of his prepared statement. “It’s not enough to just build tools, we need to make sure that they’re used for good,” he said. “And that means that we need to now take a more active view in policing the ecosystem and in watching and kind of looking out and making sure that all of the members in our community are using these tools in a way that’s going to be good and healthy.”
This is clearly a very worrying statement. So many of the operative terms – good, policing, healthy – are so highly subjective as to render it meaningless in every useful sense. All it really conveys is a desire to do the right thing, but as a basis for policy it’s at best useless and at worst a commitment to censorship based entirely on the arbitrary whims of Zuck and his employees.
In response to further questioning from John Thune Zuck said: “By the end of this year, by the way, we’re going to have more than 20,000 people working on security and content review, working across all these things. So, when content gets flagged to us, we have those people look at it. And, if it violates our policies, then we take it down. Some problems lend themselves more easily to A.I. solutions than others. So hate speech is one of the hardest, because determining if something is hate speech is very linguistically nuanced, right?
“Hate speech – I am optimistic that, over a 5 to 10-year period, we will have A.I. tools that can get into some of the nuances – the linguistic nuances of different types of content to be more accurate in flagging things for our systems.”
So Facebook is committing to a system of flagging stuff with AI and then using human moderators to assess it, but that AI is at least five years away from being useful when it comes to identifying hate speech, regardless of how it’s defined. It’s also easy to imagine Facebook increasingly moving the burden of moderation towards AI once the heat is off because that will save a lot of overhead.
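The workflow Zuck describes – an AI classifier flags candidate content, and only flagged items reach the human review queue – can be sketched in a few lines. This is purely illustrative: the names, the keyword check and the threshold are all our own inventions, standing in for whatever trained models Facebook actually uses.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical names throughout -- a minimal sketch of the
# "AI flags, humans decide" pipeline, not Facebook's actual system.

@dataclass
class Post:
    post_id: int
    text: str

def ai_flag(post: Post) -> float:
    """Stand-in classifier returning a confidence score in [0, 1].
    A real system would use a trained model; this is a toy keyword check."""
    blocked_terms = {"terrorist propaganda"}
    return 0.9 if any(t in post.text.lower() for t in blocked_terms) else 0.1

def moderate(posts: List[Post], threshold: float = 0.5) -> List[Post]:
    """AI flags candidates above the threshold; only those reach the
    human review queue. Everything else stays up untouched."""
    return [p for p in posts if ai_flag(p) >= threshold]

posts = [Post(1, "Holiday photos!"), Post(2, "Sharing terrorist propaganda...")]
review_queue = moderate(posts)
print([p.post_id for p in review_queue])  # only the flagged post goes to humans
```

Note where the cost pressure sits: every flagged item consumes human reviewer time, so raising the threshold (or trusting the classifier’s verdict outright) is the obvious way to cut overhead – which is exactly the drift towards AI-only moderation described above.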
The most acute questioning on this matter came from Ted Cruz, who simply asked “Does Facebook consider itself a neutral public forum?” Zuck avoided giving a yes or no answer to that and subsequent follow-ups until Cruz said: “Let me try this, because the time is constrained. It’s just a simple question. The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum. Do you consider yourself a neutral public forum, or are you engaged in political speech, which is your right under the First Amendment?”
“Well, Senator, our goal is certainly not to engage in political speech,” said Zuck. “I am not that familiar with the specific legal language of the law that you speak to. So I would need to follow up with you on that. I’m just trying to lay out how broadly I think about this.”
This answer seems slippery and evasive. We weren’t aware of that law either, but then we’re not the head of the world’s biggest social media company. Wikipedia is aware of it and reveals the following clause: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This would appear to be a pretty key piece of US legislation in determining the legal status of social media companies, so it’s hard to believe Zuck is ‘unfamiliar’ with it. Maybe he anticipated being backed into a corner by the Republican Senator, who went on to express concern about Facebook being biased against conservative opinions, organisations and content.
“Senator, let me say a few things about this,” said Zuck. “First, I understand where that concern is coming from, because Facebook in the tech industry are located in Silicon Valley, which is an extremely left-leaning place, and this is actually a concern that I have and that I try to root out in the company, is making sure that we do not have any bias in the work that we do, and I think it is a fair concern that people would at least wonder about.”
That was a much better and more candid answer. The Damore vs Google case has served to highlight how antagonistic to conservative views some parts of Silicon Valley are, and there are broader concerns that ‘hate speech’ can often be conflated with certain views on things like immigration, sexuality, abortion, gun control, etc.
Cruz cut to the chase in his final sequence of questioning. “Mr. Zuckerberg, do you feel it’s your responsibility to assess users, whether they are good and positive connections or ones that those 15 to 20,000 people deem unacceptable or deplorable?”
Previously we removed verbal tics and pauses when copying over the WP’s transcript but it’s worth leaving them in this time to illustrate what a tricky topic this is for Zuck. “Well, I — I think that you would probably agree that we should remove terrorist propaganda from the service. So that, I agree. I think it is — is clearly bad activity that we want to get down. And we’re generally proud of — of how well we — we do with that.
“Now what I can say — and I — and I do want to get this in before the end, here — is that I am — I am very committed to making sure that Facebook is a platform for all ideas. That is a — a very important founding principle of — of what we do.
“We’re proud of the discourse and the different ideas that people can share on the service, and that is something that, as long as I’m running the company, I’m going to be committed to making sure is the case.”
In other words he took a long time to not answer the question at all. He clearly didn’t want to admit that all content on Facebook will be censored according to the subjective whims of his moderators but at the same time needed to commit to tackling relatively easy stuff like terrorist propaganda.
Further probing on this topic from Mike Lee and Ben Sasse revealed a reassuring level of concern about the definition of hate speech and the implications of using ill-defined terms as the basis for censorship. Again Zuck attempted to balance the various conflicts but it’s an impossible challenge.
There clearly needs to be broad consensus on the rules of public discourse – ideally involving international legal coordination. And just as importantly those rules need to be simple and unambiguous, and prominently displayed for every user of social media. Only then can users feel confident about those topics they can discuss without censure, or worse.