Internet giants decide US government has nothing to offer security talks

A coalition of internet giants has convened a meeting to discuss cybersecurity and misinformation ahead of November’s US mid-term elections, but the government didn’t make the invite list.

It isn’t often the world’s tech giants all get along, but this seems to be one area on which they can all agree: something needs to be done to prevent a repeat of the controversy which has constantly stalked Donald Trump’s Presidential win, and apparently it isn’t even worth listening to the government’s opinion.

According to Buzzfeed, Nathaniel Gleicher, Facebook’s Head of Cybersecurity Policy, called the meeting, inviting twelve other organizations, but the government was not on the list. The snub seems to follow a similar meeting in May, where each of the invitees left feeling somewhat disappointed with the government’s contribution. We can only imagine Department of Homeland Security Under Secretary Chris Krebs and Mike Burham from the FBI’s Foreign Influence Task Force simply sat in the corner, one holding a map and the other pointing to Russia shouting ‘we found it, we found it, look, they don’t even do water sports properly’.

“As I’ve mentioned to several of you over the last few weeks, we have been looking to schedule a follow-on discussion to our industry conversation about information operations, election protection, and the work we are all doing to tackle these challenges,” Gleicher wrote in an email.

The meeting will take place in three stages, featuring the likes of Google, Twitter, Snap and Microsoft. Firstly, each company will discuss the efforts it has been making to prevent abuse of its platform. Second will be an open discussion on new ideas. And finally, the thirteen organizations will discuss whether the meeting should become a regular occurrence.

While interference from foreign actors has proved to be a stick to poke the internet giants in the US, criticism of the platforms and a lack of action in tackling misinformation has been a global phenomenon. European nations have been trying to hold the internet players accountable for hate speech and fake news for years, but Trump’s Presidential win is perhaps the most notable impact misinformation has had on the global stage.

With the mid-term elections a perfect opportunity for nefarious characters to cause chaos, the internet players will have to demonstrate they can protect their platforms from abuse. Should abuse be present again, not only would this be a victory for the dark web and the bottom dwellers of digital society, but it would also give losing politicians an opportunity to shift the blame for not winning. While this meeting is an example of industry collaboration, each company has also been launching its own initiatives to tackle the threat.

Facebook most recently revealed it scores users from one to ten on the likelihood they will abuse the content-flagging system, and has been systematically taking down suspect accounts. Twitter has algorithms in place to detect potentially dodgy accounts and limit the dissemination of their posts. Microsoft recently bought several web domains registered by Russian military intelligence for phishing operations, then shut them down. Google has also been hoovering up misleading content and fake accounts on its YouTube platform.

Whether the internet giants can actually do anything to prevent abuse of platforms and the spread of misinformation remains to be seen. That said, keeping the bumbling, boresome bureaucrats out of the meeting is surely a sensible idea. Aside from the fact most government workers are as useful as a bicycle pump in a washing machine, Trump-infused politically-motivated individuals are some of the most notable sources of fake news in the first place.

Facebook turns the tables and starts measuring your credibility

Attack is sometimes the best form of defence, and with Facebook’s credibility being heavily questioned, the social media giant has decided to start tracking the trustworthiness of users.

Some might find the concept of being evaluated by Facebook somewhat uncomfortable, especially considering recent events which have made CEO Mark Zuckerberg and his cronies as trustworthy as a child-snatcher in a playground, but it is a necessary step to clean up the platform. In a sense, Facebook is building the foundations to crowdsource its fight against fake profiles and misinformation.

While Facebook does now employ a team of reviewers to judge whether posts fall outside the platform’s rules, the battle against misinformation and hate speech starts with users flagging content which they deem inappropriate. Of course, people’s standards vary, which is the main difficulty in judging what should be appropriate for the world and what shouldn’t, but the credibility score seeks to identify those who are trying to abuse the system.

According to the Washington Post, users will be scored between one and ten depending on the reliability of their feedback when flagging content as inappropriate. Details on how this is done are thin on the ground right now, intentionally so, but the aim is to find those who are deliberately flagging content as inappropriate when it isn’t. Political opponents, for example, or perhaps those who would benefit financially from market confusion.

There are of course those who just find enjoyment in trolling others, and ideological warriors who simply don’t want to accept certain truths, or who promote lies. After introducing the flagging feature in 2015, Facebook noticed certain people were abusing the system, flagging content which they simply didn’t agree with. Disagreeing with an opinion is fine, that is the user’s choice, but that opinion should not impact the credibility of the post when the judgement is not based on hard fact. By identifying those who flag content as inappropriate when it is not, Facebook’s fact-checking team can become much more efficient.
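To make the idea concrete, a flagger-reliability score of this kind could be sketched in a few lines. To be clear, this is purely illustrative: the function name, the smoothing, and the exact mapping onto a one-to-ten scale are all assumptions, not Facebook’s actual (and deliberately undisclosed) method. The basic notion is simply to rate users by how often reviewers uphold their flags, while not handing a perfect score to someone with only a handful of lucky flags.

```python
def reliability_score(flags_upheld: int, flags_total: int,
                      prior: float = 0.5, prior_weight: int = 5) -> int:
    """Hypothetical 1-10 flagger-reliability score.

    flags_upheld: how many of the user's flags reviewers agreed with.
    flags_total: how many flags the user has submitted in total.
    A smoothed accuracy (a Laplace-style prior pulling new users toward
    0.5) stops a single upheld flag from producing a top score; the 0-1
    accuracy is then mapped onto a 1-10 scale.
    """
    if flags_total < 0 or not 0 <= flags_upheld <= max(flags_total, 0):
        raise ValueError("invalid flag counts")
    accuracy = (flags_upheld + prior * prior_weight) / (flags_total + prior_weight)
    return 1 + round(accuracy * 9)
```

Under this sketch, a user whose flags are almost always upheld drifts toward ten, a serial false-flagger toward one, and a brand-new user sits in the middle until they build a track record.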

Unfortunately for Facebook, the task is much more complicated, as there will be some who promote or flag content incorrectly yet do not fit the standard fake-news profile. Take eco-warriors who are trying to save the planet by attacking the reputation of oil companies. They might promote content which is inappropriate, or flag something simply because the company does not sit well with their principles. While they might be doing it for what they consider good reasons, it is still misinformation, and in the same category as more nefarious efforts. Fake news is fake news; there is no such thing as justification.

Such a strategy from Facebook just shows how complicated the battle against misinformation, and the effort to maintain credibility, has become. The algorithm will aim to identify these individuals and assess the risk associated with their activities. Twitter already does this to a degree: its assessment of a profile’s risk factors into how widely that profile’s posts are spread across the platform. It seems Facebook’s algorithm will be used to aid reviewers’ assessments of flagged content, but also to contain the risk posed by nefarious actors.

As mentioned before, how the algorithm actually works is hazy right now. While this might make people uncomfortable, not knowing how they are being judged, it is completely necessary: if Facebook publicised the rules and how it comes to such conclusions, the same nefarious actors would find a way to beat the system, making it completely redundant.

Although the idea of having human fact-checkers will make Joe and Jane Bloggs feel safer on the platform, it is completely impractical at scale. As the tsunami of misinformation continues to grow, artificial intelligence is increasingly looking like the only option to keep such platforms honest and trustworthy.