Facebook ban: What can it do and will it work?

8:04 pm on 28 March 2019

Opinion - In the wake of the Christchurch mosque shootings, Facebook will move to ban white supremacy on its platform. But does it have the will and the way to do so?

Photo: Facebook open on a laptop (123RF)

What exactly is Facebook changing?

From next week Facebook will introduce a ban on "praise, support and representation of white nationalism and separatism" on its Facebook and Instagram platforms, which have around 2.3 billion and 1 billion users respectively logging in each month.

It is a policy change: a more hardline interpretation of its Community Standards and its Dangerous Individuals and Organizations policy, which dictate who is allowed to use Facebook and Instagram and what they can post.

The aim is to rid the world's biggest social network of the organised hate groups that have managed to get away with peddling white supremacy under the guise of discussion of nationalism and what Facebook describes as "American pride". The changes will be reflected in the automated and manual systems Facebook uses to detect and delete objectionable content on its websites.

What technically can it do to detect and remove this type of content?

Facebook invested nearly $US8 billion in research and development in 2017. It has the artificial intelligence, computer processing power and data storage capacity to fine-tune its automated systems to flag and delete white supremacist material. After all, Facebook and Google have greatly improved their ability to quickly flag Islamic extremism and the horrendous Isis videos that were circulating on social networks in large numbers a couple of years ago.

The key to this is training their AI systems using machine learning. It involves using computer algorithms to analyse the content of text, images and videos to detect key words, phrases, objects and icons, according to pre-defined rules. The systems then attempt to judge the context, intent and tone of the material and make a call. Over time, with human oversight, they learn and improve. But it requires a huge database of content examples to get really good, which takes time and computing resources.
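To make that concrete, here is a minimal, hypothetical sketch of the rules-plus-classifier pattern described above. The banned phrases, training posts, labels and threshold are all invented for illustration and bear no relation to Facebook's actual systems.

```python
# Toy sketch of rule-based plus machine-learned content flagging.
# All phrases, examples and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Stage 1: pre-defined rules - simple keyword and phrase matching.
BANNED_PHRASES = {"example banned slogan", "another banned phrase"}  # hypothetical

def rule_flag(post: str) -> bool:
    """True if the post contains a known banned phrase."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

# Stage 2: a statistical classifier trained on labelled examples.
# A real system would be trained on millions of human-reviewed posts.
train_posts = [
    "we are proud of our country and its people",        # benign (toy label)
    "join us and keep everyone else out of our nation",  # hateful (toy label)
    "lovely day at the beach with the family",            # benign (toy label)
    "our people must separate themselves from the rest",  # hateful (toy label)
]
train_labels = [0, 1, 0, 1]  # 0 = acceptable, 1 = send to human review

vectoriser = TfidfVectorizer()
features = vectoriser.fit_transform(train_posts)
classifier = LogisticRegression()
classifier.fit(features, train_labels)

def flag_for_review(post: str, threshold: float = 0.6) -> bool:
    """True if the post should be routed to a human moderator."""
    if rule_flag(post):
        return True
    prob = classifier.predict_proba(vectoriser.transform([post]))[0][1]
    return prob >= threshold

# Usage: this example trips the keyword rule, so it is flagged immediately.
print(flag_for_review("check out this example banned slogan"))
```

In practice the automated layer only routes borderline material to human moderators, which is why the quality of the training examples and the choice of threshold matter so much.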

What will be the role of Facebook's army of content moderators?

The more than 10,000 content moderators at Facebook will now be tasked with manually checking the content that these newly tweaked computer algorithms detect and flag for further scrutiny. This will increase their workload, so Facebook may end up having to hire more people to do that unpleasant and emotionally draining job.

They'll also have to be adept at determining the line between reasonable nationalism and content that subtly pushes the white supremacist barrow. As such, Facebook may have to assemble a specialist team of moderators tasked with that particular job.

But as regular Facebook and Instagram users, we have a role to play as moderators too. By flagging content as objectionable, we put it in front of Facebook's team sooner and that helps fine tune their manual and automated systems. If you think a post has crossed the line into racist territory, report it.

Why hasn't it clamped down on white supremacy earlier?

Because Facebook hasn't had the blowtorch applied to it on the issue of white supremacy to this extent before. The Christchurch massacre has changed things. Even the violent demonstrations at Charlottesville didn't prompt the type of policy rethink Facebook will enact next week.

Facebook's priorities lay elsewhere - on detecting child porn and beheading videos - the content that had previously been most likely to cause harm and outrage. It has now seen the light and concluded that "white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups".

The change likely has a lot to do with the heightened talk in recent days of regulatory intervention to force social networks into action. Facebook hates the prospect of that. Its founder Mark Zuckerberg is also fundamentally lukewarm about the concept of vetting content on his platform. He has run into trouble before when trying to vet news articles on Facebook. He wants information to be free. But the utopia he dreamed of creating is dragged down to earth by human nature and the rotten apples who spread hate.

Will this apply to Facebook's live streaming video service?

In theory yes, but as we found with Christchurch, that automated moderation system is broken. For this "ban" to be meaningful, Facebook will have to invest in fixing its live-streaming issues or introduce restrictions on who can start a video stream. Again, it has the resources to throw at the problem, but it will be loath to sacrifice profitability to do so.

Could Facebook's move stifle free speech?

Yes, it could. Facebook is basically redefining what's considered to be civil discussion. AI systems aren't great at dealing with the subtle and the nuanced. White supremacists will also be gaming the system to see what they can get away with.

Manual content moderators who have mere seconds to consider a post or photo will get things wrong. There will be so-called "false negatives" and "false positives": decisions that see racist content slip through the filter, and decisions that see reasonable comment flagged and deleted. But that will have to be balanced against the overall benefits of removing the toxic elements.
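As a rough illustration of that trade-off, the sketch below computes precision and recall from an invented set of moderation outcomes. The counts are made up purely to show how the two error types pull against each other.

```python
# Toy illustration of the false-positive / false-negative trade-off.
# All counts are invented; they only show how the two error types are measured.
true_positives  = 90   # hateful posts correctly removed
false_negatives = 10   # hateful posts that slipped through the filter
false_positives = 25   # reasonable posts wrongly flagged and deleted
true_negatives  = 875  # reasonable posts correctly left alone

precision = true_positives / (true_positives + false_positives)  # how often a removal was justified
recall = true_positives / (true_positives + false_negatives)     # how much hateful content was caught

print(f"precision: {precision:.2f}, recall: {recall:.2f}")
# Tightening the filter tends to raise recall but lower precision:
# fewer hateful posts slip through, but more reasonable comment gets deleted.
```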

Will it work?

That is down to Facebook's willingness to live up to its rhetoric, to throw money and people at the problem. The social media giant has faced a barrage of criticism over its live-streaming of mass murder. If that won't force Facebook to change voluntarily, nothing will.

*Peter Griffin is the founder of the Science Media Centre, editor of Sciblogs, and a technology commentator for RNZ and the Listener.
