Facebook Bans White Power Content After Christchurch Massacre


After mosque murder spree, white nationalism and white separatism content banned by Facebook

Facebook has announced the ban from next week of “praise, support and representation of white nationalism and white separatism on Facebook and Instagram.”

The announcement, the firm said, marks the social networking giant taking a stand against hate, and follows the massacre of 50 people in mosques in Christchurch, New Zealand earlier this month – an attack that was partly live streamed online.

The move comes amid continual pressure from governments around the world wanting social networking firms such as Twitter, Facebook and Google to crack down on hate speech and other extremist content.


Hate Crackdown

Facebook said in a blog post that people and groups promoting white nationalism and white separatism were “deeply linked to organised hate groups and have no place on our services.”

“Our policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion – and that has always included white supremacy,” said Facebook. “We didn’t originally apply the same rationale to expressions of white nationalism and white separatism.”

“But over the past three months our conversations with members of civil society and academics who are experts in race relations around the world have confirmed that white nationalism and white separatism cannot be meaningfully separated from white supremacy and organised hate groups,” said the social network.

Facebook said it had carried out its own review of hate figures and organisations, which revealed “the overlap between white nationalism and white separatism and white supremacy.”

“Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and white separatism,” said Facebook.

The firm also admitted it needs to become faster at both finding and removing hate from its platforms, and said it will extend the use of AI tools it has deployed since 2017 to detect terrorist content.

“Over the past few years we have improved our ability to use machine learning and artificial intelligence to find material from terrorist groups,” said Facebook. “Last autumn, we started using similar tools to extend our efforts to a range of hate groups globally, including white supremacists. We’re making progress, but we know we have a lot more work to do.”

Facebook also said that people searching for white supremacy content would be redirected to Life After Hate, an organisation founded by former violent extremists that provides crisis intervention, education, support groups and outreach.

“Unfortunately, there will always be people who try to game our systems to spread hate,” said Facebook. “Our challenge is to stay ahead by continuing to improve our technologies, evolve our policies and work with experts who can bolster our own efforts. We are deeply committed and will share updates as this process moves forward.”

Welcomed Move

Facebook’s move was welcomed by New Zealand Prime Minister Jacinda Ardern, who had called for social media platforms to be accountable for what users post.

“Having said that, I’m pleased to see that they are including it, and that they have taken that step, but I still think that there is a conversation to be had with the international community about whether or not enough has been done,” she was quoted by Reuters as telling a media conference in Christchurch on Thursday.

“There are lessons to be learnt here in Christchurch and we don’t want anyone to have to learn those lessons over again,” she reportedly said.


Author: Tom Jowitt