Facebook has pledged to utilise artificial intelligence (AI) in order to keep terrorist content off the social network.

It should be noted that Facebook is a founding member of ‘Partnership on AI’ – a non-profit group founded last September by a number of tech firms to develop AI.

All of this comes as Facebook (and other tech firms) come under increasing political pressure from European leaders, in the wake of recent terrorist attacks.

Hostile Place

And Facebook was one of the tech firms that met with Home Secretary Amber Rudd in the wake of the attack in Westminster in March.

They pledged back then to work harder to tackle terrorist propaganda online, and now Facebook has opened up about its efforts.

Facebook’s Monika Bickert, director of Global Policy Management, and Brian Fishman, counterterrorism policy manager, said they wanted the platform to be a “hostile place for terrorists”.

“In the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online,” they wrote, before explaining how they use AI to keep terrorist content off Facebook, something Facebook has not talked about publicly before.

“Our stance is simple: There’s no place on Facebook for terrorism,” they said. “We remove terrorists and posts that support terrorism whenever we become aware of them.”

They admitted it is a challenge to keep terrorist content off the platform, which is used by nearly 2 billion people every month.

“We want to find terrorist content immediately, before people in our community have seen it,” they wrote. “Already, the majority of accounts we remove for terrorism we find ourselves. But we know we can do better at using technology – and specifically artificial intelligence – to stop the spread of terrorist content on Facebook.”

AI Usage

They said that while using AI to fight terrorism is a fairly recent development, it is already changing the way Facebook keeps potential terrorist propaganda and accounts off the platform.

“We are currently focusing our most cutting edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organisations in due course,” they wrote.

It seems that Facebook’s AI uses a combination of techniques to spot terrorist content.

This includes image matching (where the system checks if a video or picture matches known terrorism content).
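Facebook has not published how its image matching works, but the general shape of the technique can be sketched. Production systems use perceptual hashes (such as Microsoft’s PhotoDNA) that survive resizing and re-encoding; the exact digests and the sample image bytes below are hypothetical stand-ins used only to keep the sketch self-contained.

```python
import hashlib

# Hypothetical database of digests of previously removed terrorist images.
# A real system would store perceptual hashes, not exact SHA-256 digests.
KNOWN_DIGESTS = {
    hashlib.sha256(b"previously-removed-propaganda-image").hexdigest(),
}

def matches_known_content(upload_bytes: bytes) -> bool:
    """Return True if an upload exactly matches known removed content."""
    return hashlib.sha256(upload_bytes).hexdigest() in KNOWN_DIGESTS

print(matches_known_content(b"previously-removed-propaganda-image"))  # True
print(matches_known_content(b"holiday-photo"))                        # False
```

A match means the upload can be blocked before it ever appears in anyone’s feed, which is why this is the cheapest and fastest of the techniques described.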

Facebook has also begun using AI to understand any text that might be advocating for terrorism. Its AI algorithm is still in the early stages here, but it should “get better over time.”
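Again, Facebook has disclosed no details of its text model. As an illustration only, a text classifier of this kind assigns a score to a post and flags it for review above a threshold; the hand-picked terms, weights and threshold below are hypothetical, where a production system would learn them from labelled training data.

```python
# Hypothetical signal terms and weights; a real system would use a
# trained classifier, not a hand-built keyword list like this one.
SIGNAL_WEIGHTS = {"join": 0.2, "fight": 0.3, "martyr": 0.5}
THRESHOLD = 0.6

def flag_for_review(text: str) -> bool:
    """Score a post against the signal terms and flag it above the threshold."""
    score = sum(w for term, w in SIGNAL_WEIGHTS.items() if term in text.lower())
    return score >= THRESHOLD

print(flag_for_review("Come join the fight, become a martyr"))  # True
print(flag_for_review("Lovely weather in London today"))        # False
```

The gap between this sketch and a usable system, distinguishing advocacy from reporting or condemnation, is exactly the context problem Facebook describes below.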

Facebook also explained that it removes terrorist clusters: its algorithms “fan out” from removed accounts to identify related material that may also support terrorism. It has likewise become much faster at detecting new fake accounts created by repeat offenders.
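The “fan out” idea can be pictured as a bounded traversal of an interaction graph, starting from a removed account and walking out to the pages and accounts it engaged with. The graph, account names and hop limit below are all hypothetical; this is a sketch of the general approach, not Facebook’s implementation.

```python
from collections import deque

# Hypothetical interaction graph: account -> accounts/pages it engages with.
GRAPH = {
    "removed_account": ["page_a", "account_b"],
    "page_a": ["account_c"],
    "account_b": [],
    "account_c": [],
}

def fan_out(seed: str, max_hops: int = 2) -> set:
    """Breadth-first walk from a removed account to surface related material."""
    seen, queue = {seed}, deque([(seed, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue  # stop expanding beyond the hop limit
        for neighbour in GRAPH.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, hops + 1))
    return seen - {seed}

print(sorted(fan_out("removed_account")))  # ['account_b', 'account_c', 'page_a']
```

Everything the walk surfaces would then be queued for review rather than removed automatically, since association alone is weak evidence.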

And Facebook’s anti-terror efforts are also applied across its other platforms, including WhatsApp and Instagram.

Human Factor

But it insists that, at the moment, AI cannot catch everything, and its algorithms are not yet as good as people at understanding terrorist-related context.

Facebook cited an example of a photo of an armed man waving an ISIS flag, which could be propaganda or recruiting material, but could also be an image in a news story.

This is why Facebook needs humans. It needs the Facebook community as a whole to report accounts or content that may violate its policies, and it has its own team of terrorism and safety specialists.

Indeed, Facebook said that in the past year it has grown its team of counterterrorism specialists to more than 150 people, who between them speak nearly 30 languages. This human counterterrorism team also responds immediately to law enforcement requests.

And lastly Facebook said it partners with other tech firms, researchers and governments, to quickly identify and slow the spread of terrorist content online.

For example, in December last year it joined with Microsoft, Twitter and Google to create a shared industry database that can quickly identify terrorist content.

“We want Facebook to be a hostile place for terrorists,” they wrote. “The challenge for online communities is the same as it is for real world communities – to get better at spotting the early signals before it’s too late.”

That said, Facebook and governments around the world continue to disagree over backdoor access to their systems, and the delicate issue of encryption remains a touchy subject.


Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
