UK Government Shows Off AI Tool For Extremist Video Takedown

Home Secretary, on a visit to the US, warns tech firms the government may force them to use the tool for UK operations

The British government has this week unveiled an AI tool that it claims can automatically detect terrorist content on any online platform.

The AI tool is said to automatically detect 94 percent of Daesh propaganda with 99.995 percent accuracy.

The release of the tool comes as part of a two-day visit to San Francisco by Home Secretary Amber Rudd, where she is meeting tech firms as well as US Secretary of Homeland Security Kirstjen Nielsen to discuss how the UK and US can work together to tackle terrorist content online.

AI Tool

The Home Office said it had worked with London-based ASI Data Science to develop the AI tool, which utilises “advanced machine learning to analyse the audio and visuals of a video to determine whether it could be Daesh (ISIS) propaganda.”

The government contributed £600,000 to help fund the development of the tool.

It is said to be highly accurate, and if it were to analyse one million randomly selected videos, only 50 would require additional human review.
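
As a rough sanity check of those figures, treating the quoted 99.995 percent accuracy as a 0.005 percent false-positive rate on ordinary videos (our reading, not something the Home Office has spelled out), the arithmetic works out as follows:

    # Back-of-the-envelope check of the Home Office figures.
    # Assumption: "99.995 percent accuracy" means 0.005 percent of ordinary
    # videos are wrongly flagged and passed to a human reviewer.
    false_positive_rate = 1 - 0.99995      # i.e. 0.005 percent
    videos_analysed = 1_000_000

    flagged_for_review = false_positive_rate * videos_analysed
    print(f"Videos needing human review: {flagged_for_review:.0f}")  # roughly 50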

What is more, the tool can be used by any platform and integrated into the upload process. The thinking is that the majority of video propaganda could be stopped before it ever reaches the internet.
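
As a purely illustrative sketch, assuming a hypothetical classifier interface and decision threshold (neither reflects the actual ASI Data Science tool or any platform's API), an upload-time check of this kind might look something like the following:

    # Hypothetical sketch of screening a video at upload time, before publication.
    # The function names, the dummy classifier and the 0.5 threshold are
    # illustrative assumptions, not the real ASI Data Science tool.

    def classify_video(path: str) -> float:
        """Stand-in for the propaganda classifier; returns a score in [0, 1]."""
        return 0.0  # dummy score so the sketch runs end to end

    def handle_upload(path: str) -> str:
        score = classify_video(path)
        if score >= 0.5:                    # assumed decision threshold
            return "held for human review"  # stopped before it is published
        return "published"

    print(handle_upload("example.mp4"))     # -> "published" with the dummy score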

The Home Office and ASI said they would share the methodology behind the new model with smaller companies, in order to help combat the abuse of their platforms by terrorists and their supporters.

Of course, bigger tech firms such as Google and Facebook have their own technology and processes to detect and take down extremist content.

But the UK government has warned that smaller platforms such as Vimeo, Telegra.ph and pCloud are increasingly targeted by Daesh and its supporters to spread their propaganda, and these smaller players often do not have the same level of resources to develop technology.

The tool was trained using more than 1,000 Daesh videos.

“Over the last year we have been engaging with internet companies to make sure that their platforms are not being abused by terrorists and their supporters,” said Home Secretary Amber Rudd. “I have been impressed with their work so far following the launch of the Global Internet Forum to Counter-Terrorism, although there is still more to do, and I hope this new technology the Home Office has helped develop can support others to go further and faster.”

Rudd has this week travelled to Silicon Valley to hold a series of meetings with the main communication service providers to discuss tackling terrorist content online.

Legal Requirement?

And she warned that the British government could force tech firms to use the tool.

“We’re not going to rule out taking legislative action if we need to do it,” the Home Secretary was quoted by the BBC as saying.

“But I remain convinced that the best way to take real action, to have the best outcomes, is to have an industry-led forum like the one we’ve got.”

Facebook has previously said it partners with other tech firms, researchers and governments to quickly identify and slow the spread of terrorist content online.

And in December 2016 it joined with Microsoft, Twitter and Google to create a shared industry database that can quickly identify terrorist content.
