Google Insists AI Better Than Humans For Extremist Content Takedown

Google has provided an update about how it goes about tackling illegal, extremist and hate speech content on its video-sharing website YouTube.

It revealed that its “cutting-edge machine learning technology” is faster and more accurate than humans at removing illicit content from YouTube.

It comes after the search engine giant found itself at the centre of controversy, having been accused by some of ‘cashing in on extremist content.’

AI Extremist Content

Matters got even worse for Google in March this year, when big-name organisations including Sky, Vodafone, the British government, Marks & Spencer, the BBC, The Guardian and large advertising companies pulled their adverts from YouTube over concerns the ads were being displayed alongside extremist content.

Google said at the time that it flagged and then reviewed questionable content, and dealt with about 200,000 flags per day. Indeed, it said it reviewed 98 percent of the material flagged within 24 hours.

But then in June Google said it would implement new measures to curb the use of YouTube as a propaganda tool for extremists.

These new steps included greater use of technology to identify extremism-related videos, the recruitment of more independent staff to flag such content, measures to make borderline videos less visible, and expanded support for counter-radicalisation efforts.

And now Google has provided an update as to the progress of these new measures.

It revealed in a blog post that better detection and faster removal is being driven by machine learning.

AI Improvements

“We’ve always used a mix of technology and human review to address the ever-changing challenges around controversial content on YouTube,” Google wrote. “We recently began developing and implementing cutting-edge machine learning technology designed to help us identify and remove violent extremism and terrorism-related content in a scalable way. We have started rolling out these tools and we are already seeing some positive progress.”

The firm said that its machine learning systems are faster and more effective than ever before, and over 75 percent of the videos that were removed for violent extremism over the past month were taken down before receiving a single human flag.

Google also said these AI systems have proved more accurate than humans at flagging videos that need to be removed.

These AI systems have also helped Google cope with the sheer scale of the issue: the company said its initial use of machine learning has more than doubled both the number of videos removed for violent extremism and the rate at which such content is taken down.
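To make concrete what a “mix of technology and human review” might look like in practice, here is a minimal, hypothetical sketch of a triage step in Python: a classifier score drives automatic removal of high-confidence cases, while uncertain cases are queued for human flaggers. The thresholds, names and data structures are assumptions for illustration only and are not drawn from YouTube’s actual systems.

from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical thresholds, illustrative only -- not YouTube's actual values.
AUTO_REMOVE_THRESHOLD = 0.95   # classifier is highly confident the video violates policy
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain cases are routed to human reviewers

@dataclass
class Video:
    video_id: str
    extremism_score: float  # assumed output of an ML classifier, between 0.0 and 1.0

def triage(videos: List[Video]) -> Tuple[List[Video], List[Video], List[Video]]:
    """Split videos into automatic removals, a human-review queue, and no action."""
    auto_removed, review_queue, cleared = [], [], []
    for video in videos:
        if video.extremism_score >= AUTO_REMOVE_THRESHOLD:
            auto_removed.append(video)    # taken down before receiving a human flag
        elif video.extremism_score >= HUMAN_REVIEW_THRESHOLD:
            review_queue.append(video)    # sent to trusted human flaggers for review
        else:
            cleared.append(video)         # no action taken
    return auto_removed, review_queue, cleared

if __name__ == "__main__":
    sample = [Video("a1", 0.98), Video("b2", 0.72), Video("c3", 0.10)]
    removed, queued, cleared = triage(sample)
    print(f"auto-removed: {len(removed)}, queued: {len(queued)}, cleared: {len(cleared)}")

In a design along these lines the automatic path handles scale, while the human queue concentrates reviewer time on borderline material, which is consistent with the division of labour Google describes.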

“We are encouraged by these improvements, and will continue to develop our technology in order to make even more progress,” said Google. “We are also hiring more people to help review and enforce our policies, and will continue to invest in technical resources to keep pace with these issues and address them responsibly.”

Google also pledged to introduce tougher standards. Videos that do not violate its policies but still contain controversial religious or supremacist content will be placed in a ‘limited state’, in which comments are disabled and the videos cannot be monetised.

“Altogether, we have taken significant steps over the last month in our fight against online terrorism,” said Google’s YouTube team. “But this is not the end. We know there is always more work to be done. With the help of new machine learning technology, deep partnerships, ongoing collaborations with other companies through the Global Internet Forum, and our vigilant community we are confident we can continue to make progress against this ever-changing threat.”

In June, the UK and France said they were considering imposing fines on social media companies that fail to remove extremist content, as part of a joint national security effort in the wake of terror attacks in both countries.


Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
