Google is to add more engineering resources and human experts to its efforts to remove extremist content from YouTube
Google said it plans to implement new measures to curb the use of YouTube as a propaganda tool for extremists, amid growing pressure from the UK government to take action and following a series of attacks in the country in recent months.
In an editorial published in the Financial Times on Sunday, later republished as a blog post, Google called extremism an “attack on open societies”.
‘More needs to be done’
“While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done,” wrote Google general counsel Kent Walker.
The four measures outlined in the article include increased use of technology to identify extremism-related videos, an increase in the number of independent experts who flag such content, steps to make borderline videos less visible and expanded support for counter-radicalisation efforts.
Google also reiterated that it’s working with companies including Microsoft, Facebook and Twitter to establish an industry forum to develop technology that can be used by other companies to identify and remove extremist content online.
The technical resources are to include an expansion of the engineering support for training machine learning tools that can more quickly catch militant videos, Google said.
“We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months,” Walker wrote.
The company said it’s expanding YouTube’s Trusted Flagger programme, which relies on independent experts to identify videos that don’t meet YouTube’s standards. It will add 50 non-governmental organisations (NGOs), supported by grant funding, to the 63 that already participate in the programme.
In the case of videos that don’t clearly violate YouTube’s policies, but which are considered inflammatory, Google plans to display the content behind an interstitial warning and won’t allow the content to display ads, user endorsements or comments.
“That means these videos will have less engagement and be harder to find,” wrote Walker.
Google said YouTube also plans to expand its participation in a programme that displays targeted anti-extremist adverts that point viewers to videos aiming to debunk militant propaganda.
Social media companies including Google, Facebook and Twitter have been criticised for allowing extremist content and misinformation to flourish on their platforms while blocking content that appears relatively harmless, but which could offend the sensibilities of their most prudish users.
Facebook, for instance, was castigated earlier this year for asking an Italian art historian to remove a photo of the famous sixteenth-century statue of the god Neptune that stands in a public square in Bologna, because of the statue’s nudity, and in the past it has also censored Gustave Courbet’s painting The Origin of the World.
The House of Commons Home Affairs Select Committee recently issued a report heavily critical of social networks’ efforts to remove illegal content.
Labour MP Yvette Cooper, who chairs the committee, said she welcomed Google’s latest efforts.
“The select committee recommended that they should be more proactive in searching for – and taking down – illegal and extremist content, and to invest more of their profits in moderation,” Cooper said. “News that Google will now proactively scan content and fund the trusted flaggers who were helping to moderate their own site is therefore important and welcome.”