Coronavirus: AI Takes Over Social Network Moderation As Staff Sent Home

Social networking giants such as YouTube, Twitter and Facebook are now all relying on artificial intelligence and automated tools to police material posted to their platforms.

The firms are turning to automated tools for content review, as the staff from outsourced firms usually used to perform this task are sent home due to the coronavirus pandemic sweeping the globe.

The lack of human oversight has already led to some mistakes. Reuters for example quoted Facebook’s head of safety as saying on Tuesday that a bug was responsible for posts on topics including coronavirus being erroneously marked as spam.

Image credit: Centers for Disease Control and Prevention

AI moderation

“This is a bug in an anti-spam system, unrelated to any change in our content moderator workforce,” Guy Rosen, Facebook’s vice president for integrity, reportedly said on Twitter.

“We’ve restored all the posts that were incorrectly removed, which included posts on all topics – not just those related to COVID-19. This was an issue with an automated system that removes links to abusive websites, but incorrectly removed a lot of other posts too,” he reportedly said.

Facebook users had shared screenshots with Reuters of notifications they had received saying articles from prominent news organisations had violated the company’s community guidelines.

Facebook at the weekend closed its London offices for ‘deep cleaning’, after a visiting employee from Singapore was diagnosed with coronavirus.

The firms admit that using automated systems to review posted material may lead to some mistakes, but they insist they still need to remove harmful content.

This is especially important at the moment considering the current state of the world, and the dangers posed by those touting false information as fact.

Indeed, the Covid-19 pandemic has led to a surge of medical misinformation across the web.

Fewer people

“We believe the investments we’ve made over the past three years have prepared us for this situation,” said Facebook in a blog post on the matter. “With fewer people available for human review we’ll continue to prioritise imminent harm and increase our reliance on proactive detection in other areas to remove violating content.”

“We don’t expect this to impact people using our platform in any noticeable way,” Facebook said. “That said, there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result.”

“These are unprecedented times, but the safety and security of our platform will continue,” it said. “We are grateful to all of our teams working hard to continue doing the essential work to keep our community safe.”

Typically, social networking giants outsource the human oversight of questionable content to third-party firms in locations such as India and the United States.


Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
