Coronavirus: AI Takes Over Social Network Moderation As Staff Sent Home

Social networking giants such as YouTube, Twitter and Facebook are now all relying on artificial intelligence and automated tools to police material posted to their platforms.

The firms are turning to automated tools for content review as the staff at the outsourced firms usually used to perform this task are sent home due to the coronavirus pandemic sweeping the globe.

The lack of human oversight has already led to some mistakes. Reuters, for example, quoted Facebook’s head of safety as saying on Tuesday that a bug had caused posts on a range of topics, including the coronavirus, to be erroneously marked as spam.

Image credit: Centers for Disease Control and Prevention

AI moderation

“This is a bug in an anti-spam system, unrelated to any change in our content moderator workforce,” Guy Rosen, Facebook’s vice president for integrity, reportedly said on Twitter.

“We’ve restored all the posts that were incorrectly removed, which included posts on all topics – not just those related to COVID-19. This was an issue with an automated system that removes links to abusive websites, but incorrectly removed a lot of other posts too,” he reportedly said.

Facebook users had shared screenshots with Reuters of notifications they received saying that articles from prominent news organisations had violated the company’s community guidelines.

Facebook at the weekend closed its London offices for ‘deep cleaning’, after a visiting employee from Singapore was diagnosed with coronavirus.

The firms admit that using automated systems to fact-check posted material may lead to some mistakes, but they insist they still need to remove harmful content.

This is especially important at the moment, considering the current state of the world and the dangers posed by those touting false information as fact.

Indeed, the Covid-19 pandemic has led to a surge of medical misinformation across the web.

Fewer people

“We believe the investments we’ve made over the past three years have prepared us for this situation,” said Facebook in a blog post on the matter. “With fewer people available for human review we’ll continue to prioritise imminent harm and increase our reliance on proactive detection in other areas to remove violating content.”

“We don’t expect this to impact people using our platform in any noticeable way,” Facebook said. “That said, there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result.”

“These are unprecedented times, but the safety and security of our platform will continue,” it said. “We are grateful to all of our teams working hard to continue doing the essential work to keep our community safe.”

Social networking giants typically outsource the human review of questionable content to third-party firms in locations such as India and the United States.


Tom Jowitt
