Depressing stats see social network remove 11.6 million pieces of child abuse content in three-month period
The toxic side of social networking has been laid bare in the latest ‘Community Standards Enforcement Report’ from Facebook.
It revealed that, across Facebook and now Instagram, the company removed 11.6 million pieces of content related to child nudity and child sexual exploitation between July and September this year.
The shocking stats also include, for the first time, Facebook's actions against suicide and self-injury content, following an active campaign by the family of 14-year-old Molly Russell, who took her own life in 2017 after viewing self-harm images on Instagram.
Earlier this year a government initiative saw tech giants such as Google, Facebook and Snapchat working alongside suicide prevention experts from the Samaritans.
The move was part of an initiative seeking to improve the response of tech giants to unsafe online content, such as child abuse, self-harm, suicide and terrorism, as well as anti-vaccination content.
Facebook said it had taken action on about 2 million pieces of self-harm content in Q2 2019, 96.1 percent of which it detected proactively.
In the third quarter, it said it had removed 2.5 million pieces of self-harm content, 97.3 percent of which it detected proactively.
On Instagram, it said it had removed about 835,000 pieces of content in Q2 2019, 77.8 percent of which it detected proactively.
During the third quarter, it removed about 845,000 pieces of self-harm content, 79.1 percent of which was detected proactively.
Facebook said that it had also expanded its clampdown on terrorist propaganda. It previously reported only on its efforts against al Qaeda, ISIS and their affiliates, but this has now been widened to cover actions “against all terrorist organisations.”
Facebook also revealed it was making progress in tackling child exploitation: during the third quarter, it removed about 11.6 million pieces of such content.
This was up from Q1 2019, when it removed about 5.8 million pieces. It said it proactively detected over 99 percent of this type of content.
On Instagram, it removed about 512,000 pieces of content in Q2 2019, rising to 754,000 pieces in Q3.