TikTok trials automatic deletion of some content, as social media companies face pressure to stem misinformation and illegal material
Fast-growing video sharing service TikTok is to begin using more automation to remove videos from its platform that violate its rules.
TikTok’s new technology will remove some videos automatically when they’re uploaded, rather than waiting for them to be reviewed by the safety team. The company is also changing the way it notifies users when their content is removed.
The firm said the change would free up staff to focus on more nuanced areas, such as hate speech, bullying and harassment.
The automatic deletion trials will apply to categories where the technology has shown the highest degree of accuracy, such as illegal activities and regulated goods.
TikTok has been testing the system in other markets and will now begin trialling it in North America, it said.
“Over the last year in different markets we’ve been trialling and adjusting new systems that identify and remove violative content and notify people of their violations,” said head of US safety Eric Han.
The company said its tests show a false positive rate of 5% for automated removals, without any rise in requests to appeal a video’s removal.
The safety team will continue to review reports and content removal appeals.
The changes are part of a wider effort by TikTok to be more transparent about the way it moderates content.
The firm will also change the way it notifies users who violate content rules, initially sending an in-app warning. Repeated violations will trigger a further notification that the account is on the verge of being banned.
Uploading content for which TikTok has a zero-tolerance policy, such as certain types of illegal content, will result in an automatic ban.
Accrued violations will expire from a person’s record over time.