Twitter Targets Trolls And Abuse With ‘Safety Mode’


Making Twitter a little less toxic? Micro-blogging platform unveils ‘Safety Mode’, designed to crack down on abuse and trolling

Twitter has unveiled what it calls ‘Safety Mode’, as part of its ongoing efforts to make the platform a bit less toxic.

Safety Mode is designed to crack down on abuse and trolling. It will flag accounts that are using hateful remarks, or that are bombarding people with uninvited comments, and block them for seven days.

Once enabled, Safety Mode works automatically, removing the need for users to confront and take action against unwelcome tweets themselves.

Safety Mode

Twitter unveiled the option in a blog post by senior product manager Jarrod Doherty.

“Feeling safe on Twitter looks different for everyone,” Doherty wrote. “We’ve rolled out features and settings that may help you to feel more comfortable and in control of your experience, and we want to do more to reduce the burden on people dealing with unwelcome interactions.”

“Unwelcome Tweets and noise can get in the way of conversations on Twitter, so we’re introducing Safety Mode, a new feature that aims to reduce disruptive interactions,” wrote Doherty. “Starting today, we’re rolling out this safety feature to a small feedback group on iOS, Android, and Twitter.com, beginning with accounts that have English-language settings enabled.”

Doherty explained that Safety Mode works by temporarily blocking accounts for seven days when they use potentially harmful language – such as insults or hateful remarks – or send repetitive and uninvited replies or mentions.

“When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the Tweet’s content and the relationship between the Tweet author and replier,” wrote Doherty. “Our technology takes existing relationships into account, so accounts you follow or frequently interact with will not be autoblocked.”

Doherty said authors of tweets found by Twitter’s technology to be harmful or uninvited will be autoblocked, meaning they will temporarily be unable to follow the person’s account, see their tweets, or send them Direct Messages.

“We’ll observe how Safety Mode is working and incorporate improvements and adjustments before bringing it to everyone on Twitter,” he concluded.

Toxic atmosphere

Twitter has long been known for its toxic atmosphere, with plenty of high-profile examples over the years.

Comedian Ricky Gervais, for example, has previously lamented Twitter’s atmosphere, remarking that “if you’re mildly conservative on Twitter, you’re Hitler.”

In recent years the platform has been experimenting with options to reduce abuse, including limiting who can reply to a person’s tweets.

Last year Twitter announced it was testing a prompt that warns users when their tweet reply uses “harmful language”.

Twitter co-founder and chief executive Jack Dorsey said in April 2019 that he wanted to change the platform and move “away from outrage and mob behaviour and towards productive, healthy conversation.”

One such measure, intended to stop the platform being used to distort the political landscape, saw Twitter ban all political advertising worldwide in November 2019.

Author: Tom Jowitt