Twitter To Warn Users About Their ‘Harmful’ Tweets

Toxic Twitter. Microblogging service tests sending users a prompt, warning them when their tweet reply uses “harmful language”

Twitter is well known for its toxic environment, but the microblogging service is continuing its efforts to make the platform less hostile.

Twitter is now giving users a chance to rethink an offensive or hurtful reply, by testing a prompt that appears when a reply uses “harmful language.”

In January this year, Twitter said it would experiment with limiting replies to a user’s tweet, in an effort to combat online abuse.

Toxic Twitter?

The experiment comes as Twitter grapples with its deserved reputation for toxicity.

Indeed, Twitter co-founder and chief executive officer (CEO) Jack Dorsey said in April 2019 that he wanted to change the platform and move “away from outrage and mob behaviour and towards productive, healthy conversation.”

One such measure, intended to stop the platform being used to distort the political landscape, saw Twitter ban all political advertising worldwide in November 2019.

Prior to that, in October 2019, Twitter had clarified its rules for world leaders using the micro-blogging platform to push their views, after calls for the suspension of President Donald Trump’s account.

Now Twitter has announced, in a tweet on Tuesday, that it is experimenting with giving users of iOS devices a chance to rethink a potentially offensive or nasty tweet.

“When things get heated, you may say things you don’t mean,” Twitter said. “To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.”

The way it works is that when a user hits “send” on a reply, they will be told if the wording in their tweet is similar to that in posts that have been reported, and asked whether they would like to revise it.
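
In rough terms, this amounts to comparing a draft reply against a corpus of reported posts and prompting the user if the similarity crosses some threshold. The sketch below illustrates the idea in Python with a crude word-overlap score; the function names, the threshold, and the heuristic itself are illustrative assumptions, as Twitter has not disclosed how its check actually works.

```python
# Illustrative sketch only: flag a reply whose wording resembles previously
# reported posts, offering the user a chance to revise before publishing.
# The similarity heuristic and all names here are assumptions, not Twitter's
# actual (undisclosed) method.

def word_overlap(reply: str, post: str) -> float:
    """Fraction of the reply's words that also appear in the post."""
    reply_words = set(reply.lower().split())
    post_words = set(post.lower().split())
    if not reply_words:
        return 0.0
    return len(reply_words & post_words) / len(reply_words)

def check_before_send(reply: str, reported_posts: list[str],
                      threshold: float = 0.6) -> str:
    """Return 'prompt_to_revise' if the reply resembles reported content,
    otherwise 'publish'. The user may still send the reply unchanged."""
    for post in reported_posts:
        if word_overlap(reply, post) >= threshold:
            return "prompt_to_revise"
    return "publish"

# Example: a reply echoing reported language triggers the prompt.
reported = ["you are a total idiot and a loser"]
print(check_before_send("what an idiot loser you are", reported))
# -> prompt_to_revise
```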

Second chance

“We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” Sunita Saligram, Twitter’s global head of site policy for trust and safety, said in an interview with Reuters.

Of course, Twitter’s policies do not allow users to target individuals with slurs, racist or sexist tropes, or degrading content.

Nevertheless, Twitter can be a toxic and bruising environment at times.

Twitter reportedly took action against almost 396,000 accounts under its abuse policies and more than 584,000 accounts under its hateful conduct policies between January and June of last year, according to its transparency report.

When Reuters asked whether the experiment would instead give users a playbook for finding loopholes in Twitter’s rules on offensive language, Saligram said it was targeted at the majority of rule breakers, who are not repeat offenders.

Twitter said the experiment will start on Tuesday and last at least a few weeks.

The test will run globally but only for English-language tweets.
