Twitter To Hide Tweets That Share False Information During A Crisis


Potentially risking Elon’s wrath over free speech, Twitter says it will hide tweets spreading misinformation during a crisis

Twitter has announced a content moderation policy change at a time when this is a very sensitive subject for the platform, in light of Elon Musk’s free speech stance.

Although Elon Musk's $44 billion takeover attempt is currently on hold due to a disagreement over the number of fake accounts, which could yet lead to a legal battle, Musk has made clear his desire for free speech on the platform, and has even said he will reverse the ban on Donald Trump.

This makes Twitter’s decision to change its crisis misinformation policy noteworthy, and it is sure to be scrutinised by many observers and interested parties.

Censorship, freedom of speech © Melinda Fawver Shutterstock 2012

Crisis Misinformation

The new policy for dealing with misinformation during a period of crisis was revealed in a blog post on Thursday by Yoel Roth, Twitter’s head of safety and integrity.

It seeks to establish new standards for gating or blocking the promotion of certain tweets if they are seen as spreading misinformation.

“During periods of crisis – such as situations of armed conflict, public health emergencies, and large-scale natural disasters – access to credible, authoritative information and resources is all the more critical,” noted Twitter’s Roth.

“Today, we’re introducing our crisis misinformation policy – a global policy that will guide our efforts to elevate credible, authoritative information, and will help to ensure viral misinformation isn’t amplified or recommended by us during crises,” he wrote.

He noted that teams at Twitter have been working to develop a crisis misinformation framework since last year, and have gathered input from global experts and human rights organisations on the matter.

Twitter defines a crisis as a situation in which there is a widespread threat to life, physical safety, health, or basic subsistence.

Twitter will initially apply the policy to content concerning the illegal Russian invasion of Ukraine, but the company expects to apply the rules to all emerging crises going forward.

“During moments of crisis, establishing whether something is true or false can be exceptionally challenging,” wrote Roth. “To determine whether claims are misleading, we require verification from multiple credible, publicly available sources, including evidence from conflict monitoring groups, humanitarian organisations, open-source investigators, journalists, and more.”

Donald Trump was noted for tweeting factually incorrect information during his time on Twitter, but this was mostly not during a life-threatening crisis, other than the Covid-19 pandemic. Despite that, Twitter applied ‘fact-checking alerts’ to his tweets on multiple occasions.

Actual Steps

Twitter’s Roth said that in order to reduce potential harm, the platform will stop amplifying or recommending content as soon as it has evidence that a claim may be misleading.

In addition, it will prioritise adding warning notices to highly visible Tweets and to Tweets from high-profile accounts, such as state-affiliated media accounts and verified, official government accounts.

Some examples of Tweets that Twitter may add a warning notice to include:

  • False coverage or event reporting, or information that mischaracterizes conditions on the ground as a conflict evolves;
  • False allegations regarding use of force, incursions on territorial sovereignty, or around the use of weapons;
  • Demonstrably false or misleading allegations of war crimes or mass atrocities against specific populations;
  • False information regarding international community response, sanctions, defensive actions, or humanitarian operations.

People on Twitter will be required to click through the warning notice to view the Tweet, and the content won’t be amplified or recommended across the service.

In addition, Likes, Retweets, and Shares will be disabled.
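Taken together, the steps Roth describes amount to a simple decision flow for a flagged Tweet. The sketch below is purely illustrative and not Twitter’s actual implementation; the `Tweet` fields and the `apply_crisis_policy` function are hypothetical names used only to show how the four actions (no amplification, a warning label, a click-through requirement, and disabled engagement) combine under the stated policy.

```python
from dataclasses import dataclass, field

@dataclass
class Tweet:
    # Hypothetical model of the attributes the policy acts on.
    author_is_high_profile: bool          # e.g. state-affiliated media or official government account
    flagged_as_misleading: bool           # claim contradicted by multiple credible, public sources
    labels: list = field(default_factory=list)
    amplification_enabled: bool = True
    click_through_required: bool = False
    engagement_enabled: bool = True       # Likes, Retweets, Shares

def apply_crisis_policy(tweet: Tweet) -> Tweet:
    """Illustrative sketch of the crisis-misinformation actions described above."""
    if not tweet.flagged_as_misleading:
        return tweet                      # unaffected content is left alone

    # Step 1: stop amplifying or recommending the content as soon as
    # there is evidence the claim may be misleading.
    tweet.amplification_enabled = False

    # Step 2: add a warning notice; highly visible and high-profile
    # accounts are prioritised for labelling.
    tweet.labels.append("crisis misinformation warning")

    # Step 3: readers must click through the notice to view the Tweet.
    tweet.click_through_required = True

    # Step 4: in severe cases, disable Likes, Retweets, and Shares.
    tweet.engagement_enabled = False

    return tweet
```

In this reading, the policy does not remove the Tweet itself; it only changes how the content can spread and be interacted with, which matches Roth’s framing of proportionate actions below.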

“Content moderation is more than just leaving up or taking down content, and we’ve expanded the range of actions we may take to ensure they’re proportionate to the severity of the potential harm,” noted Roth.

“We’ve found that not amplifying or recommending certain content, adding context through labels, and in severe cases, disabling engagement with the Tweets, are effective ways to mitigate harm, while still preserving speech and records of critical global events,” he wrote.