Twitter Finds Algorithms Give More Prominence To Political Right

Twitter has found that its own algorithms give more prominence to content from the political right than to the left.

The research, led by Twitter’s own machine-learning ethics team, comes as social media platforms are facing increasing scrutiny of their practices, including allegations of anti-conservative bias.

The company found that its system for recommending content to users tended to give higher levels of “algorithmic amplification” to messages from mainstream political parties and news outlets on the right than to their counterparts on the left.

But Twitter said it did not know the reason for the phenomenon, describing this as “a more difficult question to answer”.

Political content

The study examined millions of tweets from elected officials and hundreds of millions of tweets by users sharing links to articles from news outlets in seven countries – Canada, France, Germany, Japan, Spain, the UK, and the US – from 1 April to 15 August 2020.

In all countries except Germany, tweets containing messages from the political right were given more prominence on Twitter’s algorithmically ordered news feed, when compared with a reverse-chronological feed, which users also have the option of viewing.

The company used a classification system from third-party researchers to assign a political affiliation to parties and outlets.

The company hasn’t yet examined what the reason for the phenomenon might be, but plans to look at that question next, said Rumman Chowdhury, director of Twitter’s machine-learning, ethics, transparency, and accountability (Meta) team.

“In six out of seven countries, tweets posted by political-right elected officials are algorithmically amplified more than the political left,” she said. “Right-leaning news outlets… see greater amplification compared to left-leaning.

“Establishing why these observed patterns occur is a significantly more difficult question to answer and something Meta will examine.”

Social biases

She said the team would seek to mitigate any inequity the algorithm might be causing, but that the phenomenon was not “problematic by default”.

“Algorithmic amplification is problematic if there is preferential treatment as a function of how the algorithm is constructed versus the interactions people have with it,” she said.

Researchers said the contrast could result from the “differing strategies” political parties use to interact with audiences on the platform.

The study also found no evidence that the algorithms amplified “extreme ideologies more than mainstream political voices”.

Social media platforms have faced criticism over the proliferation of extremist content on their sites.

In May Twitter discontinued an automatic image-cropping system after finding it emphasised white individuals over black people, and women over men.

In August the company began offering bounties for researchers who could discover biases in its AI systems.

Matthew Broersma

Matt Broersma is a long-standing freelance technology journalist who has written for Ziff-Davis, ZDNet and other leading publications.
