New research from the Center for Countering Digital Hate alleges X is failing to remove posts containing “extreme hate”, despite being notified
The UK-based research group targeted by Elon Musk in a lawsuit is not backing down, and alleges that X (formerly Twitter) is failing in its content moderation practices.
The Center for Countering Digital Hate (CCDH), a non-profit that fights hate speech and disinformation, wrote in a blog post on Wednesday that after it “reported 300 tweets containing extreme hate to X”, the platform “left up 259 of them.”
Elon Musk and CCDH have a history. Last month X Corp followed through on its legal threat and sued CCDH, accusing it of making false claims and scaring off advertisers.
The Center for Countering Digital Hate in turn accused Elon Musk’s company of intimidation and said its allegations had no basis in fact.
CCDH said it would fight the lawsuit and keep holding “Twitter’s feet to the fire”.
Now, in the new blog post, CCDH wrote that its research “shows that X (formerly Twitter) continues to host nearly 86 percent of a set of 300 posts reported for hate speech – which included tweets promoting and glorifying antisemitism, anti-Black racism, neo-Nazism, white supremacy and/or other racism.”
CCDH also said it found dozens of advertisements for household brands, such as Apple and Disney, appearing next to hate speech – despite X CEO Linda Yaccarino’s claims to have “built brand safety and content moderation tools that have never existed before at this company.”
CCDH researchers collected a sample of 300 posts categorised as containing hate speech from 100 accounts (three posts per account). It said that, taken together, the 100 accounts identified in the research have a combined total of 1,060,106 followers.
One week after the posts were reported to moderators (on August 30 and 31) via official reporting tools, researchers found that X left up 259 of 300 posts (86.33 percent).
Ninety of the 100 accounts also remained active, CCDH alleged.
CCDH alleged that each post was in clear violation of at least one of X’s policies against hateful conduct, which prohibit incitement and harassment of others on the basis of protected characteristics.
According to CCDH, some posts were also in violation of rules against slurs, dehumanisation, hateful imagery and the targeting of others by referencing genocide.
Examples of extreme posts left online by X include:
- Posts denying the Holocaust, or mocking victims of the Holocaust;
- Posts glorifying the Nazis, including one describing Hitler as “a hero who will help secure a future for white children”;
- Memes accusing Black people of being harmful to “A quiet, peaceful, functioning society”;
- Posts claiming “Blacks don’t need provoking before becoming violent. It’s in their nature”;
- Posts condemning interracial relationships – specifically, encouraging others to “Stop Race mixing” and “break up with your non-white gf today”
No data scraping
“What this shows is that it takes out any excuses of this being about capacity to detect problematic content,” CCDH’s CEO Imran Ahmed was quoted as telling CNBC. “We’ve done the detection for you, and here’s how you responded, or here’s how we can see that you responded.”
Ahmed added, “Leaving up content like this is a choice, and that invites the question: Are you proud of the choices you’re making?”
Ahmed reportedly said the CCDH did not use data-scraping tools to conduct its latest research and instead “simply went in and had a look.”
X did not respond directly to CNBC’s request for comment, instead pointing to a post from its Safety account saying that “based on the limited information we’ve seen, the CCDH is asserting two false claims – that X did not take action on violative posts and that violative posts reached a lot of people on our platform.”
“We either remove content that violates our policies or label and restrict the reach of certain posts,” the company said in the X post, adding that it would review the report when it is released and “take action as needed.”
Tomorrow the Center for Countering Digital Hate (CCDH) will release a report on how X allegedly moderates content. While we wish the CCDH would have sent us the full report for a fair review, the choice was made to share their purported findings with journalists.
To be clear, we…
— Safety (@Safety) September 13, 2023
The row over hate speech and antisemitic content on X has been ongoing since Elon Musk took over the platform.
X CEO Linda Yaccarino had publicly affirmed the company’s commitment to brand safety for advertisers in August, when she confirmed that the Elon Musk firm was close to breaking even.
But two major brand names recently confirmed they would suspend advertising on X after their ads appeared alongside an account that has shared content celebrating Hitler and the Nazi Party.
Last month Elon Musk threatened to sue the Anti-Defamation League (ADL), citing the impact on Twitter’s advertising revenue.
The New York-based Anti-Defamation League is an international Jewish non-governmental organisation that specialises in civil rights law and combats antisemitism and extremism.
It had alleged that antisemitic posts on X increased sharply after Musk bought the site in October 2022, and that the platform subsequently reinstated extremists and conspiracy theorists while allowing the harassment of former members of its now-dissolved trust and safety council.
But Musk blamed the ADL for Twitter’s advertising woes and threatened to sue the group.
Musk then offered an insight into the advertising problem at Twitter, revealing that advertising sales for the business were down 60 percent.
Meanwhile the Auschwitz Memorial, which preserves the site of the death camp set up by the Nazis during World War Two, criticised X in August for failing to remove an antisemitic post on the site.
The Auschwitz Memorial tweeted a reply it had received from Twitter, which said that after reviewing the account’s blatantly antisemitic tweet, the platform had found the content did not violate its rules.
X/Twitter later suspended the account.