Facebook is reportedly maintaining a list of elite public figures who are allowed to flout the rules of the online platform.
The WSJ reported that Facebook is giving high-profile users special treatment via its XCheck or “cross check” system, which sometimes routes reviews of posts by well-known users – including celebrities, politicians, sports stars and journalists – into a separate review system.
Under the programme, some users are reportedly “whitelisted” – i.e. not subject to enforcement action – while others are allowed to post material that violates Facebook rules, pending content reviews that often do not take place.
The WSJ reported that people are placed on the XCheck list – which flags their posts for extra scrutiny – provided they meet certain criteria, such as being “newsworthy”, “influential or popular” or “PR risky”.
According to the WSJ, there are currently 5.8 million users on the XCheck list including Donald Trump, US senator Elizabeth Warren and even Mark Zuckerberg. It is not clear if any of these figures were ‘whitelisted.’
The WSJ cited an example of Brazilian football star Neymar, who allegedly responded to a rape accusation in 2019 by posting Facebook and Instagram videos defending himself, which included showing viewers his WhatsApp correspondence with his accuser.
The WhatsApp clips allegedly included the accuser’s name and nude photos of the woman in question.
The WSJ reported that instead of immediately deleting the material, which is Facebook’s procedure for “nonconsensual intimate imagery”, moderators were allegedly blocked for more than a day from removing the video.
An internal review of the Neymar posts found that the video was viewed 56m times on Facebook and Instagram before its removal.
Neymar was reportedly not subjected to the normal Facebook penalty for posting unauthorised nude photos, which is to have the offending account deleted.
The WSJ investigation alleged that the Facebook process known as “whitelisting” means some high-profile accounts are not subject to enforcement at all.
An internal review in 2019 allegedly stated that whitelists “pose numerous legal, compliance, and legitimacy risks for the company and harm to our community”. The review found favouritism to those users to be both widespread and “not publicly defensible”.
The WSJ also reported that the system suffered from enforcement delays that allowed rule-breaking posts to stay up before they were eventually removed.
The WSJ reported that in 2020, posts being reviewed by XCheck were viewed at least 16.4 billion times before being removed.
A Facebook spokesperson was quoted as saying criticisms of how XCheck was used were “fair” but the system had been created to deal with content that could require “more understanding” such as reports from conflict zones.
“A lot of this internal material is outdated information stitched together to create a narrative that glosses over the most important point: Facebook itself identified the issues with cross check and has been working to address them. We’ve made investments, built a dedicated team, and have been redesigning cross check to improve how the system operates,” the spokesperson reportedly said.
The WSJ report has given more ammunition to long-term Facebook critics.
Amnesty International condemned the Facebook process, and warned it was allowing hate and abusive content to spread.
“These shocking allegations show, once again, how Facebook’s platform fuels the spread of harmful and abusive content globally,” said Agnès Callamard, Amnesty International’s secretary general.
“Facebook claims it ‘does not profit from hate’,” said Callamard. “Yet according to company documents provided by an anonymous whistleblower, it has created a system that gives powerful users free rein to harass others, make false claims and incite violence.”
“The message from Facebook is clear – if you’re influential enough, they’ll let you get away with anything,” said Callamard.
“The fact these internal documents seem to contradict the assurances that Facebook has made publicly also calls into question how much we can trust the company and what they tell us,” said Callamard. “Ultimately, we need to fix the root cause of this problem: Facebook’s surveillance advertising business model that relies on aggressive data harvesting and profiling on a vast scale.”
“Facebook’s algorithms amplify misinformation and divisive content and fuel racism because such content is most likely to keep us engaged for longer and increase their advertising revenues,” said Callamard.
“Urgent government regulation is needed to ensure the online world is one in which human rights are effectively protected,” said Callamard. “These disclosures underscore the fact that we simply cannot rely on companies to self-regulate.”