X Blocks Taylor Swift Searches After Spread Of Fake AI Images

Social media platform X, formerly Twitter, has blocked searches related to Taylor Swift after AI-generated explicit images of the singer were viewed tens of millions of times last week.

X head of business operations Joe Benarroch said in a statement to Silicon UK that the move was “a temporary action” undertaken with “an abundance of caution as we prioritise safety on this issue”.

The White House on Friday called the spread of the AI-generated images “alarming” and called for legislation controlling such content.

Over the weekend, searches such as “Taylor Swift” or “Taylor AI” returned the message: “Something went wrong. Try reloading.”

Content moderation

X was purchased in October 2022 by entrepreneur Elon Musk, who has reduced its content moderation resources to a bare minimum and loosened moderation rules, citing his free-speech ideals.

This has raised questions over whether the platform is capable of restricting the spread of illegal content, misinformation or hate speech.

One set of explicit AI-generated fake images of Swift was viewed 27 million times in 19 hours before the account that posted them was suspended, NBC News reported.

Another of the fake images was reportedly viewed 47 million times on X before being taken down.

X’s official safety account said in a post on Friday that it has a “zero-tolerance policy” toward such content and has teams “actively removing all identified images and taking appropriate actions against the accounts responsible for posting them”.

‘Lax enforcement’

The images are not deepfakes as such – in which someone’s face is digitally added to real footage of another person – but were created using a generative AI system.

Generative AI, which has become massively popular over the past year due to the success of OpenAI’s ChatGPT, produces text or images based on a user’s prompts, after being “trained” on vast amounts of raw material.

Governments have expressed concern that such systems could be used to disrupt numerous elections taking place around the world this year by creating realistic misinformation.

White House press secretary Karine Jean-Pierre, asked about the matter at a Friday press briefing, criticised “lax enforcement” and said there should be legislation targeting the misuse of AI technology on social media.

Rise of generative AI

She said platforms also “have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people”.

Several US politicians also called for new laws following last week’s controversy.

With “advancements in AI, creating deepfakes is easier & cheaper”, wrote Democratic Rep Yvette D Clarke on X, while Republican Congressman Tom Kean Jr wrote that it is “clear that AI technology is advancing faster than the necessary guardrails”.

Robocalls impersonating US President Joe Biden deployed last week in an effort to disrupt New Hampshire primaries are thought to have been created using AI.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
