Meta To Begin Labelling Other Companies’ AI Images

Meta is to begin detecting and labelling AI-generated images made by other companies' AI systems on Facebook, Instagram and Threads, as it works to counter "people and organisations that actively want to deceive people".

The move comes amidst growing concern over the potential for misuse of generative AI systems that can create fake visual content that appears authentic.

Meta’s independent oversight board on Monday urged the company to begin labelling doctored videos, and while Tuesday’s announcement is not directly related to that decision, it shows that the company was already moving in that direction, Meta’s president of global affairs Nick Clegg said in an interview with Reuters.

Meta already labels images created by its own AI services, which include invisible watermarks and metadata that can alert other companies that the image was artificially generated.

Watermarks

The firm is now developing tools to identify such watermarks and metadata in images made by generative AI from other companies, including Adobe, Google, Microsoft, Midjourney, OpenAI and Shutterstock.

“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Clegg wrote in a Tuesday blog post announcing the policy.

“People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI.”

He said the feature was still in development and would be introduced in the coming months.

Election interference

Clegg noted that "a number of important elections are taking place around the world" this year, spurring concerns over AI being used to interfere in electoral processes.

The policy is limited to images, as tools for detecting AI-generated video and audio will take longer to develop, Clegg said.

In the meantime Meta is to begin requiring users to label AI-generated audio and video, he said.

He said there is no prospect of being able to flag text generated by services such as ChatGPT. “That ship has sailed,” he told Reuters.

Deception

Clegg said Meta intends to place more prominent labels on “digitally created or altered” images, video or audio that “creates a particularly high risk of materially deceiving the public on a matter of importance”.

An “especially important” endeavour is developing means of detecting AI content where watermarks are absent or have been removed, Clegg said.

“People and organisations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it. Across our industry and society more generally, we’ll need to keep looking for ways to stay one step ahead,” he wrote.

Matthew Broersma

Matt Broersma is a long-standing freelance technology journalist, who has worked for Ziff-Davis, ZDNet and other leading publications
