TikTok is seeking to head off concerns about AI-generated content influencing upcoming elections around the world.

The popular short-video app announced that it “is starting to automatically label AI-generated content (AIGC) when it’s uploaded from certain other platforms.”

TikTok already labels AI-generated content made with tools inside the app, but the latest move would apply a label to videos and images generated outside of the service.

AI-generated content

TikTok is not the first social platform to do this. In February, for example, Meta Platforms said it had begun detecting and labelling AI-generated images made by other companies’ AI systems on Facebook, Instagram and Threads.

Meta had already labelled images created by its own AI services, which included invisible watermarks and metadata that could alert other companies that the image was artificially generated.

The moves by Meta and TikTok come amid growing concern over the potential misuse of generative AI systems, which can create fake visual content (deepfakes etc) that appears authentic.

Now, TikTok said it will detect when images or videos uploaded to its platform contain metadata tags indicating the presence of AI-generated content.

TikTok does, however, claim to be the first social media platform to support the new tamper-proof Content Credentials metadata, developed by Adobe last year.
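Content Credentials are embedded in a file as a C2PA manifest, carried inside a JUMBF metadata box, so a platform can spot that a file carries provenance data before doing full verification. The sketch below is a naive illustration of that first detection step only, not TikTok's actual pipeline: it simply scans a file's bytes for the JUMBF box type and the "c2pa" manifest-store label. Real handling would parse the manifest and validate its cryptographic signature, e.g. with the open-source C2PA SDK.

```python
# Naive sketch (an assumption for illustration, not TikTok's method):
# C2PA Content Credentials live in a JUMBF box whose label includes the
# string "c2pa". This merely spots those byte markers; it does NOT
# verify the manifest's signature or prove the content is AI-generated.

def looks_like_c2pa(path: str) -> bool:
    """Return True if the file appears to carry a C2PA manifest marker."""
    with open(path, "rb") as f:
        data = f.read()
    # "jumb" is the JUMBF superbox type; "c2pa" labels the manifest store.
    return b"jumb" in data and b"c2pa" in data
```

A platform-scale detector would of course stream the file and cryptographically verify the manifest rather than trusting the marker alone, since metadata can be stripped or forged without such checks.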

It also said that it is partnering with the Coalition for Content Provenance and Authenticity (C2PA).

“AI enables incredible creative opportunities, but can confuse or mislead viewers if they don’t know content was AI-generated,” it said. “Labelling helps make that context clear – which is why we label AIGC made with TikTok AI effects, and have required creators to label realistic AIGC for over a year. We also built a first-of-its-kind tool to make this easy to do, which over 37 million creators have used since last fall.”

“Over the coming months, we’ll also start attaching Content Credentials to TikTok content, which will remain on content when downloaded,” it added.

“With TikTok’s vast community of creators and users globally, we are thrilled to welcome them to both the C2PA and CAI as they embark on the journey to provide more transparency and authenticity on the platform,” said Dana Rao, General Counsel and Chief Trust Officer at Adobe. “At a time when any digital content can be altered, it is essential to provide ways for the public to discern what is true. Today’s announcement is a critical step towards achieving that outcome.”

AI risks

There have been concerns for a while now about AI-generated content, and its potential to mislead people ahead of important elections around the world.

For example, India’s general election took place in April, and there are also important elections coming up in the United Kingdom and South Africa, as well as European Parliament elections in the summer.

The US presidential election takes place later this year.

To combat potential AI interference, big name tech firms agreed with the Biden Administration in July 2023 to implement voluntary safeguards to manage the risks posed by AI, including the use of watermarks.

In August 2023 Google DeepMind announced it was “launching a beta version of SynthID, a tool for watermarking and identifying AI-generated images.”

SynthID generates an imperceptible digital watermark for AI-generated images.
Image credit: Google DeepMind

In February of this year, twenty of the world’s biggest technology companies, including Amazon, Adobe, Google, Meta, Microsoft, OpenAI, TikTok and X, vowed to take measures against the misuse of artificial intelligence (AI) to disrupt elections around the world this year.

In January a fake robocall in the voice of US president Joe Biden urged voters not to participate in New Hampshire’s primary election.

Taiwan saw fake content circulating on social media ahead of its 13 January election.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
