TikTok Sister Site Douyin Mandates Labels For AI Content

TikTok’s Chinese sister service Douyin has published new rules for creators requiring them to clearly label content generated by artificial intelligence (AI) tools, as China and other countries prepare AI regulatory frameworks.

Critics have warned that increasingly sophisticated generative AI systems, such as ChatGPT, are capable of creating content that appears authentic but can be used to spread misinformation.

Douyin’s new rules published on Tuesday say clear AI labels will “help other users differentiate between what’s virtual and what’s real”, according to local media reports.

The rules say creators will be held responsible for the consequences of posting AI-generated content.


AI rules

Douyin and TikTok are both operated by Beijing-based ByteDance.

The firm said the rules are based on new regulations called the Administrative Provisions on Deep Synthesis for Internet Information Service that went into effect on 10 January.

The rules are seen as imposing obligations on the providers and users of “deep synthesis” tools, including technology that produces synthetic content such as deepfakes.

A February blog post by international law firm Allen & Overy noted that such technology could be “used by criminals to produce, copy and disseminate illegal or false information or assume other people’s identities to commit fraud”.

The regulation covers technologies that generate or edit text content, video and audio, as well as those that produce virtual scenes or 3D reconstructions.

Deepfakes

Digital avatars are permitted on Douyin, but they must be registered with the platform and users are required to verify their real names.

The company said on Tuesday that those who use generative AI to create content that infringes on other people’s portrait rights or copyright, or that contains falsified information, will be “severely penalised”, the South China Morning Post reported.

Internet regulator the Cyberspace Administration of China (CAC) last month proposed rules covering generative AI services in the country that aim to prevent discriminatory content, the spread of false information, and harm to personal privacy or intellectual property.

Under a 2018 regulation, such tools must pass a CAC security assessment before being made available to the public.

AI risk

The CAC is soliciting public feedback for the new rules until 10 May.

Apple co-founder Steve Wozniak warned this week that AI could be used to aid scammers by making them appear more convincing and said AI-generated content should be labelled as such.

He said the humans who publish AI-generated content should be held responsible for their publications and called for regulation of the sector.

US president Joe Biden last week met with the chief executives of Google, Microsoft and two major AI companies as the US government seeks to ensure artificial intelligence products are developed in a safe and secure way.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications
