Google Adds AI Disclosure Requirements For Political Ads

Google has added a disclosure requirement for election advertisers using digitally altered or AI-generated content, in its latest effort to combat misinformation ahead of key elections this year.

The company will now require advertisers to select a checkbox in the “altered or synthetic content” section of their campaign settings if they use altered or synthetic content to depict real or realistic-looking people or events, Reuters reported.

Google said it will generate an in-ad disclosure for feeds and shorts on mobile phones and in streams on computers and televisions.

For other formats, advertisers will be required to provide a prominent disclosure themselves, with acceptable language varying according to the ad’s context.

Ad disclosures

In September Google said it would require disclosures for synthetic and digitally altered content in election ads, and would require YouTube creators to disclose when they have used realistic altered or synthetic content, with a label indicating this displayed to viewers.

The company said in March that its AI chatbot Gemini would be restricted from answering election-related queries, to avoid accusations of spreading misinformation.

Facebook and Instagram parent Meta Platforms said last year advertisers would need to disclose if AI or other digital tools were used to alter or create political, social or election-related ads on the two social media platforms.

The company said in May it had found a Facebook and Instagram propaganda network designed to sway public opinion in Israel’s favour over the war in Gaza, which used material that appeared to have been generated with AI as part of its operations.

AI labels

In February Meta said it would begin detecting and labelling AI-generated images made by other companies’ AI systems on Facebook, Instagram and Threads as it works to counter “people and organisations that actively want to deceive people”.

Meta already labels images created by its own AI services, which embed invisible watermarks and metadata that can alert other companies that an image was artificially generated.

The policies come amidst growing concern over the potential misuse of generative AI systems, which can create fake visual content that appears authentic.

AI-generated content was widely distributed during recent elections in India, including synthetic fake images of Bollywood celebrities.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
