Meta Restricts AI Tools For Political Ads In Deepfake Clampdown

Meta Platforms has announced that, going forward, advertisers will have to publicly disclose when their adverts use AI-created or altered content.

Reuters reported that Meta has confirmed that from 2024, advertisers will have to disclose when artificial intelligence (AI) or other digital methods are used to alter or create political, social or election-related advertisements on Facebook and Instagram.

The Meta move comes after the world’s first AI Safety Summit was held at Bletchley Park last week. The summit resulted in the first international declaration on AI, in which the UK, US, EU, Australia, China and others agreed that artificial intelligence poses a potentially catastrophic risk to humanity.

Bletchley Park

AI disclosures

That AI Safety Summit came amid ongoing concern from experts, regulators and governments over the potential misuse of AI technologies in the years ahead.

According to the Reuters report, Meta said that from 2024 it would require advertisers to disclose if their altered or created adverts portray real people as doing or saying something they did not, or if they digitally produce a realistic-looking person who does not exist.

Reuters reported that Meta will also require advertisers to disclose if these ads show events that did not take place, alter footage of a real event, or depict a real event without using a true image, video or audio recording of that event.

Meta is the world’s second-largest digital advertising platform, and it already blocks its user-facing Meta AI virtual assistant from creating photo-realistic images of public figures.

The Meta move comes after Google announced the launch of image-customising generative AI ads tools last week.

The search engine giant also reportedly said it planned to keep politics out of its products by blocking a list of “political keywords” from being used as prompts.

Deepfake concerns

Meanwhile the issue of AI being used to create content that falsely depicts candidates in political ads has already been raised by US lawmakers.


In July the Biden administration announced that a number of big-name players in the artificial intelligence sector had agreed to voluntary safeguards against the risks posed by AI.

The White House said it had secured voluntary commitments that underscore “safety, security, and trust and mark a critical step toward developing responsible AI” from the likes of Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

Last week President Biden signed a wide-ranging executive order on AI that, amongst other measures, obliges companies developing the most powerful models to submit regular security reports to the federal government.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
