Meta says that from next year advertisers will have to disclose when AI tools are used in political, social or election-related advertisements
Meta Platforms has announced that, going forward, advertisements on its platforms must publicly disclose when they contain AI-created or AI-altered content.
Reuters reported that Meta has confirmed that, from 2024, advertisers will have to disclose when artificial intelligence (AI) or other digital methods are used to alter or create political, social or election-related advertisements on Facebook and Instagram.
The Meta move comes after the world’s first AI Safety Summit was held at Bletchley Park last week, which resulted in the first international declaration on AI, with signatories including the UK, US, EU, Australia and China all agreeing that artificial intelligence poses a potentially catastrophic risk to humanity.
According to the Reuters report, Meta said it will require advertisers in 2024 to disclose whether their altered or created adverts portray real people doing or saying something they did not, or digitally produce a realistic-looking person who does not exist.
Reuters reported that Meta will also ask advertisers to disclose if these ads show events that did not take place, alter footage of a real event, or depict a real event without using the true image, video or audio recording of it.
Meta is the world’s second-largest platform for digital advertising, and it already blocks its user-facing Meta AI virtual assistant from creating photo-realistic images of public figures.
The Meta move also comes after Google announced the launch of image-customising generative AI ad tools last week.
The search engine giant also reportedly said it planned to keep politics out of its products by blocking a list of “political keywords” from being used as prompts.
Meanwhile, the issue of AI being used to create content that falsely depicts candidates in political ads has already been raised by US lawmakers.
In July the Biden administration announced that a number of big-name players in the artificial intelligence sector had agreed to voluntary safeguards against the risks posed by AI.
The White House said it had secured voluntary commitments that underscore “safety, security, and trust and mark a critical step toward developing responsible AI” from the likes of Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
Last week President Biden signed a wide-ranging executive order on AI that, amongst other measures, obliges companies developing the most powerful models to submit regular security reports to the federal government.