Tech Giants Vow To Combat AI Misuse In Election Year

Twenty of the world’s biggest technology companies, including Amazon, Adobe, Google, Meta, Microsoft, OpenAI, TikTok and X, have vowed to take measures against the misuse of artificial intelligence (AI) to disrupt elections around the world this year.

AI has already been deployed to create false content aimed at manipulating voters, in a year when around 4 billion people in 40 countries are expected to participate in elections.

Generative AI tools that are increasingly accessible and powerful have surged in popularity over the past year, following the debut of OpenAI’s ChatGPT in late 2022.

Such tools can create photorealistic images and videos, or convincing written content, from text prompts.

Sam Altman. Image credit: OpenAI

AI misuse

OpenAI last week showed samples created by its upcoming text-to-video tool Sora, which is not currently available to the public and is being vetted by safety experts.

Experts fear such tools could be used to manipulate elections by creating false information around candidates.

In January a fake robocall in the voice of US president Joe Biden urged voters not to participate in New Hampshire’s primary election.

Taiwan saw fake content circulating on social media ahead of its 13 January election.

Voluntary measures

At the Munich Security Conference on Friday the tech firms announced their “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a voluntary agreement with eight specific commitments to deploy technology against harmful AI content.

Kent Walker, president of global affairs at Google, said AI misuse threatens not only election integrity, but also the “generational opportunity” presented by positive uses of the technology.

“We can’t let digital abuse threaten AI’s generational opportunity to improve our economies, create new jobs, and drive progress in health and science,” he said.

Lisa Gilbert, executive vice president of non-profit Public Citizen, which has been advocating for legislation around political and explicit AI-generated content, said voluntary measures were “not enough”.

‘Not enough’

“The AI companies must commit to hold back technology – especially text-to-video – that presents major election risks until there are substantial and adequate safeguards in place to help us avert many potential problems,” she said.

US senators Mark Warner and Lindsey Graham said in a joint statement the move was a “constructive step forward”.

“Time will tell how effective these steps are and if further action is needed,” they said.

The accord’s initial signatories are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
