Open Letter Urges AI Deepfake Regulation

AI godfather and others sign open letter warning of risk to society from AI deepfakes, and urge more regulation

Experts from the artificial intelligence (AI) industry, as well as tech executives, have warned about the dangers of AI deepfakes.

The letter, entitled “Disrupting the Deepfake Supply Chain”, calls for greater regulation of AI deepfakes. It was signed by over 440 people, including Yoshua Bengio, one of the godfathers of AI, along with other academics, former Facebook whistleblower Frances Haugen, a research scientist at Google DeepMind, a researcher from OpenAI, and Skype co-founder Jaan Tallinn.

Dr Geoffrey Hinton, Yoshua Bengio and Yann LeCun are considered by many to be the three godfathers of artificial intelligence (AI), due to their many years of work developing AI and deep learning.


Open letter

“Today, deepfakes often involve sexual imagery, fraud, or political disinformation,” states the letter. “Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed for the functioning and integrity of our digital infrastructure.”

“Deepfakes are a growing threat to society, and governments must impose obligations throughout the supply chain to stop the proliferation of deepfakes,” the letter states.

The letter calls for new laws, including:

  1. Fully criminalize deepfake child pornography, even when only fictional children are depicted;
  2. Establish criminal penalties for anyone who knowingly creates or knowingly facilitates the spread of harmful deepfakes; and
  3. Require software developers and distributors to prevent their audio and visual products from creating harmful deepfakes, and to be held liable if their preventive measures are too easily circumvented.

If designed wisely, such laws could nurture socially responsible businesses, and would not need to be excessively burdensome.

Deepfakes are realistic but fabricated images, audio and video created by AI algorithms. Recent developments have made them increasingly indistinguishable from authentic, human-created content.

The letter comes after OpenAI last week launched a new tool that can create short-form videos from simple text instructions, which could interest content creators but could also have a significant impact on the digital entertainment market.

Deepfake problem

The problem posed by deepfakes has been known for a while now.

In early 2020 Facebook announced it would remove deepfake and other manipulated videos from its platform, but only if they met certain criteria.

Then in September 2020, Microsoft released a software tool that could identify deepfake photos and videos in an effort to combat disinformation.

The risks associated with deepfake videos were demonstrated in March 2022, when both Facebook and YouTube removed a deepfake video of Ukrainian President Volodymyr Zelensky, in which he appeared to tell Ukrainians to lay down their weapons as the country resisted Russia’s illegal invasion.

Deepfake cases have also involved Western political leaders, after images of former US Presidents Barack Obama and Donald Trump were used in various misleading videos.

More recently, in January 2024, US authorities began an investigation after a number of voters received a robocall that seemingly used artificial intelligence to mimic Joe Biden’s voice, in an attempt to discourage people from voting in a primary election.

Also last month AI-generated explicit images of the singer Taylor Swift were viewed millions of times online.

Last July the Biden administration announced that a number of big-name players in the artificial intelligence market had agreed to voluntary AI safeguards.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI made a number of commitments. One of the most notable concerns the use of watermarks on AI-generated content such as text, images, audio and video, amid concern that deepfake content can be utilised for fraudulent and other criminal purposes.