Creating Deepfake Porn Without Consent To Become A Crime

People who create sexually explicit ‘deepfakes’ of adults will face prosecution under a new law in England and Wales

The UK Government’s Ministry of Justice is to crack down on the creation of deepfake porn images of adults under a new law.

The government announced that the new offence will apply to deepfake images of adults, because the law already covers this behaviour where the image is of a child (under the age of 18).

It comes as the latest report from iProov revealed that the rapid advancement and availability of generative AI tools to bad actors – namely deepfakes – have created an urgent, rising threat to governments and security-conscious organisations worldwide.


New offence

This was evidenced in February this year, when experts from the AI industry, as well as tech executives, warned in an open letter about the dangers of AI deepfakes and called for more regulation.

The UK government therefore said it will be a new offence to make a sexually explicit ‘deepfake’ image.

It added that offenders will face prosecution and an unlimited fine, and that the measure is part of its efforts to better protect women and girls.

And the government warned that if the deepfake image is then shared more widely, offenders could be sent to jail.

The new law will mean that if someone creates a sexually explicit deepfake, even if they have no intent to share it but purely want to cause alarm, humiliation or distress to the victim, they will be committing a criminal offence.

It will also strengthen existing offences: if a person both creates this kind of image and then shares it, the CPS could charge them with two offences, potentially leading to an increased sentence.

The government said reforms in the Online Safety Act had already criminalised the sharing of ‘deepfake’ intimate images for the first time.

Criminal Justice Bill

But this new offence, which will be introduced through an amendment to the Criminal Justice Bill, will mean anyone who makes these sexually explicit deepfake images of adults maliciously and without consent will face the consequences of their actions.

“The creation of deepfake sexual images is despicable and completely unacceptable irrespective of whether the image is shared,” said Minister for Victims and Safeguarding, Laura Farris.

“It is another example of ways in which certain people seek to degrade and dehumanise others – especially women,” said Farris. “And it has the capacity to cause catastrophic consequences if the material is shared more widely. This government will not tolerate it.”

“This new offence sends a crystal clear message that making this material is immoral, often misogynistic, and a crime,” said Farris.

Deepfake problem

The problem posed by deepfakes has been known for a while now.

In early 2020 Facebook announced it would remove deepfake and other manipulated videos from its platform, but only if they met certain criteria.

Then in September 2020, Microsoft released a software tool that could identify deepfake photos and videos in an effort to combat disinformation.

The risks associated with deepfake videos were demonstrated in March 2022, when both Facebook and YouTube removed a deepfake video of Ukrainian President Volodymyr Zelensky, in which he appeared to tell Ukrainians to put down their weapons as the country resisted Russia’s illegal invasion.

Deepfake cases have also involved Western political leaders, after images of former US Presidents Barack Obama and Donald Trump were used in various misleading videos.

More recently, in January 2024, US authorities began an investigation after a robocall received by a number of voters, which seemingly used artificial intelligence to mimic Joe Biden’s voice, was used to discourage people from voting in a US primary election.

Also in January AI-generated explicit images of the singer Taylor Swift were viewed millions of times online.

Last July the Biden administration announced that a number of big-name players in the artificial intelligence market had agreed to voluntary AI safeguards.

Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI made a number of commitments, one of the most notable of which concerns the use of watermarks on AI-generated content such as text, images, audio and video, amid concern that deepfake content can be utilised for fraudulent and other criminal purposes.

It comes after OpenAI recently launched a new tool that can create AI-generated short-form videos simply from text instructions.