Online Safety Bill Tweak To Combat Russian Misinformation


Foreign interference and misinformation to be designated a priority offence under Online Safety Bill, the government says

The UK government is to amend the recently introduced Online Safety Bill, so it has provisions to combat disinformation campaigns from Russia and other hostile nation states.

The government announced that social media platforms will be required to proactively look for and remove disinformation from foreign state actors which harms the UK.

Content moderation will therefore become an increasingly important function for social networking firms, after the government warned that those failing to tackle online interference by rogue states face huge fines or being blocked in the UK.


Misinformation campaigns

The changes to the Online Safety Bill, and the onus they place on social media platforms to proactively tackle Russian and other state-sponsored disinformation, particularly following Russia’s brutal invasion of Ukraine, will not make for easy reading for Mark Zuckerberg and co.

The government said it will table an amendment to link the National Security Bill with the Online Safety Bill – “strengthening this landmark and pioneering internet legislation to make the UK the safest place in the world to go online.”

And it seems that a new Foreign Interference Offence created by the National Security Bill will be added to the list of priority offences in the Online Safety Bill.

What all this means is that social media platforms, search engines and other apps and websites allowing people to post their own content will have a legal duty to take proactive, preventative action to identify and minimise people’s exposure to state-sponsored or state-linked disinformation aimed at interfering with the UK.

This will be a very tall order considering the sheer scale of content people are now putting out online.

Firms will be required to tackle material from fake accounts set up by individuals or groups acting on behalf of foreign states to influence democratic or legal processes, such as elections and court proceedings, or spread hacked information to undermine democratic institutions.

Strengthening protections

“The invasion of Ukraine has yet again shown how readily Russia can and will weaponise social media to spread disinformation and lies about its barbaric actions, often targeting the very victims of its aggression,” noted digital secretary Nadine Dorries.

“We cannot allow foreign states or their puppets to use the internet to conduct hostile online warfare unimpeded,” said Dorries. “That’s why we are strengthening our new internet safety protections to make sure social media firms identify and root out state-backed disinformation.”

“Online information operations are now a core part of state threats activity,” added security minister Damian Hinds. “The aim can be variously to spread untruths, confuse, undermine confidence in democracy, or sow division in society.”

“Disinformation is often seeded by multiple fake personas, with the aim of getting real users, unwittingly, then to ‘share’ it,” said Hinds. “We need the big online platforms to do more to identify and disrupt this sort of coordinated inauthentic behaviour. That is what this proposed change in the law is about.”

The government said that online platforms “will need to do risk assessments for content which is illegal under the Foreign Interference Offence and put in place proportionate systems and processes to mitigate the possibility of users encountering this content.”

This could include measures such as making it more difficult to create large-scale fake accounts, or tackling the use of bots in malicious disinformation campaigns.

OSB amendments

Since the first draft was published back in December, the OSB faced pre-legislative scrutiny by MPs and peers.

This resulted in a number of additions and amendments to the Bill.

And what will be concerning for the likes of Facebook, Twitter and Google is that a new legal duty in the Bill will require them to prevent paid-for fraudulent adverts appearing on their services.

Under a previous draft of the Online Safety Bill, those platforms would have had a “duty of care” to protect users from fraud by other users.

The government has previously said the change will improve protections for internet users from the potentially devastating impact of fake ads, including where criminals impersonate celebrities or companies to steal people’s personal data, peddle dodgy financial investments or break into bank accounts.

And now the government is adding to the legal duties of social media platforms, which will be required to police misinformation campaigns by rogue nations.