Government Considers Revisiting Online Safety Act

Ofcom has urged social media companies to take action against posts inciting violence, as the government said it may revisit the Online Safety Act, which is due to come into full force next year.

The communications regulator called on platforms to address content that depicts “hatred and disorder” and promotes violence and misinformation.

Ofcom’s existing powers enable it to suspend or restrict video-sharing platforms such as YouTube or TikTok that fail to protect the public from “harmful material”. Under the Online Safety Act, the regulator is expected to gain broader powers over social media platforms in general next year.

“There is no need to wait to make your sites and apps safer for users,” said Ofcom safety director Gill Whitehead.


Online misinformation

Policing minister Dame Diana Johnson said tech firms “have an obligation now” to “deal with” material that incites violence.

Speaking on BBC Radio 4’s Today programme, Johnson said the government was considering revisiting the upcoming legislation.

“The events of the last few days have meant that we need to look very carefully at what more we can do,” she said.

She said a possible plan to ban convicted rioters from football matches was “being looked at”.

The government said last week that social media platforms “clearly need to do far more” after a list supposedly containing the names and addresses of immigration lawyers was spread online, apparently originating from Telegram.

Azzurra Moores of fact-checking organisation Full Fact said online misinformation was a “clear and present danger spilling across into unrest on UK streets in real time” and urged Ofcom and the government to take “bolder, stronger action”.

‘Not fit for purpose’

London mayor Sadiq Khan said last week that the Online Safety Act was “not fit for purpose” because of the speed with which misinformation spreads on social media, and urged ministers to act “very, very quickly” to review it.

Platforms X and Telegram have been notably slow to act on harmful material, with X owner Elon Musk repeatedly relaying posts containing misinformation to his 193 million followers on the service and criticising the government for cracking down on hate speech.

Telegram said its moderators were “actively monitoring the situation and are removing channels and posts containing calls to violence”, adding that “calls to violence” are forbidden in its terms of service.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
