Italy’s data regulator has blocked ChatGPT in the country and launched an investigation into the Microsoft-backed chatbot’s use of personal data.

The Garante regulator said there was concern about the massive amounts of data collected by ChatGPT from its users.

It said there was no legal basis to justify “the mass collection and storage of personal data for the purpose of ‘training’ the algorithms underlying the operation of the platform”.

Italy is the first Western country to ban OpenAI’s ChatGPT, which is blocked in countries including China, North Korea, Iran and Russia.

Image credit: Tara Winstead/Pexels

Data breach

The regulator noted a 20 March software bug in the chatbot that exposed portions of users’ conversations and payment information to other users for about nine hours.

The breach confirmed previous warnings from industry watchers that sensitive information entered into the chatbot might be at risk.

The Garante added that since ChatGPT has no age-verification mechanism it “exposes minors to absolutely unsuitable answers compared to their degree of development and awareness”.

It said OpenAI had 20 days to respond to its concerns or it would face a fine of 20 million euros (£18m) or up to 4 percent of its annual revenues.

AI regulation

OpenAI said it had blocked ChatGPT in Italy, adding that it believes it complies with GDPR and other data protection laws.

It said it worked to reduce the personal data it uses in training AI systems.

“We also believe that AI regulation is necessary — so we look forward to working closely with the Garante and educating them on how our systems are built and used,” the company said.

Italy in February banned another AI chatbot powered by the same system behind ChatGPT.

Compliance risk

Security firm Cyberhaven in February estimated that sensitive data makes up 11 percent of what company employees enter into ChatGPT, creating compliance risks for firms that use it.

UK data breach law firm Hayes Connor said that because Large Language Models (LLMs) of the kind that power ChatGPT are in their “infancy stages”, companies using them are in “uncharted territory in terms of GDPR compliance”.

“Businesses that use ChatGPT without proper training and caution may unknowingly expose themselves to GDPR data breaches, resulting in significant fines, reputational damage, and legal action taken against them,” said Hayes Connor legal director Richard Forrest.

“As such, usage as a workplace tool without sufficient training and regulatory measures is ill-advised.”

Matthew Broersma

Matt Broersma is a long-standing freelance technology journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
