OpenAI Hit By Austrian Complaint Over ChatGPT ‘False Data’

OpenAI should be held accountable under European Union data protection regulations for false information repeatedly supplied about individuals by the company's ChatGPT artificial intelligence-powered chatbot, privacy rights group Noyb has said in a formal complaint to the Austrian data regulator.

The organisation said the well-known tendency of AI large language models (LLMs) to generate false information, known as “hallucination”, conflicts with the EU’s General Data Protection Regulation (GDPR), which requires personal data to be accurate.

The regulation also requires organisations to respond to requests to show what data they hold on individuals or to delete information, but OpenAI said it was unable to do either, Noyb said.

“Simply making up data about individuals is not an option,” the group said in a statement.

Sam Altman. Image credit: OpenAI

False data

It said the complainant in its case, a public figure, found ChatGPT repeatedly supplied incorrect information when asked about his birthday, rather than telling users that it didn’t have the necessary data.

OpenAI says ChatGPT simply generates “responses to user requests by predicting the next most likely words that might appear in response to each prompt” and that “factual accuracy” remains an “area of active research”.

The company told Noyb (which stands for None Of Your Business) that it was not possible to correct data and that it could not provide information about the data processed on an individual, its sources or recipients, all of which are requirements under the GDPR.

Noyb said OpenAI told it that requests for information on individuals could be filtered or blocked, but this would result in all information about the complainant being blocked.

“It seems that with each ‘innovation’, another group of companies thinks that its products don’t have to comply with the law,” said Noyb data protection lawyer Maartje de Graaf.

Access requirement

Noyb said it is asking the Austrian data protection authority to investigate OpenAI's data processing and the measures taken to ensure the accuracy of personal data processed in the context of OpenAI's LLMs, and to order OpenAI to comply with the complainant's access request and issue a fine to ensure future compliance.

The Italian data protection agency issued a temporary ban on ChatGPT last year over data processing concerns and in January told the company its business practices may violate the GDPR.

At the time OpenAI said it believes “our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy”.

The company said it “actively” works to reduce personal data in training systems such as ChatGPT, “which also rejects requests for private or sensitive information about people”.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications
