FTC Launches Probe Into ChatGPT False Results


FTC demands records from OpenAI over the firm’s use of data and whether false information from ChatGPT can harm individuals’ reputations

The US trade regulator has opened an investigation into the way ChatGPT parent company OpenAI uses data and whether false information from its products puts personal reputations at risk.

The FTC sent a formal 20-page document to OpenAI asking for records related to such risks, as lawmakers in the US and around the world seek to formulate laws governing AI amidst its exploding popularity this year.

New AI-dedicated rules are thought to be months away at a minimum in the US, and in the meantime the FTC has been warning AI companies that existing laws apply to them.

The regulator asked OpenAI to provide descriptions of complaints it had received of its products making false or misleading statements about people, saying in the document it is investigating practices that result in “reputational harm” to consumers.


Data privacy

It also asked for information relating to the way OpenAI obtains data to train its AI and provide it with information.

AIs must be trained on large amounts of data, which enables them to simulate human responses and interactions.

But the use of that data is a controversial issue, with companies including Twitter and Reddit seeking to take action against AI firms.

Italy temporarily banned ChatGPT in April over concerns that private data ingested by the chatbot could later appear in responses delivered to other users. Google, meanwhile, had to postpone the launch of its Bard chatbot after receiving requests for privacy assessments from the Irish Data Protection Commission.

‘Safety research’

The probe was first reported by the Washington Post, which published a copy of the FTC document.

OpenAI chief executive Sam Altman said on Twitter that the company had spent years carrying out “safety research” and months making its AI technology “safer and more aligned before releasing it”.

“We protect user privacy and design our systems to learn about the world, not private individuals,” he wrote.

In another post he said it is important to the company that its technology is “safe and pro-consumer” and said the firm would work with the FTC.

Regulatory pressure

Under chair Lina Khan the FTC has taken an active role in policing large tech firms, but has faced political criticism and legal setbacks.

A federal judge last week rejected the regulator’s attempt to block Microsoft’s $69 billion (£53bn) takeover of game company Activision Blizzard.