OpenAI chief executive Sam Altman to tell US Congress that ‘regulation of AI is essential’, but companies must have flexibility to adapt
OpenAI chief executive Sam Altman on Tuesday is to call for regulation of AI before the US Congress – with the caveat that any such regulation should remain flexible.
He is to tell lawmakers that “the regulation of AI is essential”, according to prepared remarks released ahead of his scheduled testimony to the Senate judiciary subcommittee on privacy, technology and the law.
Altman is to say he is “eager to help policymakers as they determine how to facilitate regulation that balances incentivising safety while ensuring that people are able to access the technology’s benefits”.
In his testimony he plans to recommend that AI companies adhere to an “appropriate set of safety requirements, including internal and external testing prior to release” and licensing or registration conditions for AI models.
But he plans to emphasise that such safety requirements should “have a governance regime flexible enough to adapt to new technological developments”.
Legislators around the world have increasingly focused on AI in the six months since Microsoft-backed OpenAI’s ChatGPT was released to the public, generating an intense wave of public interest.
Since then OpenAI has released an even more advanced large language model (LLM) called GPT-4, while Microsoft rivals including Google and several Chinese tech giants have pushed chatbots out to the public.
In the meantime AI critics including Elon Musk and Steve Wozniak, along with more than 1,000 other researchers and executives, in March signed a letter calling for a six-month pause on development of AI models more advanced than GPT-4 in order to allow society to catch up.
Generative AI tools have attracted interest and concern for their ability to create human-like text and images, with potentially far-reaching implications for jobs and economic competition.
The tools also require vast amounts of text and imagery upon which they can be trained – including data from their users – and the rules around their collection and reuse of this material remain unclear.
US politicians, including President Joe Biden, have indicated they are planning rules for AI, but are also wary of stifling domestic innovation at the expense of firms’ ability to compete with China.
Last week EU lawmakers agreed tough rules over the use of AI, including restrictions on ChatGPT-style chatbots.
Earlier this month the US Federal Trade Commission said it was “focusing intensely on how companies may choose to use AI technology”, while the UK’s Competition and Markets Authority launched a competition review into the market.
“Artificial intelligence will be transformative in ways we can’t even imagine, with implications for Americans’ elections, jobs, and security,” said senator Josh Hawley ahead of the hearing.
Senator Richard Blumenthal, chair of the subcommittee, said: “Artificial intelligence urgently needs rules and safeguards to address its immense promise and pitfalls.
“This hearing begins our subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful technology . . . as we explore sensible standards and principles to help us navigate this uncharted territory.”