The head of one of the leading providers of artificial intelligence (AI) systems is to testify before US lawmakers next week.

CNBC reported that OpenAI CEO Sam Altman will testify before the US Congress for the first time next week, and the hearing will also include IBM VP and chief privacy and trust officer Christina Montgomery, as well as New York University Professor Emeritus Gary Marcus.

The move comes as lawmakers around the world scramble to develop appropriate regulations and rules to cope with the sudden uptake of AI technology.

Image credit: Andrew Neel/Pexels

Congress testimony

OpenAI, of course, is the firm that created the generative AI chatbot known as ChatGPT.

Sam Altman became CEO of OpenAI in 2019, after the startup was initially funded by Altman, Greg Brockman, Elon Musk, Jessica Livingston, Peter Thiel, Microsoft, Amazon Web Services, Infosys, and YC Research.

Elon Musk resigned from the OpenAI board in 2018, citing a potential conflict of interest due to Tesla developing AI self-driving technology.

Indeed, Musk no longer owns a stake in OpenAI, and since then he has been critical of the company, warning that it has strayed from its initial goals.

Sam Altman will testify before the Senate Judiciary subcommittee on Privacy, Technology and the Law at 10am ET next Tuesday.

The hearing, entitled “Oversight of AI: Rules for Artificial Intelligence”, will reportedly take place the day after Altman joins US lawmakers for a dinner following votes on Monday.

AI regulation

Last week Altman joined other tech CEOs for a meeting at the White House with Vice President Kamala Harris to discuss risks associated with AI.

That meeting came after the US government in April began seeking public input on potential rules to govern the use of artificial intelligence (AI) in the years ahead.

In March the US Chamber of Commerce called for the regulation of artificial intelligence technology – a surprising move, considering the business lobby group's traditional anti-regulatory stance.

The US lobby group was concerned that AI technology could become a national security risk, and that its arrival could hurt business growth in the years ahead.

Soon after that, AI researchers and tech industry figures, including Elon Musk, signed an open letter warning that AI systems can pose “profound risks to society and humanity”.

The open letter, also signed by Apple co-founder Steve Wozniak, called for a “pause” on the development of cutting-edge AI systems for at least six months to “give society a chance to adapt”.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
