Sam Altman Ouster Adds Fuel To EU AI Regulation Debate

The sudden ouster of former OpenAI chief executive Sam Altman has given a renewed sense of urgency to the EU’s negotiations over its draft artificial intelligence (AI) regulations, which have stalled over how powerful foundation models such as those developed by OpenAI should be handled.

Negotiations around the upcoming AI Act have been stalled since mid-November, when EU Parliament negotiators walked out of a meeting with government representatives from the EU Council and European Commission in protest against the position of France, Germany and Italy that foundation models should be left to self-regulate.

Those three countries formalised that position on Monday in a joint paper shared with fellow EU governments, which backs “mandatory self-regulation through codes of conduct” for foundation models rather than formal regulation, according to reports from multiple media outlets.

Image credit: Levart Photographer/Unsplash

Self-regulation

“Together we underline that the AI Act regulates the application of AI and not the technology as such,” the paper says.

“The inherent risks lie in the application of AI systems rather than in the technology itself.”

The reliance on self-regulation is intended to ensure that Europe’s own technology efforts in the hyper-competitive field are not unduly hampered by red tape, the paper said.

The three countries said in the paper that Europe needs a “regulatory framework which fosters innovation and competition, so that European players can emerge and carry our voice and values in the global race of AI”.

Image credit: European Commission

‘Declaration of war’

For its part, the European Parliament wants regulation applied to foundation models as well as less powerful forms of AI, with an unnamed member of the parliament’s negotiating team telling Politico the paper was “a declaration of war”.

Altman’s abrupt sacking by the OpenAI board has led to renewed calls from lawmakers for formal regulation of foundation models.

The situation at OpenAI shows that “we cannot rely on voluntary agreements brokered by visionary leaders”, Brando Benifei, one of the two European Parliament negotiators, told Reuters.

“Regulation, especially when dealing with the most powerful AI models, needs to be sound, transparent and enforceable to protect our society,” he said.

‘Lack of transparency’

Alexandra van Huffelen, Dutch minister for digitalisation, said the events at OpenAI underscored the “lack of transparency” in the industry and its “dependence on a few influential companies”, which “clearly underlines the necessity of regulation”.

Gary Marcus, an AI expert at New York University, wrote on X, formerly Twitter: “We can’t really trust the companies to self-regulate AI where even their own internal governance can be deeply conflicted.

“Please don’t gut the EU AI Act; we need it now more than ever.”

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
