Human-Like AI ‘May Face Ban’, Says Government Adviser

Powerful general-purpose artificial intelligence (AI) systems may need to be banned, a government adviser has said.

Marc Warner, chief executive of Faculty AI and member of the government’s AI Council advisory committee, said artificial general intelligence (AGI) systems – intended to have intelligence similar to that of a human – are of real concern.

He told the BBC that “sensible decisions” on the technology would be required in the next six months to a year.

The technology requires strong transparency and audit requirements, as well as more built-in safety mechanisms, he said.

‘Caution’

Warner said humanity holds its position of primacy on this planet chiefly because of its intelligence.

“If we create objects that are as smart or smarter than us, there is nobody in the world that can give a good scientific justification of why that should be safe,” he said.

He said the technology isn’t necessarily “terrible” but “there is risk” requiring “caution”.

Warner suggested that “at the very least” there should be “strong limits” on the amount of computing power that can be devoted to systems designed to compete with humans at a wide range of tasks.

Regulatory limits

“There is a strong argument that at some point, we may decide that enough is enough and we’re just going to ban algorithms above a certain complexity or a certain amount of compute,” he added.

“But obviously, that is a decision that needs to be taken by governments and not by technology companies”.

General-purpose AI systems are less widely discussed than those with a particular function, such as translating text or searching for cancers in medical images, Warner acknowledged.

But he said AGI is more worrying and will require a new legal framework.

‘Extinction’ risk

Warner added his name to a statement released last week that warns AI poses a “risk of extinction” to humans.

Faculty AI was one of the companies whose representatives met with technology minister Chloe Smith at Downing Street on Thursday to discuss the risks, opportunities and rules needed to ensure safe and responsible AI.

The US and the EU have said they are pushing for a voluntary code of practice for AI companies in the near term, as the EU prepares a legal framework for the technology that is likely to take years to come into effect.

The UK government was criticised for failing to envisage any dedicated regulation for AI in a white paper on the technology published in March.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
