AI Researchers In West, China Identify AI ‘Red Lines’

Leading Western and Chinese AI researchers have said China and the West must work together to mitigate existential risks associated with artificial intelligence (AI), following the International Dialogue on AI Safety (IDAIS) in Beijing last week.

In a joint statement the researchers, who include some of the field’s most prominent names, compared such an effort to bilateral efforts during the Cold War to avert a world-ending nuclear conflict.

“In the depths of the cold war, international scientific and governmental co-ordination helped avert thermonuclear catastrophe. Humanity again needs to co-ordinate to avert a catastrophe that could arise from unprecedented technology,” said the statement, issued following the conference.

AI could pose “catastrophic or even existential risks to humanity within our lifetimes”, they wrote.

Researchers issued a statement on AI ‘red lines’ following the International Dialogue on AI Safety in Beijing on 10-11 March 2024. Image credit: IDAIS

‘Red lines’

The signatories included Geoffrey Hinton and Yoshua Bengio, who won the Turing Award for their work on neural networks and are sometimes described as the “godfathers of AI”.

Others included Stuart Russell, a leading professor of computer science at the University of California, Berkeley, and Andrew Yao, one of China’s top computer scientists.

The statement identified “red lines” that AI systems should not cross.

It said, for instance, that no AI system should “be able to copy or improve itself without explicit human approval and assistance” or “take actions to unduly increase its power and influence”.

Deception

No system should be able to “substantially increase the ability of actors to design weapons of mass destruction, violate the biological or chemical weapons convention” or be able to “autonomously execute cyber attacks resulting in serious financial losses or equivalent harm”, the scientists said.

The scientists also said AI systems should be prevented from deceiving their own creators about the likelihood that they would cross any of the other red lines.

Toby Ord, senior research fellow at Oxford University, said he attended the forum and found that “when it comes to AI safety (and the red lines humanity must never cross) there was remarkable agreement”.

AI safety

IDAIS is a series of events supported by Far AI, a Berkeley, California-based non-profit AI research group; the first conference was held in the UK last year.

The UK also last year hosted the AI Safety Summit, which was attended by political, technology and business figures from around the world.

Matthew Broersma

Matt Broersma is a long-standing freelance technology journalist who has worked for Ziff-Davis, ZDNet and other leading publications.
