AI Leaders Warn Of ‘Extinction’ Risk To Humans


AI poses human extinction risk on par with a nuclear war, AI leaders and academics warn in joint statement

Leading figures in artificial intelligence (AI) have issued a fresh warning this week to governments and regulators about the risks associated with the technology.

A statement published on the website of the Center for AI Safety warns that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Many leading figures in the AI sector, including CEOs, academics and researchers, have signed the statement, adding weight to its warning.


Warning signatories

Signatories include Dr Geoffrey Hinton, who recently resigned from Google and is currently Emeritus Professor of Computer Science at the University of Toronto.

Dr Hinton, Professor Yoshua Bengio and NYU Professor Yann LeCun are often described as the “godfathers of AI” for their groundbreaking work in the field – for which they jointly won the 2018 Turing Award.

Dr Hinton, aged 75, helped develop Google’s AI technology, and the approach he pioneered led the way for current systems such as ChatGPT.

He recently told the BBC that the dangers of AI chatbots were “quite scary”, warning they could become more intelligent than humans and could be exploited by “bad actors”.

“It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that,” he said.

Dr Hinton was also concerned that AI will eventually replace jobs such as paralegals and personal assistants – eliminating “drudge work” first, and potentially more in the future.

Other signatories to the warning statement include Yoshua Bengio (Professor of Computer Science, U. Montreal / Mila); Demis Hassabis (CEO, Google DeepMind); Sam Altman (CEO, OpenAI); and Dario Amodei (CEO, Anthropic).

Others such as Kevin Scott (CTO, Microsoft); Eric Horvitz (Chief Scientific Officer, Microsoft); and Ian Goodfellow (Principal Scientist, Google DeepMind) have also signed the statement.

OpenAI’s CEO Sam Altman admitted in March that he is a “little bit scared” of AI, as he worries that authoritarian governments could develop the technology.

Altman also recently called for regulation of AI, during his testimony before the US Congress.

AI concerns

This latest warning comes amid ongoing concerns about AI, its impact on the world, and potential risk to humanity.

In March the US Chamber of Commerce called for the regulation of artificial intelligence technology – a surprising move, considering the business lobby group’s traditionally anti-regulatory stance.

Also in March AI researchers and tech industry figures including Elon Musk signed an open letter warning that AI systems can pose “profound risks to society and humanity”.

The open letter, also signed by Apple co-founder Steve Wozniak, called for a “pause” on the development of cutting-edge AI systems for at least six months to “give society a chance to adapt”.

Elon Musk, Steve Wozniak and others, including the late Professor Stephen Hawking, have warned about the dangers of AI in previous years.

Indeed, Professor Hawking warned that artificial intelligence could spell the end of life as we know it on Planet Earth, and predicted that humanity has just 100 years left before the machines take over.

Musk meanwhile was a co-founder of OpenAI – though he resigned from the board of the organisation in 2018.

Musk no longer owns a stake in OpenAI, and he has since been critical of the company, warning it has strayed from its initial goals.

Musk previously stated he believes AI poses a real threat to humans if unchecked, and in 2014 tweeted that artificial intelligence could evolve to be “potentially more dangerous than nukes”.

In 2015 Musk donated $10 million to the Future of Life Institute (FLI) – a non-profit organisation dedicated to ensuring that AI technology benefits humanity.

Not that bad

But some leading figures disagree with these doom-and-gloom warnings.

Professor Yann LeCun, for example, tweeted his frustration at these warnings, arguing that the apocalyptic predictions are overblown: “the most common reaction by AI researchers to these prophecies of doom is face palming”.

In April Lastminute.com co-founder Martha Lane Fox warned against AI ‘hysteria’ and called for ‘reasonable conversations’ about the technology.