Godfather Of AI, Geoffrey Hinton, Quits Google


Pioneer of neural networks resigns from Google to speak openly about misinformation and the dangers of AI

The man labelled by some as the godfather of artificial intelligence (AI), Dr Geoffrey Hinton, has resigned from Google and spoken openly about his concerns over the technology.

The New York Times reported that Dr Hinton cited concerns over the flood of misinformation, the possibility for AI to upend the job market, and the “existential risk” posed by the creation of a true digital intelligence.

British-Canadian Dr Hinton, together with two of his students at the University of Toronto, built a breakthrough neural network in 2012. He is viewed as a leading figure in the deep learning community, and in 2018 he received the Turing Award, together with Yoshua Bengio and Yann LeCun, for their work on deep learning.

Image credit: Pexels

Google resignation

Dr Hinton, aged 75, reportedly announced his resignation from Google so as to be able to “freely speak out about the risks of AI” and added that a part of him now regrets his life’s work.

He joined Google back in March 2013 when his company, DNNresearch, was acquired by the search engine giant. He said at the time that he was planning to “divide his time between his university research and his work at Google.”

The Guardian reported that whilst at Google, he helped develop the company’s AI technology, and the approach he pioneered led the way for current systems such as ChatGPT.

Dr Hinton reportedly told the New York Times that until last year he believed Google had been a “proper steward” of the technology, but that this changed once Microsoft started incorporating a chatbot into its Bing search engine and Google grew concerned about the risk to its search business.

Some of the dangers of AI chatbots were “quite scary”, he told the BBC, warning they could become more intelligent than humans and could be exploited by “bad actors”. “It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that.”

But, he added, he was also concerned about the “existential risk of what happens when these things get more intelligent than us.

“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have,” he reportedly said. “So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

Dr Hinton was also concerned that AI will eventually eliminate jobs such as paralegal and personal assistant roles and other “drudge work”, and potentially more in the future.

Best wishes

Google’s chief scientist, Jeff Dean, was quoted by the Guardian in a statement as saying that Google appreciated Hinton’s contributions to the company over the past decade.

“I’ve deeply enjoyed our many conversations over the years. I’ll miss him, and I wish him well!” he reportedly said.

“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly,” Dean reportedly said.

Alphabet announced it is merging its internal AI research team (Google Brain) with UK-based DeepMind, which it acquired in 2014 for $500m.

AI concerns

Dr Hinton is not alone in his concerns about AI, its impact on the world, and potential risk to humanity.

Last month the US Chamber of Commerce called for the regulation of artificial intelligence technology – a surprising move considering the business lobby group’s traditionally anti-regulation stance.

Also last month AI researchers and tech industry figures including Elon Musk signed an open letter warning that AI systems can pose “profound risks to society and humanity”.

The open letter, also signed by Apple co-founder Steve Wozniak, called for a “pause” on the development of cutting-edge AI systems for at least six months to “give society a chance to adapt”.

Elon Musk, Steve Wozniak and others, including the late Professor Stephen Hawking, have warned about the dangers of AI in previous years.

Indeed, Professor Hawking warned that artificial intelligence could spell the end of life as we know it on Planet Earth, and predicted that humanity has just 100 years left before the machines take over.

Musk meanwhile was a co-founder of OpenAI – though he resigned from the board of the organisation in 2018.

Musk no longer owns a stake in OpenAI, and since then he has been critical of the company and warned it has strayed from its initial goals.

Musk previously stated he believes AI poses a real threat to humans if unchecked, and in 2014 tweeted that artificial intelligence could evolve to be “potentially more dangerous than nukes”.

In 2015 Musk donated $10 million to the Future of Life Institute (FLI) – a non-profit organisation dedicated to ensuring that AI technology is developed to benefit humanity.