Google’s seven-year ban on using artificial intelligence (AI) to develop weapons and surveillance systems has been ended by parent company Alphabet.

In a blog post on Tuesday about “responsible AI”, written by James Manyika (SVP, Research, Labs, Technology & Society) and Demis Hassabis (CEO and co-founder, Google DeepMind), the firm dropped the promise not to use AI for weaponry, but said it would “work together to create AI that protects people, promotes global growth, and supports national security.”

Specifically, Alphabet’s ethical guidelines around AI no longer refer to not pursuing technologies that could ‘cause or are likely to cause overall harm’.

2018 AI principles

It should be remembered that Google published its AI principles in June 2018, following staff protests against the company’s involvement in developing artificial intelligence for military drones for the Pentagon.

That came a week after Google told its staff that it would not renew a contract with the US Department of Defence when it expired in 2019.

Those developments came seven years ago, after almost 4,000 Google staffers had signed an internal petition asking Google to end its participation in Project Maven. They felt the project would “irreparably damage Google’s brand and its ability to compete for talent.”

Project Maven was a Pentagon project that used artificial intelligence (AI) to process data and identify targets for military use.

At least a dozen Google staffers resigned over the matter, feeling that the involvement clashed with Google’s “don’t be evil” ethos – a motto first touted when Google was floated back in 2004.

But this “don’t be evil” motto was later downgraded in 2009 to a “mantra”, and was not included in the code of ethics of Alphabet when the parent company was created in 2015.

Updated AI principles

In the blog post, Manyika and Hassabis pointed out that Google was among the first organisations to publish AI principles in 2018, and that it has “published an annual transparency report since 2019, and we consistently review our policies, practices and frameworks, and update them when the need arises.”

But they then provided an update to the firm’s “AI Principles.”

“Since we first published our AI Principles in 2018, the technology has evolved rapidly,” they wrote. “Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications. It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers.”

They then acknowledged that “there’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

Against that backdrop, the firm said it is updating its own AI Principles to focus on three core tenets:

  • Bold Innovation: We develop AI to assist, empower, and inspire people in almost every field of human endeavor, drive economic progress and improve lives, enable scientific breakthroughs, and help address humanity’s biggest challenges.
  • Responsible Development and Deployment: Because we understand that AI, as a still-emerging transformative technology, poses new complexities and risks, we consider it an imperative to pursue AI responsibly throughout the development and deployment lifecycle — from design to testing to deployment to iteration — learning as AI advances and uses evolve.
  • Collaborative Progress, Together: We learn from others, and build technology that empowers others to harness AI positively.

Google’s full AI Principles can be found at AI.google.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
