Google Pledges AI Halt For Weapon Systems


Alphabet signals U-turn after military project, and publishes its core principles for AI development going forward

Google and its parent company Alphabet have pledged to end the use of artificial intelligence (AI) for weapons systems.

The decision comes right from the top: Alphabet CEO Sundar Pichai wrote a blog post pledging not to use AI for technology that causes injury to people.

It comes after Google last week told its staff that it would not renew its contract with the US Department of Defense when it expires next year.

Project Maven

Google said last week it would not renew a contract to do artificial intelligence work for the US Pentagon, after internal pressure from staff, some of whom have quit over the matter.

Almost 4,000 Google employees signed an internal petition asking Google to end its participation in Project Maven.

They felt the project would “irreparably damage Google’s brand and its ability to compete for talent”, and that Google’s involvement in the project clashed with the “don’t be evil” ethos of the search engine giant.

Google’s involvement in Project Maven aimed to speed up the analysis of drone footage. Essentially, the search engine giant is said to be using machine-learning algorithms and AI to help the US military assess drone footage quickly, in order to distinguish people and objects in drone videos.

Now new guidelines for AI use at Google have been outlined in a blog post from CEO Sundar Pichai.

“At Google, we use AI to make products more useful – from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy,” wrote Pichai.

“Beyond our products, we’re using AI to help people tackle urgent problems,” he wrote, citing the use of Google AI in wildfire spotting, farming applications and for health purposes.

AI principles

“We recognise that such powerful technology raises equally powerful questions about its use,” wrote Pichai. “How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward.”

The first principle is that AI has to be ‘socially beneficial.’ The second is that it has to “avoid creating or reinforcing unfair bias.” Thirdly it has to be “built and tested for safety.”

Fourthly, AI systems have to be “accountable to people”, and fifthly they have to “incorporate privacy design principles.”

The final two principles are that AI has to “uphold high standards of scientific excellence,” and be “made available for uses that accord with these principles.”

AI weapons

Pichai then went on to pledge that Google would not design or deploy AI in certain application areas.

These include “technologies that cause or are likely to cause overall harm”; “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”; “technologies that gather or use information for surveillance”; and finally technologies that contravene “international law and human rights.”

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” wrote Pichai.

“These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” he concluded. “These collaborations are important and we’ll actively look for more ways to augment the critical work of these organisations and keep service members and civilians safe.”



Author: Tom Jowitt