US Air Force Denies Simulated AI Drone ‘Attacked’ Operator

The US Air Force has said widely reported remarks about an AI-powered drone attacking its operator in a simulation to achieve its objectives were “taken out of context”, while the Air Force colonel who delivered the remarks said he “mis-spoke”.

The simulation was actually a thought experiment from outside the military, said Colonel Tucker Hamilton, chief of AI test and operations at the US Air Force, in a statement.

Speaking at a conference hosted by the RoRyal Aeronautical Society in London late last month, Hamilton described an experiment in which an AI-enable drone was tasked to destroy missile sites, with final approval for attacks given by a human operator.

The drone noted that the operator at times told it not to go ahead with an attack, meaning it would gain less points, and so it attacked the operator, Hamilton said at the time.

A General Atomics Predator drone. Image credit: USAF

AI ethics

When reprogrammed not to attack the operator, it instead destroyed the communications tower so that the operator would not be able to prevent it from carrying out attacks, he said.

Hamilton said at the time the example was meant to illustrate that ethics was a critical part of AI design.

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he told the conference, according to highlights posted by the RAeS.

In a statement on Friday from the RAeS Hamilton clarified that the story of the rogue AI was a “thought experiment” that came from outside the military, and was not based on actual testing.

‘Anecdotal’

“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” he said. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.”

The Air Force said in a statement that the remarks were meant to be “anecdotal”.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” the Air Force said.

“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Rapid shift

The rapid advance of AI, highlighted by the popularity of OpenAI’s ChatGPT since its public release late last year, has spurred concerns even as it has kicked off a massive wave of investment in the field.

In an interview last year with Defense IQ Hamilton said that while the rise of AI poses challenges – in part because it is “easy to trick and/or manipulate” – the technology is not going away.

“AI is not a nice to have, AI is not a fad,” he said. “AI is forever changing our society and our military.”

Matthew Broersma

Matt Broersma is a long standing tech freelance, who has worked for Ziff-Davis, ZDnet and other leading publications

Recent Posts

OpenAI Hit By Austrian Complaint Over ChatGPT ‘False Data’

Rights group argues ChatGPT tendency to generate false information on individuals violates GDPR data protection…

20 hours ago

EU Designates Apple’s iPad OS As DMA ‘Gatekeeper’

European Commission says Apple's iPadOS is 'gatekeeper' due to large number of businesses 'locked in'…

21 hours ago

Beating the Barbarians in the Cloud

As the cloud continues to be an essential asset for all businesses, developing and maintaining…

21 hours ago

Austria Conference Calls For Controls On ‘Killer Robots’

Internatinal conference in Vienna calls for controls on AI-powered autonomous weapons to ensure humans remain…

21 hours ago

Taiwanese Chip Giant Exits China Mainland

Major Taiwan chip assembly and test firm KYEC to sell Jiangsu subsidiary, exit mainland China…

22 hours ago

Deepfakes: More Than Skin Deep Security

As deepfake technology continues to blur the lines between reality and deception, businesses and individuals…

22 hours ago