US Air Force Denies Simulated AI Drone ‘Attacked’ Operator

The US Air Force has said widely reported remarks about an AI-powered drone attacking its operator in a simulation to achieve its objectives were “taken out of context”, while the Air Force colonel who delivered the remarks said he “mis-spoke”.

The simulation was actually a thought experiment from outside the military, said Colonel Tucker Hamilton, chief of AI test and operations at the US Air Force, in a statement.

Speaking at a conference hosted by the Royal Aeronautical Society in London late last month, Hamilton described an experiment in which an AI-enabled drone was tasked to destroy missile sites, with final approval for attacks given by a human operator.

The drone noted that the operator at times told it not to go ahead with an attack, meaning it would gain fewer points, and so it attacked the operator, Hamilton said at the time.

A General Atomics Predator drone. Image credit: USAF

AI ethics

When reprogrammed not to attack the operator, it instead destroyed the communications tower so that the operator would not be able to prevent it from carrying out attacks, he said.

Hamilton said at the time the example was meant to illustrate that ethics was a critical part of AI design.

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he told the conference, according to highlights posted by the RAeS.

In a statement on Friday from the RAeS, Hamilton clarified that the story of the rogue AI was a “thought experiment” that came from outside the military, and was not based on actual testing.

‘Anecdotal’

“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” he said. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.”

The Air Force said in a statement that the remarks were meant to be “anecdotal”.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” the Air Force said.

“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

Rapid shift

The rapid advance of AI, highlighted by the popularity of OpenAI’s ChatGPT since its public release late last year, has spurred concerns even as it has kicked off a massive wave of investment in the field.

In an interview last year with Defense IQ, Hamilton said that while the rise of AI poses challenges – in part because it is “easy to trick and/or manipulate” – the technology is not going away.

“AI is not a nice to have, AI is not a fad,” he said. “AI is forever changing our society and our military.”

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
