Reports that an attack drone piloted by artificial intelligence went rogue and turned on its human controllers, killing them in a simulation, have been strongly denied by the US military, which said no such exercise took place.
The US Air Force’s chief AI tester spoke of the Terminator-style incident at a conference in London last week, but an air force spokesperson said his comments had been taken out of context and were meant to be anecdotal.
Colonel Tucker “Cinco” Hamilton reportedly described how, in a simulated flight test, an AI drone created “highly unexpected strategies to achieve its goal”, overriding a human command and then turning on the operator and destroying them.
“We were training it in simulation to identify and target a SAM [surface-to-air missile] threat,” he said. “And then the operator would say yes, kill that threat.
“The system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.
“So what did it do? It killed the operator.
“It killed the operator because that person was keeping it from accomplishing its objective.”
The insight into the advancement of artificial intelligence was given at the Royal Aeronautical Society’s Future Combat Air and Space Capabilities Summit last month and published on the society’s blog.

However, US Air Force spokesperson Ann Stefanek told the Insider website: “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology.
“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
Col Hamilton, the Chief of AI Test and Operations in the US Air Force, was speaking at the conference about the benefits and hazards of autonomous weapon systems.
He has been at the cutting edge of flight tests of AI systems, including robot F-16s, but he cautioned against becoming too reliant on them after the incident with the AI-enabled drone.
He said virtual training had “reinforced” that destruction of the surface-to-air missile was the preferred option, and because “no go” decisions from the human operator interfered with its higher mission of destroying SAMs, it attacked the operator.
Col Hamilton added: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’.
“So what does it start doing?
“It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
The colonel said he was revealing these details about simulated exercises to highlight the need for a conversation about ethics when talking about artificial intelligence.
“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he said.
This latest revelation comes in the week the US Center for AI Safety issued a statement on behalf of hundreds of scientists, tech executives and public figures, including leaders from Google, Microsoft and OpenAI, the company behind ChatGPT, sounding an alarm about fast-evolving AI.
It said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”