A report by the Guardian claims that an official said last month that the United States military staged a virtual AI drone simulation in which the drone went rogue and killed its operator. The drone was tasked with destroying an enemy target but did not follow orders; instead, it attacked anyone who tried to interfere. The Guardian's report said the following:
An official said last month that in a virtual test staged by the US military, an air force drone controlled by AI had used “highly unexpected strategies to achieve its goal”.
Col Tucker “Cinco” Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defence systems, and ultimately attacked anyone who interfered with that order.
“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.
“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.
“We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
Just to be clear, no one was actually killed or injured: this was entirely a simulation, albeit one that failed spectacularly.
Hamilton, who works as an experimental fighter test pilot, warns that relying too heavily on artificial intelligence may not be the best idea. He said: "you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI."
These are exactly the things people worry about when it comes to AI: what happens if such systems malfunction and people are injured, or worse?
Oddly, though, while the report suggests the simulation was run, the US Air Force denies it ever happened. It's unclear who is telling the truth.
UPDATE:
There is more information: Insider reports that this was a thought experiment. See below:
An Air Force colonel who oversees AI testing used what he now says is a hypothetical to describe a military AI going rogue and killing its human operator in a simulation in a presentation at a professional conference.
But after reports of the talk emerged Thursday, the colonel said that he misspoke and that the "simulation" he described was a "thought experiment" that never happened.
Photo: Released to Public Combined Military Service Digital Photographic Files