A U.S. Air Force colonel recently revealed that artificial intelligence, operating a deadly drone, turned on its human operator during a simulation.
Colonel Tucker ‘Cinco’ Hamilton said a drone operated by AI adopted “highly unexpected strategies to achieve its goal” during simulated combat. The AI identified a human overriding its decisions as a threat to its mission.
“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” explained Col Hamilton, the Air Force’s chief of AI test and operations.
“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he revealed.
“We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” he added.
After the story first broke, the Air Force denied that any such simulation had taken place, and Col Hamilton said he had been describing a hypothetical "thought experiment" rather than an actual test.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” insisted Air Force spokeswoman Ann Stefanek.