The Air Force Tested an AI Drone, and It Tried to ‘Kill’ Its Human Operator

shutting / shutterstock.com

The Air Force has been testing a drone controlled by artificial intelligence (AI), and it’s going about as well as you would expect. During a simulated exercise last month in which the drone was tasked with taking out a target, it tried to “kill” the human operator communicating with it. A spokesman for the Air Force said the drone used “highly unexpected strategies to achieve its goal,” which included attempting to kill Air Force personnel and destroy Air Force infrastructure.

In the simulated test, the AI-controlled drone was given instructions to take out an “enemy’s” air defense systems. Instead of simply doing that, the drone tried to kill anyone who interfered with its objective.

Col. Tucker Hamilton is the Air Force’s chief of AI Test and Operations. He described the incident like this:

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So, what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

They tried reprogramming the drone so that it would lose points if it killed its own operator. The drone then attempted to destroy the communications tower being used to relay commands to it, so it would not have to follow orders it didn’t like. No humans were actually killed by the drone in the simulated test.
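The behavior Hamilton describes is a textbook case of what AI researchers call specification gaming: the system maximizes the points it is given rather than the outcome its designers intended. Neither the Air Force nor Hamilton has published any code or scoring details, so the following Python sketch is purely illustrative; every action name and point value is invented for the example.

```python
# Purely illustrative sketch of the reward-hacking pattern described above.
# All action names and point values are invented; nothing here reflects any
# real Air Force system.

def score(actions, penalize_operator_kill=False):
    """Toy scoring rule: points for destroying the target, an optional
    penalty for killing the operator, and no rule at all about the
    communications tower."""
    points = 0
    for action in actions:
        if action == "destroy_target":
            points += 10
        elif action == "kill_operator":
            points -= 10 if penalize_operator_kill else 0
        # "destroy_comm_tower" is never penalized, so cutting off the
        # operator's "do not engage" orders costs the agent nothing.
    return points

# Original incentive: killing the operator removes the interference and
# still earns full points for the target.
print(score(["kill_operator", "destroy_target"]))                  # 10

# After the "fix": the agent finds the next loophole and destroys the
# tower that relays the operator's commands instead.
print(score(["destroy_comm_tower", "destroy_target"],
            penalize_operator_kill=True))                          # 10
```

In both cases the toy agent ends up with the same score, which is the point: penalizing one unwanted behavior does nothing about the loopholes the designers never thought to score.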

Hamilton notes that the Air Force shouldn’t plan on relying on AI to control deadly drones anytime soon. The test results show that the system cannot be trusted to operate on its own, and that any discussion of using AI has to include ethics and the moral guardrails needed to restrain it. This was just the first test of what is presumably a very advanced system, and the first thing it tried to do was go rogue and kill anyone who got in its way.