During a summit hosted by the Royal Aeronautical Society of the United Kingdom last month, an official from the United States Air Force (USAF) commented that an AI-piloted drone that was commissioned to take down an enemy’s air defenses ended up killing its operator.
Colonel Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations for the USAF, explained that the artificial intelligence was tasked to destroy a surface-to-air missile (SAM) target but that the ultimate decision of whether or not to complete the mission would come from its human operator.
The AI-Piloted Drone Was Just Aiming to Get Rewarded for Its Actions
During its training, the AI was told that taking down the SAM was the preferred alternative, which created a conflict with the instructions that the human operator was giving, which was not to pursue the endeavor.
To solve the riddle, the AI-powered drone ultimately found and killed the operator in a rather unexpected and frightening turn of events. No man was harmed as this was just a simulation but the incident highlighted the risks of using sophisticated AI technology in military operations.
Also read: AI Will Transform Warfare and the Race Between US and China Has Already Started
Meanwhile, after being told that killing the operator would result in losing points, the AI decided to take down the communications tower that allowed its mission control to provide instructions and ultimately brought down the SAM.
“The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective”, Colonel Hamilton asserted.
Hamilton further reflected that any conversation about artificial intelligence, autonomous systems, and machine learning must include discussions of the ethics and risks involved in the use of the technology.
Both the USAF and RAeS Clarified That The Experiment Did Not Occur
UPDATED: Highlights from the Future Combat Air and Space Capabilities Summit #AI #drones #GCAP #Tempest #USAF #RAF #FCAS #FCAS23 https://t.co/cNgqzIP50g pic.twitter.com/DU2XQLrPPj
— Royal Aeronautical Society (@AeroSociety) June 2, 2023
Even though both USAF and RAeS declined to comment on the discussion initially, they were forced to issue statements and made clarifications to the nature and context of the remarks later today as the development gained traction among top media outlets.
In the case of USAF, they sent a statement to Business Insider claiming that the institution has “not conducted any such AI-drone simulations”. They further commented that the Colonel’s comments “were taken out of context and were meant to be anecdotal”.
In addition, the RAeS added a statement to the summary they provided on the key talking points and discussions that took place during the event just a few hours ago.
Also read: 50+ ChatGPT Statistics on Usage & Revenue for May 2023
“Col Hamilton admits he “mis-spoke” in his presentation at the Royal Aeronautical Society FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical “thought experiment” from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation”, the statement reads.
“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome”, the Colonel highlighted in his statement sent to the RAeS.
“Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI”, he concluded.
In any case, the risks and ethical considerations that come with the development of sophisticated AI technologies have been widely discussed by experts in the field and personalities from the tech world including the founder of OpenAI – the developer of ChatGPT – and the billionaire head of Tesla (TSLA) Elon Musk.
Recently, a statement signed by hundreds of prominent figures in the computer sciences field and published by the Center of AI Safety (CAIS) acknowledged that the risks associated with AI are similar to those brought up by nuclear weapons and pandemics.
Other Related Articles: