The Pentagon’s recent developments in AI technology have drawn concern and criticism as it approaches the deployment of autonomous AI weapons systems capable of making lethal decisions independently.
The New York Times reports that countries including the United States, China and Israel are actively developing lethal autonomous weapons powered by artificial intelligence (“AI”) that can identify and engage targets on their own.
Critics argue that AI-controlled drones with the ability to kill humans autonomously are a deeply alarming development, since they delegate life-or-death decisions to machines with minimal human oversight. Several countries, including Russia, Australia and Israel, are opposing efforts by other nations to pass a binding United Nations resolution calling for a ban on such weapons.
The deployment of AI weapons has sparked intense debate, with key questions centring on the role of human agency in the use of force. Austria’s chief negotiator on the matter, Alexander Kmentt, has emphasised that the issue is not only a security and legal concern but an ethical one.
Meanwhile, the Pentagon has revealed plans to deploy swarms of AI-enabled drones as part of its AI weapons programme. These drones, equipped with advanced AI capabilities, are intended to give the United States a tactical edge by countering the numerical superiority of China’s People’s Liberation Army.
US Deputy Secretary of Defense Kathleen Hicks has highlighted the role of AI-controlled drone swarms in reshaping battlefield dynamics, making American forces harder to plan against, hit and defeat. Questions remain, however, about the degree of human supervision these systems will retain, as some argue that strict limits on AI autonomy would erode the very strategic advantages the technology offers.
Critics also point to recent uses of AI drones in conflict zones, such as Ukraine’s deployment of AI-controlled drones in its war with Russia. Whether these drones have caused human casualties while operating autonomously remains unclear, adding to the concerns.
Advocacy groups such as the Campaign to Stop Killer Robots warn that the dehumanisation inherent in handing lethal decisions to AI poses significant risks. This dehumanisation could affect not only the use of force but other areas of life as well, from automation in law enforcement to smart homes and beyond. The campaign argues there is an urgent need for a global treaty banning autonomous weapons, both to prevent their wide-scale production and proliferation and to keep the technology out of the wrong hands.