Amid the war in Ukraine and rising geopolitical tensions across the globe, it has never been more critical for military organisations to explore cutting-edge technologies.
The threat of peer-level conflict will force military organisations to modernise, seeking novel ways of organising and operating in both conventional and non-conventional warfare.
AI is the latest battleground technology for major military superpowers such as the US, China, and Russia. It promises to automate and enhance all aspects of modern warfare, including command, control, communications, computers and intelligence (C4I), electronic warfare, frontline service, reconnaissance, and training and simulation.
There are significant barriers to integrating AI into defence
As militaries worldwide pursue the development of increasingly advanced AI algorithms, the number of ethical questions surrounding their use increases. Lethal autonomous weapons (LAWs) represent the ultimate application of AI in a frontline role.
Presently, there is particular concern over the capacity of autonomous systems to identify, target, and eliminate perceived hostile threats without substantial human oversight.
Major military superpowers are keen to develop LAWs. In September 2018, both the US and Russia blocked UN talks on an international ban on LAWs. More recently, in December 2021, the US, Russia, India, and Israel blocked further talks on prohibiting LAWs at the UN Sixth Review Conference of the Convention on Certain Conventional Weapons (CCW).
Target misidentification remains a dominant concern in the field of LAWs, as image recognition and machine learning tools have produced flawed conclusions, which are then propagated at far greater speed and scale than most human errors.
This issue has several knock-on effects, including the question of legal accountability for the actions of LAWs, which is compounded by the ‘black box’ problem: the difficulty of explaining the decision-making processes of many AI algorithms.
The lack of explainability of many AI algorithms raises major ethical concerns, especially if these algorithms control multimillion-dollar pieces of lethal military hardware.
Understanding the decision-making process underlies our ability to trust AI, but a lack of transparency undermines confidence in the technology. Explainable AI will go some way towards restoring that confidence: the term refers to systems that allow humans to understand how an AI arrives at a decision and that offer explanations for the decision-making process.
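To make the idea concrete, the minimal sketch below illustrates one form of explainability: an inherently interpretable model whose decision path can be printed as human-readable rules. The feature names and data are entirely hypothetical and illustrative, not drawn from any real military system; the example assumes the open-source scikit-learn library.

```python
# Minimal sketch of explainable AI using an interpretable model.
# All feature names and data are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical sensor-derived features for observed objects.
feature_names = ["speed_kmh", "radar_cross_section", "altitude_m"]
X = [
    [900, 5.0, 10000],  # fast, large signature, high altitude
    [40,  0.5,   100],  # slow, small signature, low altitude
    [850, 4.0,  9000],
    [60,  0.8,   150],
]
y = [1, 0, 1, 0]  # 1 = flagged for human review, 0 = not flagged

# A shallow decision tree is inherently interpretable: unlike a
# black-box model, its full decision logic can be rendered as
# human-readable if/else rules.
model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(model, feature_names=feature_names))
```

Printing the learned rules shows exactly which thresholds drove each classification, which is the kind of transparency explainable AI aims to provide where a deep neural network would offer none.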
Competition between nations will drive AI development
Public interest in AI has surged since the release of OpenAI’s ChatGPT in November 2022; however, AI has been a part of countries’ larger military strategies for decades.
Rapid progress in AI has made it a key battleground technology for countries like the US and China, each of which has enacted policies to limit the other’s access to materials and technology associated with AI research and development.
In August 2022, the US signed into law the CHIPS and Science Act, which includes measures designed to limit China’s access to US chip manufacturing technology.
Further export restrictions announced in October 2022 prevented the export of advanced US semiconductors to China. In response, in July 2023, China enacted export controls on gallium and germanium, two metals crucial to the manufacture of semiconductors and solar cells.
The escalation of the US-China tech war demonstrates how crucial AI is to global superpowers’ political, economic, and military strategies. AI integration presents many ethical challenges across the defence sector, ranging from humanitarian to regulatory concerns raised by lethal autonomous weapons and disinformation.
These issues raise serious questions about the use of AI within the military and the extent to which governments should regulate its development or restrain its employment.
Despite these concerns, as with any military technology, those who fail to recognise AI’s potential risk falling behind and being left at a clear disadvantage.