At the end of 2021, NATO officially implemented its first artificial intelligence (AI) strategy. The paper sets out the challenges the military faces when using AI and recognises six basic principles for its responsible use, with collaboration at the forefront. Similarly, the UK Ministry of Defence’s (MOD) executive research agency, the Defence Science and Technology Laboratory (Dstl), stresses the importance of responsible and trustworthy use of AI applications.
NATO strategy
The NATO document says AI will support the alliance in its three core tasks: collective defence, crisis management and cooperative security. The strategy also stresses the importance of collaboration and cooperation among member states if NATO is to retain its technological edge. This would include allies and members building on existing adoption efforts to ensure interoperability and standardisation, both of which are integral to NATO.
AI technology is evolving at a rapid pace and the potential capabilities it offers are clear. But how NATO will manage to harmonise different countries’ approaches remains an open question, believes Dr Simona R. Soare, research fellow for defence and military analysis at the International Institute for Strategic Studies (IISS).
“For countries like the United States, it is a priority that allies agree practical guidelines for the operational use of AI-enabled systems and the necessary data-sharing, a challenge that should not be underestimated,” she wrote in an IISS blog post.
“Some allies, meanwhile, are not satisfied with the granularity of the six principles of responsible use, while others consider that overemphasising the normative approach risks ceding technological advantage to peer competitors.”
The six principles are lawfulness, responsibility and accountability, explainability and traceability, reliability, governability and bias mitigation. While the strategy aims to provide a foundation for NATO and its allies to encourage the progress of AI applications by responsible means, it also aims to protect and monitor these technologies and the innovation behind them by addressing security policy considerations.
Soare thinks, however, that the extent to which NATO is willing to adopt AI is also questionable.
She says: “The strategy is meant to be implemented in a phased approach, partly to build political support for AI military projects. Initial ambitions seem modest, reportedly focusing on mission planning and support, smart maintenance and logistics for NATO capabilities, data fusion and analysis, cyber defence, and optimisation of back-office processes. As political acceptance grows and following periodic reviews of the strategy’s implementation, the goal is to also include more complex operational applications.”
The strategic document, Soare says, is not clear on the allocation of roles and resources of the different NATO and national innovation bodies, and how each body would coordinate to implement the AI strategy.
“While NATO has adopted the AI strategy, there is no dedicated line of funding for it. Finance will depend on a combination of common budget funding and off-budget mechanisms such as the NATO Innovation Fund. Besides the uncertainty over the availability of funding, some Alliance agencies are concerned that their budgets could be cut and redistributed towards the implementation of the AI strategy,” she writes in her blog.
Delivering AI for the UK
“Future conflicts may be won or lost on the speed and efficacy of the AI solutions employed,” says the 2021 Defence Command Plan.
Drawing upon the MOD’s Integrated Review, Defence Command Plan and Integrated Operating Concept, Dstl’s vision sets out the roadmap of how to deliver AI and automation capabilities rapidly. Dstl believes that the one thing that connects future physical systems, virtual systems and data science is AI.
At the heart of the delivery approach lies continuous development and experimentation that involves agile software development and close end-user engagement. The vision recognises the importance AI will play in multi-domain integration, but equally understands the importance of collaboration between the UK Government, the military, industry and tech sector in delivering the AI vision.
The funding allocated for suppliers to work with Dstl on AI projects is approximately £7m in the financial year 2021/22, and is set to increase to £29m in the next.
Like the NATO strategy, Dstl aims to help the MOD understand how AI can be adopted ethically and responsibly while it enhances defence capabilities. Potential applications developed under this vision include autonomous platforms, computer network defence, sensing, logistics and security. Achieving these, however, requires establishing user confidence and making adoption easier.
Successful AI applications
Dstl opened its new National Innovation Centre for Data (NICD) in Newcastle last year, expanding the UK’s capabilities by creating a new AI and data science unit. The NICD addresses the shortage of data analytics skills in the country and helps companies and organisations to exploit data.
But it is not all about concepts and research. At a successful sea demonstration in 2021 during a three-week NATO exercise called Formidable Shield, the Royal Navy tested two AI-based applications on the Type 45 Destroyer HMS Dragon and Type 23 Frigate HMS Lancaster.
Canadian IT company CGI’s System Coordinating Integrated Effect Assignment (SYCOIEA) platform detects supersonic missile threats earlier than conventional systems and provides a rapid hazard assessment to commanders so they can select the most appropriate countermeasure.
The second system was British engineering company Roke’s autonomous Startle system, designed to support sailors with continuous monitoring. It can detect contacts that exhibit anomalous or suspicious behaviour and alert operators, enhancing preparedness and enabling counter-action.
Meanwhile, on the ground, soldiers from the 20th Armoured Infantry Brigade deployed an AI engine prototype specifically designed for British Army operations. The system was developed by Dstl and the Army Headquarters Directorate of Information and defence industry partners, including IBM, ESRI and Janes.
It aims to save time and effort, and can help personnel operate more effectively by quickly analysing their surroundings and providing information. The demonstration aimed to explore ways to build trust in the AI system.