There aren't any easy answers to the potential risks posed by the AI-Automation-Nuclear nexus, but a starting point has to be a greater understanding of the key concepts and an appreciation of how the deployment of such technologies could play out. In the nuclear realm, and especially now in the increasingly digital global nuclear realm, perceptions and the potential for misunderstanding will be as important as reality, writes Andrew Futter, Professor of International Politics at the University of Leicester.
The possible negative impact of Artificial Intelligence (AI) and Automation is the latest iteration in a growing concern about the impact of "disruptive emerging technology" on nuclear politics and stability. AI can be traced back to the 1950s, but hype about nuclear-armed "killer robots" or nuclear weapons systems acting autonomously is increasingly dominating military planning and debates. While some applications might sound like the plot of a science fiction movie, the reality is that the impact of AI and Automation (including systems that range from Automated to Autonomous) is likely to be more diverse, and perhaps more subtle, than is often portrayed. Indeed, AI and Automation already play a role in certain nuclear operations, especially in support systems, and this role will almost certainly increase and expand in the years ahead. The key to understanding the impact is therefore to break the phenomenon down into its component parts, looking at where AI and Automation are currently applied, where they might be applied, and where their application might be possible but unlikely, or at least undesirable. AI and Automation don't necessarily have to undermine strategic stability and nuclear security, and in some cases they could enhance both, but there is certainly the potential for detrimental consequences if development across the nuclear realm is left unchecked.
What are AI and Automation?
There seems to be some confusion when the terms Artificial Intelligence and Automation are used, especially in relation to international politics. This is because they can refer to quite different activities and applications, which in turn have quite different implications for nuclear stability and security.
AI is essentially coding, computer systems and software capable of performing tasks that usually require intelligence if carried out by humans. Thus, it is not really one discrete system, but rather something that can be applied in many different ways depending on the particular task at hand. It is useful, though, to distinguish between narrow and general AI. Narrow AI has specific goals and is limited by the boundaries of its programming and the specific "problem" to be solved. General AI, or "machine learning", involves writing software that allows systems to "learn" by analysing vast datasets to "train" on, and then to make their own decisions. The vast majority of what we term AI, and especially the systems currently used across the nuclear enterprise, are rules-based narrow "if-then" types (principally because they are predictable), but the computer and information technology revolution, or what some have termed "the fourth industrial age", means that the requisite processing power and expertise now exist to make wider applications, and especially intelligent machines, possible.
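To make the distinction concrete, consider a minimal sketch in Python (all names, signals and thresholds here are hypothetical, invented purely for illustration). The first function is a narrow, rules-based "if-then" system whose behaviour is fixed and predictable; the second "learns" a rule from past data, so its behaviour depends on what it was trained on:

    # Narrow, rules-based "if-then" AI: behaviour is fixed by its programming,
    # and therefore predictable -- the property prized in nuclear support systems.
    def rules_based_alert(radar_contacts: int, infrared_flashes: int) -> str:
        if radar_contacts > 0 and infrared_flashes > 0:
            return "raise alert for human review"
        return "no action"

    # A "learning" approach: the rule is induced from labelled historical data
    # rather than written down explicitly.
    def fit_threshold(examples: list[tuple[float, bool]]) -> float:
        """Pick the signal threshold that best separates real events from noise."""
        candidates = sorted(signal for signal, _ in examples)
        return max(candidates,
                   key=lambda t: sum((s >= t) == label for s, label in examples))

    # Toy "training data": (sensor signal strength, was it a real event?)
    history = [(0.2, False), (0.4, False), (0.7, True), (0.9, True)]
    threshold = fit_threshold(history)   # learned rule: alert at >= 0.7
    print(rules_based_alert(1, 1))       # fixed rule, same output every time
    print(0.8 >= threshold)              # learned rule, depends on the data

The point of the contrast is the source of the behaviour: the first system does only what was written, while the second's decisions trace back to its training data, which is why predictability, and the security of that data, loom so large in the nuclear context.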
Autonomy/Automation is effectively the application of a type of AI to particular tasks, some of which might involve robotics, and therefore Automated or Autonomous weapons systems. Like AI, there are different variations of Autonomy when it comes to weapons and support systems. Automation can vary considerably in terms of levels of autonomy, function and sophistication. We can think of these distinctions as existing along a continuum from discrete Automated Systems to more capable and goal-orientated Autonomous Systems. It also ranges from Autonomy at rest (computer software) to Autonomy in motion (software used in robotics and machines). Also like AI, Automation has been used for decades in high-risk applications such as aviation and nuclear power plants, and in nuclear early warning, targeting and delivery systems (though most involve human control). AI essentially allows robotic systems to operate without human intervention, based on interaction with their environment, albeit to different extents.
Applications of AI, robotics and machine learning are theoretically endless and could be applied right across the nuclear enterprise. At the moment, however, the applications of these technologies are limited by the huge (and secure) datasets required for training (especially for systems performing functions where there simply isn't much usable data), the problem of control and unpredictability, computational power, and a desire to keep humans "in the loop" (though, as discussed below, this can be a double-edged sword).
How might these technologies be applied in nuclear and strategic systems?
The US, Russia, China and others already use narrow AI and various levels of Autonomy for certain functions within their nuclear enterprises. But plans seem to be afoot to deploy AI and increasingly Autonomous weapons and support systems across a greater variety of roles and across different military domains in the future.
One area where AI and Autonomy are likely to play an important role is in the software, computer and associated systems that support decision-making and nuclear command, control and communications. There is some precedent here: both the US and the Soviet Union built nuclear early warning systems during the Cold War that contained a degree of Automation, the most extreme example being the semi-automated "Dead Hand" nuclear response system. AI and Automation are likely to become increasingly important in data collection and cleaning, and perhaps in complex data analysis, for enhanced warning systems, targeting plans, and to support situational awareness for commanders and leaders.
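As an illustration of what a "degree of Automation" in decision support might mean in practice, here is a hypothetical Python sketch (no real system is being described; every name, source and threshold is invented) in which automated steps filter and corroborate sensor data, while the consequential judgement is explicitly reserved for a human:

    from dataclasses import dataclass

    # Illustrative only: Automation handles collection and filtering,
    # while launch-relevant judgements stay with a person.
    @dataclass
    class SensorReport:
        source: str        # e.g. "satellite", "radar"
        confidence: float  # 0.0 - 1.0, as scored by upstream processing

    def automated_triage(reports: list[SensorReport]) -> list[SensorReport]:
        """Automated step: discard low-confidence noise, keep plausible tracks."""
        return [r for r in reports if r.confidence >= 0.6]

    def corroborated(reports: list[SensorReport]) -> bool:
        """Automated step: require at least two independent sensor types."""
        return len({r.source for r in reports}) >= 2

    def assess(reports: list[SensorReport], human_confirms) -> str:
        candidates = automated_triage(reports)
        if not corroborated(candidates):
            return "log only"  # the machine never escalates on a single source
        # The consequential decision is explicitly delegated to a person.
        return "escalate" if human_confirms(candidates) else "stand down"

    # Example: two independent sources pass triage, so a human is consulted.
    reports = [SensorReport("satellite", 0.9), SensorReport("radar", 0.7),
               SensorReport("satellite", 0.3)]
    print(assess(reports, human_confirms=lambda rs: False))  # -> "stand down"

The design point of this sketch is that the machine can only ever recommend: escalation requires an affirmative human decision, and single-source warnings are never escalated at all.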
A second area of nuclear operations that seems likely to benefit enormously from AI and greater Autonomy is the ability to locate, track and target an adversary's concealed and mobile nuclear systems. The combination of enhanced sensor capabilities across all domains (potentially deployed on semi-autonomous or autonomous platforms, or in "swarms"), the ability to transfer enormous caches of data quickly and analyse them in real time, and the option of deploying uninhabited systems to attack targets, is changing the game of "nuclear hide and seek". Two applications in particular stand out: first, the potential ability to target mobile land-based missiles (especially important for Russia), and second, the possibility of locating and tracking very quiet nuclear-armed ballistic missile submarines under the ocean. If this becomes possible, it would have considerable implications for strategic stability based on "secure second strikes" (discussed below).
A third impact of AI and Automation will be on the guidance and accuracy of both nuclear and conventional weapons systems. This could be achieved by making missiles and bombs "smarter" and able to respond to their environment, potentially both before and after launch. A basic version of this type of AI is included in current cruise missiles, and it will almost certainly be key to future hypersonic missiles. If weapons become more accurate, it raises the possibility of carrying out surgical long-range counterforce strikes with conventional rather than nuclear weapons.
Fourth, and linked to the point above, AI and Automation could facilitate the development and deployment of increasingly autonomous nuclear and non-nuclear delivery platforms. The best example here is the Russian Status-6 nuclear-armed torpedo, but it is possible that other nuclear delivery platforms could in the future have a degree of autonomy (or at least be uninhabited), such as the US B-21 bomber. Nuclear delivery platforms could conceivably be able to "loiter" stealthily near targets waiting to strike, like the autonomous "Harpy" UAV fielded by Israel, though this would pose significant issues for command and control.
Other applications could include more effective and powerful cyber operations (both for defending nuclear networks and essential computer systems, and for offensive means such as "left of launch" attacks on an adversary's nuclear, missile, and command and control systems). It is also possible that AI might be used to create "deep fakes" for disinformation campaigns that precipitate or deepen a nuclear crisis.
What is the likely impact on arms control and nuclear stability?
Each of the applications discussed above appears to have potentially damaging implications for nuclear and strategic stability. In particular, the deployment of weapons systems that might undermine secure second-strike forces or create new pressures and unforeseen pathways towards escalation may well necessitate a rethinking of how to maintain a peaceful global nuclear order. Moreover, perceptions of technical trajectories will probably matter as much as, if not more than, technical realities when it comes to these challenges.
It is at least conceivable that advances in sensing and processing capabilities, perhaps deployed on autonomous platforms, combined with new and more accurate kinetic and digital weapons, could be seen as a major threat to stable deterrence and drive arms racing across a range of technologies. Military planners might have to adopt worst-case interpretations of the risk environment, not least because of the intangibility of the key driving technologies (in stark contrast to the much more tangible weapons of the past). It is also possible that the threats posed, both real and perceived, could create new problems and pressures for escalation and crisis management. In an absolute worst case, military planners might become so concerned about the vulnerability of their nuclear forces that waiting to strike second may no longer seem an option.
Of course, the deployment of AI-enabled weapons systems is unlikely to go unopposed, and the software and programming that make these weapons so capable and attractive may also prove to be their Achilles' heel. All AI systems would be vulnerable to hacking, spoofing and data poisoning, and the risk would presumably grow the closer any system comes to more general, machine-learning-type AI. Likewise, the Automated/Autonomous platforms used for sensing, communications and weapons delivery would be vulnerable to opposing forces, whether air defences against Uninhabited Aerial Vehicles, jammers, cyber-attacks, or similar techniques deployed underwater. Such high-value targets are unlikely to be left unprotected.
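To see why data poisoning in particular is singled out, consider a toy continuation of the earlier hypothetical fit_threshold() sketch: an adversary who can quietly insert mislabelled records into the training data shifts the learned behaviour without ever touching the deployed code:

    # Toy illustration of data poisoning, reusing the hypothetical
    # fit_threshold() from the earlier sketch. Purely illustrative.
    def fit_threshold(examples):
        """Pick the signal threshold that best separates real events from noise."""
        candidates = sorted(signal for signal, _ in examples)
        return max(candidates,
                   key=lambda t: sum((s >= t) == label for s, label in examples))

    clean = [(0.2, False), (0.4, False), (0.7, True), (0.9, True)]
    # An attacker inserts fabricated "real events" at very weak signal levels...
    poisoned = clean + [(0.25, True), (0.3, True), (0.35, True)]

    print(fit_threshold(clean))     # 0.7  -- weak signals are ignored
    print(fit_threshold(poisoned))  # 0.25 -- the system now alerts on noise

A handful of fabricated records is enough to drag the learned threshold from 0.7 down to 0.25, so the poisoned system now raises alarms on background noise, while the same code run against clean data behaves exactly as before.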
What this all means for arms control is less clear, but future agreements will need to take into account this more complex and entangled nuclear picture. Specific AI arms control will be difficult if not impossible given AI's ubiquitous applications and intangible nature, but a focus on types of delivery systems might be more practicable, as might an agreement not to deploy fully Autonomous nuclear weapons without any human control. On the flip side, as Michael Horowitz et al. note, it is at least conceivable that AI and Autonomy could help to enhance "reliability, reduce the risk of accidents, and buy more time for decision-makers in a crisis."
Conclusion: Towards a more automated nuclear future
Artificial Intelligence and Automation are not going away, and it is difficult to see how both won't play an ever-greater role in all aspects of nuclear operations and global nuclear politics. So far, however, nuclear-armed states have appeared determined to keep a "human in the loop" and reluctant to delegate the most safety-critical nuclear operations to machines. But as the technical ability to do so increases, and if key strategic nuclear relationships continue to deteriorate, we could see a move from specific applications of "if-then" narrow AI in the nuclear realm to increasingly Autonomous weapons and support systems that rely on general AI and machine learning.
An often-quoted "solution" to the risks posed by AI and Automation across the nuclear enterprise is to "keep humans in the loop" and steer clear of "Terminator-style" Autonomous nuclear systems. But human control is not necessarily a panacea: there are risks that humans become too trusting of machines (automation bias), or that data from machines may not be trusted because of the difficulty of understanding how a decision was made (the trust gap). There is also the question of how much contextual knowledge military operators are likely to have on which to base any assessment of the validity of choices made by machines.
For sure, infusing nuclear weapons complexes with AI and Automation won't be cheap, and perhaps some of the most worrying developments might be curtailed as much by budgetary pressure as by strategic wisdom. But given that much of the drive for AI and Automation will come from the commercial and, in some cases, private sector, controlling the impact may not be as straightforward as it was for disruptive technologies in the past.
There aren't any easy answers to the potential risks posed by the AI-Automation-Nuclear nexus, but a starting point has to be a greater understanding of the key concepts and an appreciation of how the deployment of such technologies could play out. In the nuclear realm, and especially now in the increasingly digital global nuclear realm, perceptions and the potential for misunderstanding will be as important as reality.