AI in Warfare: the Realities of Compressing the ‘Kill Chain’
AI – what is it good for? War, apparently. Last I checked, large language models weren’t building killer robots and roaming the streets for their next victims – it’s been a while since I last saw Terminator. And yet, recent years have drawn our attention to eerie happenings in the security underworld.
From Israel’s bloody campaign in Gaza to the Ukraine War to the US raid on Venezuela, AI is finding its way into more and more military and intelligence capabilities. AI-assisted drones are a growing trend: Kirichenko, of the Henry Jackson Society, writes that Ukraine’s drone strike accuracy has risen from as low as 30% to almost 80%. Moreover, military decision-making and execution are increasingly AI-informed. Machines are now involved not merely in the elimination of human beings, but in the decisions and reasoning that lead to eliminating them.
How does this appear in practice? The Guardian voices claims that AI speeds up decision-making in warfare, shortening the time between a target being spotted and destroyed. With the likes of Palantir, the US is using AI to process masses of data from drones, satellites, and other sensors across what is known as a cyber ‘kill chain’. Such data, laborious for humans to sift through, is made easy for a machine to interpret – one could even say, a killer robot…
Jones and Kinsella frame AI as one technological development in a long history of the US’s quest for ever-greater military speed and efficiency: the lightning-fast 1991 Gulf War was but a precursor to the swiftness of warfare seen today. The overabundance of intelligence and military-related data encourages dependence on AI, reducing analyst teams numbering in the thousands to a few pieces of software.
Image courtesy of Tech. Sgt. Joshua Strang / U.S. Air Force via Wikimedia Commons, ©2015. Some rights reserved.
The most recent example of the AI-military marriage is, of course, the 2026 Iran War. This article highlights the US-Iran war as the first time AI-assisted targeting has been deployed by a great power in a conventional interstate conflict. The US – whether by strategic intent or mere haphazard warmongering – is demonstrating its strength with AI, almost too well, since few senior officials remain with whom to negotiate an end to the conflict. With 2,000 targets struck in just the first four days, the US is demonstrating an unprecedented speed at which precision strikes can neutralize hostile threats. Truly, we are seeing an evolution in the character of warfare.
Yet all is far from rosy. In the first two weeks of the war, 40,000 civilian buildings were struck by US-Israeli munitions. And, of course, there were the shocking details of the 153 deaths that emerged after a girls’ primary school was bombed in Minab. How is it, then, that these AI-military capabilities, so praised for their precision and data-collating abilities, could permit such catastrophic consequences? As the FT notes, although it is unclear how much human error and how much AI error contributed to these atrocities, it is highly likely that the US had had buildings such as the girls’ school on its target list for years. So it remains to be confirmed whether AI-military means are as precise as described, or whether reducing a military decision to a matter of minutes or seconds is proving – however ‘efficient’ an operational capacity – a serious risk to civilian life. The latter view is more likely.
What’s more, while AI-infused military capabilities may offer operational efficiency of questionable ethics, they are also failing to bring strategic coherence. AI-optimised targeting is merely an instrument of war. Speed and lethality are not ends; they are ways. If no coherent political end motivates them, the result is war for its own sake: war “divorced from policy,” as Clausewitz warned. Suffice it to say that efficient precision targeting, increased lethality, and quick decision-making (quicker than thought itself) bring about no meaningful end other than highly efficient destruction. While the Trump regime’s wish for speedy strikes has been granted, it is the very speed at which decisions (or the lack thereof) are being made that is diminishing both ethical and strategic common sense. Therefore, whatever the reasons for the 2026 Iran War – whatever ends are purportedly being pursued – the current trajectory is one of AI-powered instability.
AI alone cannot win wars. With the erosion of a viable political-diplomatic solution to Iran, and the increasing reliance on software (run by corporate firms such as Palantir) for life-or-death decisions, AI is proving a threat both to international order and to human life. If killing is as easy as pressing a button offered by an enthusiastic machine, like the Maven Smart System, then the cost of war has been lowered to something more akin to sending an email or playing a video game than venturing through the valley of the shadow of death. And as the kill chain is compressed, the ability of humans to remain in control of their weapons becomes ethically dubious and strategically unsound. Efficiency may be how the Trump regime measures its success, but this war will be remembered not for how quickly a target was pulverized, but for how many homes were destroyed and how many lives were lost.
Image courtesy of Lt. Col. Leslie Pratt / U.S. Air Force via Wikimedia Commons, ©2008. Some rights reserved.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of the wider St Andrews Foreign Affairs Review team.
