AI and International Security: Technological Acceleration Overtaking Norms
In the past two hundred years, every major technology boom has accelerated human capabilities faster than society could regulate them. Industrialization came with a steep learning curve, both economically in the form of cars and militarily in the form of tanks and planes. Similarly, the internet evolved too quickly for any attempts at regulation in the 1990s to really stick, and today online disinformation and misinformation pose a serious threat to many states’ national security. While we were still grappling with all of this, AI appeared on the scene. Anyone who has witnessed its recent ability to generate ultra-realistic photographs, ‘deepfakes’, understands that innovation is outpacing the societal reckoning we must have about AI to make it a more effective and predictable tool. Unfortunately, while most people have a rough idea of where they fall on the spectrum between accelerationist and anti-AI, it is hard to keep up with exactly what kind of capacities we are dealing with. In that light, it seems worthwhile to examine what the possibilities of AI are for international relations and security. Not only is it an interesting topic, but public understanding of AI’s potential is essential to keeping it from becoming a black box or a tool for those with undemocratic aims.
As citizens of states and of the global community more broadly, the majority of people should have a basic understanding of AI’s pros and cons in order to make fair decisions about its future. The most dramatic topics in both categories are the military and political implications. Both of those issues can be further broken down into physical capabilities, such as drones and automated weapons, and intangible ones, like surveillance and misinformation. In the online realm, the surveillance potential of AI, while beneficial for states and their law enforcement, raises questions of fairness. Past emerging technologies like DNA testing and fingerprinting had their accuracy overstated early on, with the result that they amplified existing prejudices. In the United States especially, DNA testing has exacerbated the racial bias of the justice system. In an alarming continuation, AI gait analysis has already been used to support accusations against criminal suspects even when accompanied by little other evidence. Aside from justice for individuals, AI surveillance carries risks in political and security contexts. AI surveillance of internet activity and CCTV footage automates many of the tactics authoritarian and anti-democratic regimes already use to identify dissent and target dissidents. Making these activities easier strengthens bad actors and thus has the potential to be a destabilizing force in the international political system.
Misinformation is also a rising threat to global democracy. In 2024, several national elections were the subject of attempted influence operations aided by AI. However, there are some success stories. AI labs themselves took steps to limit the abilities of their commercial models, preventing them from creating content that could be used to sway election results. This reflects a new attitude from tech companies, which were historically resistant to regulation during other periods of advancement, such as the dawn of the internet. If it proves lasting, this voluntary self-moderation could take pressure off governments to create stricter rules for models they have yet to fully understand. However, that doesn’t entirely solve the problem. In addition to specific issues like deepfakes that threaten individual elections, AI as a concept is inherently anti-democratic. Though it widens public access to many things, the black-box model is not transparent, leaving us vulnerable to a kind of trickle-down effect: AI has become a major presence in most people’s lives even though only a select few really understand the technology behind it. This is not necessarily malicious; not everyone needs to be an expert in everything, especially something as complex as AI. But the somewhat inexplicable mechanics of AI make it less likely that its inherent biases can all be comprehensively addressed and prevented from being replicated.
Aside from ideological and political issues, AI is creating ripples in the physical sphere, and particularly in the world’s militaries. Drones guided by AI have received attention in the past few years for their role in Ukraine’s defense against Russia. In that conflict, it is becoming increasingly clear that manpower is no longer the only major indicator of fighting power. In fact, not only are combatants becoming less important, but strategy is also being handed over to AI models. In 2023, many major powers endorsed a statement affirming, among other things, that AI should be used in military operations only as part of a ‘responsible human chain of command and control’. However, the agreement acknowledges but does not prohibit automated weapons systems and other advancements that would limit human involvement in military action. Those capabilities seem somewhat inevitable, and given that AI has been shown to be fallible and biased, they are among the most concerning in the context of security and good governance.
Just as there are benefits and drawbacks to AI in international security, there are pros and cons to both approaches to regulating its role there. From an accelerationist standpoint, AI is already so bound up in the fabric of both defense and discourse that it would be impossible to remove, and probably foolish to do so, given that some of the existing issues with AI, like hallucinations and biases, can only be corrected by putting time and resources into advancing it. It isn’t going anywhere, so we might as well improve it rather than try to tamp down its use. This is an optimistic view, but it most likely oversimplifies things. Yes, AI can be corrected, but the ways in which we use it require restriction, not growth. In a post-9/11 world, over-surveillance is already an issue in democratic countries, not to mention authoritarian ones. Adding AI to the mix has the potential to cause significant problems, even if it helps streamline things. In the same vein, as many countries experience democratic backsliding and increased military engagement, AI may prove to cause just as much harm as good. To prevent that, it must remain just one of several tools in the human kit, rather than a solitary fix-all.
Image courtesy of Getty Images, © 2018. Some rights reserved.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of the wider St Andrews Foreign Affairs Review team.
