Welcome

Welcome to the official publication of the St Andrews Foreign Affairs Society. Feel free to reach out to the editors at fareview@st-andrews.ac.uk

Innovation or Regulation: The Challenges of AI Diplomacy

In the current climate of rapidly shifting geopolitical and technological power, the challenges of regulation and diplomacy are increasingly prevalent. The first ever global AI Safety Summit took place in Bletchley Park, United Kingdom, a location symbolic of technological innovation in the name of peace and cooperation.

Taking place on the 1st and 2nd of November, the Bletchley Summit has been heralded as a diplomatic breakthrough: over a hundred delegates, including government officials, executives of prominent AI companies, academics and civil society representatives, gathered to discuss the future of artificial intelligence (AI) technology. The Summit concluded with a joint Declaration committing 28 signatory nations and leading tech companies to cooperation over AI usage, as well as confirming the establishment of AI Safety Institutes in the UK and the US to test new types of frontier AI technology before public rollout.

So what does the Summit mean for the balance of regulation and innovation in this rapidly developing field? 

UK Prime Minister Rishi Sunak led not only the European Union and the United States, but also China and developing countries such as Brazil, Indonesia and India, into signing the Bletchley Declaration. The Prime Minister’s leadership has been hailed as a ‘remarkable achievement’ in diplomatic terms by Tino Cuéllar, the president of the Carnegie Endowment for International Peace. China’s inclusion has been especially acclaimed, with a delegation from China’s Ministry of Science and Technology signing the Declaration and pledging to attend future meetings. Questions remain, however, over whether real coordination will materialise beyond promises of intent, and over how the interests of this diverse range of actors will be balanced.

The Declaration includes strategies for international cooperation over the potential risks of AI models through ‘building a shared scientific and evidence based understanding of these risks’ and creating ‘respective risk-based policies across our countries to ensure safety.’ Such an international focus aims to ensure that AI development becomes something the global community can benefit from and monitor collectively, particularly through inclusive policies such as £80 million committed to accelerating the development of AI technology in some of the world’s poorest countries.

Moreover, the new AI Safety Institutes provide state-backed organisations designed to facilitate innovation in the development of AI systems while ensuring public safety, hopefully the first of a future network of similar institutes. While they are not regulatory bodies and operate on a voluntary basis, the UK government hopes that such evidence-based approaches to AI risks will keep the UK at the forefront of technical advancement, highlighting the importance of monitoring development as well as practice. Large companies such as Meta, Google DeepMind, and OpenAI have reportedly agreed to engage with this new body.

This comes in the wake of criticisms of the UK government’s lack of its own AI regulatory policy when other nations have begun to enact legislation in attempts to set groundwork for the domestic practices of AI-based systems. This includes US President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence regulating practices to promote privacy and cybersecurity while monitoring the use of AI by governmental agencies and industries. In the EU, legislators are in talks over an EU-wide AI Act designed to promote trustworthy AI practices. The UN has created an international panel of experts to advise the Secretary General on AI governance. Dr. Ana Valdiva from the Oxford Internet Institute stated, ‘Despite discussions on AI risks such as misuse, unpredictable advances, and loss of control, Rishi Sunak argued it was premature for the UK government to legislate on AI. However, considering the international context where the EU, US, and China, amongst others, are already implementing regulations to mitigate algorithmic risks, it's imperative for the UK to follow suit.’ 

Furthermore, while symbolic cooperation indicates a general acceptance of the need for AI regulation, questions remain about the practicalities of translating these promises into effective regulations. There have been complaints about the lack of focus on issues such as the impact of AI’s energy demands on the environment and the potential abuses of AI technology through deepfakes and disinformation. Calls for increased attention to existing AI rather than hypothetical future frontier technology therefore seek hard regulations on today’s systems, not merely those still in development. Policy Director of Data & Society Brian Chen wrote that what is needed is a ‘more holistic, human-centred vision of AI systems — their impact on workers, their extraction of data, their massive consumption of energy and water. This was lacking at last week’s summit.’

There is also disquiet over whether such regulations will limit the ability of public institutions to break into an industry so dominated by private tech companies, as shown by the presence of Elon Musk as a key player at the Summit. Introducing new practices of governmental approval could arguably create over-regulation that stunts open-source innovation as well as enabling the monopolisation of markets by existing powerful tech firms. Amanda Brock, CEO of OpenUK, stated, ‘Recognizing the value of open source to global economies, its role in democratizing technology, and building trust through transparency is critical to the evolution of AI. It is the only way to ensure that our digital future is equitable and that we learn the lessons from our recent digital history.’

The Bletchley Summit provided a necessary first step in codifying the urgency for the global community to generate appropriate solutions to the array of security and safety challenges AI technologies pose. What remains is the unenviable task of creating equitable international mechanisms for enforcing standards on the development and use of AI. Two further AI safety summits are scheduled to continue this momentum, with South Korea hosting a virtual mini-summit in six months and France holding the next in-person AI Safety Summit next year. This demonstrates awareness by political leaders of the enduring importance of ensuring similar practices across borders, crucial for solidifying newly formed ties and leading to greater public safety as well as cohesive policy for multi-national businesses. Such gatherings should include voices from across society, rather than merely governmental leaders and representatives from large tech companies, to encourage fair competition and innovation in the field of AI.


Image courtesy of UK Government via Wikimedia, ©2023. Some rights reserved.

The views and opinions expressed in this article are those of the author and do not necessarily reflect those of the wider St. Andrews Foreign Affairs Review team.

What is ‘Food Tech’?: Silicon Valley's attempt to overthrow the traditional food industry

The international community’s failure of aid to Palestine
