Welcome

Welcome to the official publication of the St Andrews Foreign Affairs Society. Feel free to reach out to the editors at fareview@st-andrews.ac.uk

Navigating the Dual Nature of AI: The Key Role of Regulation

On November 30, 2022, OpenAI, an AI research company, introduced a natural language processing tool called Chat Generative Pre-Trained Transformer (ChatGPT). The tool not only exhibits human-like capabilities, such as writing jokes, fixing computer code, and generating college-level essays, but also represents a paradigm shift away from traditional search engines. Within two months of its launch, ChatGPT reached 100 million users. However, our growing comfort with this new technology often blinds us to its inherent limitations and potential dangers, which must be managed with caution.

Technology can be a powerful asset when used with care, but it can turn catastrophic when caution is disregarded. The dual nature of artificial intelligence (AI) is demonstrated in the case of the “ChatGPT lawyer”, Steven Schwartz, who was found to have constructed a legal brief using fabricated judicial opinions and legal citations generated by ChatGPT. He argued that he was unaware of AI’s ability to fabricate cases and had regarded ChatGPT as a powerful “super search engine”. Indeed, beyond its potential for disseminating misinformation, AI exploits the fact that users are often unaware of its capacity to ‘lie’. But ChatGPT is not only misleading; it can also be misled. A video in which a user attempted to convince ChatGPT that “two plus two equals five” highlighted the unsettling fact that, unlike humans, AI lacks a sense of right and wrong, making it easy to manipulate. Together, these examples point to a worrying future in which ChatGPT, which is not even OpenAI’s best model, may mislead and confuse people, blurring the line between right and wrong, as it did for the ChatGPT lawyer and potentially will for many of us.

Fortunately, governments appear alert to the potential threats AI poses to humanity, and they are taking action to address these risks. In June, the European Union (EU) took a significant step toward one of the first major laws to regulate AI, the EU AI Act. The draft law would restrict the use of some of the “riskiest technologies” and require AI systems to disclose the data used in their programming. Its “risk-based approach”, which categorizes AI into four risk levels, reflects the EU’s view that AI presents greater risks than the benefits it claims to offer.

The United Kingdom (UK), on the other hand, has adopted a pro-innovation stance in its approach to AI regulation, conveying a more optimistic outlook on harnessing AI to boost the nation’s productivity. The latest Government White Paper, published in March, outlines the government’s aspiration to establish “clear and consistent guidelines” for AI management to support business investment and build confidence in innovation. Casting the UK as a “global AI leader”, Prime Minister Rishi Sunak hoped the public would see “how the benefits of AI can outweigh the risks”. In contrast to the EU’s predominantly cautious approach, the UK’s implies a positive view of AI’s future.

As demonstrated above, approaches to managing and regulating AI risks vary across governments and their leaders. AI has both benefits and drawbacks: while it carries risks, and even a potential threat to humanity if not managed properly, this does not mean we should stop the wider use of ChatGPT or halt AI development. Instead, the benefits it offers underscore that, to use AI safely and leverage its considerable potential for enhancing productivity, proper management is crucial to exploring its capabilities to the fullest extent.

One way to look at AI’s positives is by examining its real-life applications. In his article, Andrew Moore speaks of AI’s role in peacekeeping, suggesting that diplomats have used ChatGPT to prepare for negotiations. For example, the U.S. ambassador for cyberspace and digital policy, Nathaniel Fick, suggested that ChatGPT can generate briefings that are ‘qualitatively close’ to those prepared by his staff. Moreover, the increasing use of automated language processing tools during negotiations, such as Google’s language-translating glasses, reduces reliance on live interpreters. While these systems still require some level of human oversight, they point to a future where AI plays an increasingly significant role in diplomacy. In fact, Moore also proposed the concept of ‘AI hagglebots’: computers capable of identifying trade-offs and interests on their own. This envisions AI becoming an independent decision-making agent in negotiations rather than merely serving as a tool. While the notion might appear far-fetched, it highlights AI’s yet-to-be-explored potential.

Another major AI application is found in recent experimentation by the European Central Bank (ECB). In the past month, Eshe Nelson reported that the bank was exploring AI’s potential to enhance its understanding of inflation and to support its oversight of big banks. In a blog post, Myriam Moufakkir, the bank’s chief services officer, wrote that AI can assist in preparing briefings and summaries for policy and decision-making and in sorting the data needed for economic analysis. Yet, even as AI insights could contribute to monetary policy, the bank stressed that ultimate decision-making remains in human hands. Furthermore, Nelson writes that other central banks, such as the Federal Reserve Bank of New York and the Bank of England, also plan to increase their engagement with AI, including hosting AI conferences and using AI to analyze large data sets. Nevertheless, the potential hazards of AI deployment should not be dismissed. Jon Danielsson, a co-director of the Systemic Risk Centre at the London School of Economics, argued that AI may struggle with ‘macro problems’: infrequent and rare crises for which there are no past events to serve as reference points. Weighing these benefits and risks, Nelson concluded that the ECB is adopting a cautious approach to AI while aiming to accelerate its adoption in order to make the bank “modern and innovative”.

The exploration of AI's role across sectors underscores its undeniable influence in our world today. AI's potential benefits are vast, ranging from improved efficiency to innovative problem-solving. However, its rapid advancement also raises concerns about risks and limitations we must remain aware of. Despite these challenges, it is increasingly clear that AI's development is inevitable, and its potential to benefit humanity in countless ways is truly remarkable.

In light of this, I believe that the responsible approach to AI involves both regulation and cautious use. We should embrace AI's capabilities, recognizing its potential to revolutionize industries and improve our lives. We must also be aware of the dangers it poses, such as misinformation. Finding the right balance between leveraging AI's benefits and mitigating its risks is essential. It is our collective responsibility to foster a future where AI serves as a valuable tool and where its transformative power is managed with prudence and mindfulness.

Image courtesy of the White House via Wikimedia, ©2023. Some rights reserved.

The views and opinions expressed in this article are those of the author and do not necessarily reflect those of the wider St. Andrews Foreign Affairs Review team.

The German Zeitenwende is failing

Why the Growing Influences of BRICS on the World Stage Should Be a Concern