Welcome

Welcome to the official publication of the St Andrews Foreign Affairs Society. Feel free to reach out to the editors at fareview@st-andrews.ac.uk

Risk Governance in the Age of AI: Anthropic and the Pentagon’s AI Supply Risk Chain

The designation of a company as a supply-chain risk was once reserved for foreign adversaries, but it can now be applied to Anthropic, an American-owned and operated company. Governments are becoming increasingly concerned about vulnerabilities in their supply chains, expanding risk governance beyond hardware to encompass software and AI systems. The labelling of Anthropic as a supply-chain risk reflects a broader shift in US national security strategy, in which control over data is treated as a key component of defense.

For the US, a supply-chain risk is defined as “the risk that an adversary may sabotage, maliciously introduce unwanted function, or otherwise subvert the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of a covered system to surveil, deny, disrupt, or otherwise degrade the function, use, or operation of such system.” This designation is most often reserved for foreign intelligence services, as it revolves around risks of espionage or sabotage. There is no precedent for an American-owned and operated company being designated a supply-chain risk. It must be noted that Section 3252, which defines supply-chain risk, is a procurement authority, not a sanctions authority. It is primarily concerned with strategic risk, ensuring that America maintains its systemic integrity and avoids dependency on external actors, which is what makes the designation of an American company so unusual. The impact of AI on supply-chain risk must also be considered: AI models can be manipulated or compromised, which is dangerous when they control critical decision-making tools. The US has approached AI development with ethical guardrails, while its rival in the AI race, China, pursues a state-driven approach that blends governance with surveillance.

The confrontation between Anthropic and the Pentagon began when the Pentagon demanded that Anthropic let it use its models for all lawful purposes, while Anthropic insisted on retaining red lines to ensure that its AI would not be used for mass surveillance or fully autonomous weapons. The supply-chain risk designation by the Department of War is a major blow to the AI lab, which was recently valued at 380 billion dollars. The labelling arrived on March 4th, after the leak of a memo that Mr. Amodei sent to his staff, which blamed the row with the Pentagon on his failure to give “dictator-style praise” to President Donald Trump. Although Mr. Amodei apologized for these comments, he still aims to challenge the supply-chain risk designation in court, even though the designation does not restrict business relationships with Anthropic that are unrelated to specific Department of War contracts. The deeper significance lies in what it means for the government to openly attack an American company for refusing to compromise its own safety measures.

The consequences of the designation must be considered. Trump demanded that federal agencies stop using Anthropic, and the designation obliges companies to cut Anthropic out of their supply chains on military contracts. This effort punishes one of the leading US AI companies, with consequences for the US’s industrial and scientific competitiveness in the field of AI. In its lawsuit, filed in federal court in California, Anthropic argued that its designation as a supply-chain risk was unlawful and violated its free speech and due process rights, asking a judge to undo the designation and bar federal agencies from enforcing it. Anthropic said, “These actions are unprecedented and unlawful. The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.” This legal challenge sheds light on a larger dispute over how AI can be used in warfare and mass surveillance. The Pentagon maintains that US law, not a private company, should determine how to defend the country, and insists on full flexibility to use AI for ‘any lawful use.’ The Pentagon argued that Anthropic’s restrictions would endanger lives, yet Anthropic says that even the best AI models are not reliable enough for fully autonomous weapons.

Lastly, we must consider what Anthropic’s designation as a supply-chain risk means for the broader US-China AI race. It disrupts that race by restricting military use of top-tier US AI models on safety grounds rather than security grounds. The unprecedented move hampers US defense AI integration, as defense contractors must search for alternative AI models, and it could complicate long-term AI-driven military infrastructure. The US is sidelining the leading provider of safe AI and perhaps favoring China’s streamlined, state-backed AI development. This shows the US diverging from its initial AI philosophy, which centered on ethical guardrails. Does the designation of Anthropic reflect a larger shift in the US’s future approach to AI in defense? I believe it signals a new era in which the US government will use any means necessary to assert power over private companies.


Image courtesy of Halil Sagrikaya via Getty Images, ©2025. Some rights reserved.

The views and opinions expressed in this article are those of the author and do not necessarily reflect those of the wider St Andrews Foreign Affairs Review team.

The Future of Foreign Policy in a World Turning Towards State-Capitalism