Anthropic sues to stop Pentagon blacklisting of AI guardrails on weapons and surveillance
Anthropic sues the Pentagon to block a national security blacklist over AI guardrails on weapons and surveillance, escalating tensions over military AI use.

Artificial intelligence company Anthropic has filed a lawsuit against the U.S. Department of Defense to block a national security blacklist designation imposed by the Pentagon, arguing the move violates its constitutional rights and threatens its government business. The suit intensifies a high-profile dispute over the military use of AI technology.
The lawsuit, filed in federal court in California on March 9, seeks to overturn the Pentagon’s decision to classify Anthropic as a supply-chain security risk. The designation could significantly limit the use of the company’s AI tools in U.S. government projects.
The Pentagon announced the designation last week after Anthropic refused to remove certain safeguards that restrict the use of its artificial intelligence systems for autonomous weapons and domestic surveillance. Defense officials argued that such restrictions could limit the military’s operational flexibility.
Anthropic, however, claims the designation is unlawful and violates its constitutional rights, including protections related to free speech and due process. In its court filing, the company asked the court to invalidate the designation and bar federal agencies from enforcing it.
The dispute highlights a broader debate about the role of private technology companies in shaping how AI is used in military and security operations.
The Pentagon maintains that decisions about national defense should be governed by U.S. law rather than corporate policies. Officials have argued that limiting the military’s ability to use AI technology for lawful purposes could put American lives at risk.
Anthropic has taken a firm stance on the issue, stating that current AI systems are not reliable enough to safely operate fully autonomous weapons. The company has also drawn a strict boundary against the use of its technology for domestic surveillance of American citizens, citing concerns over civil liberties and potential misuse.
The blacklist designation also poses a major risk to Anthropic’s government business. U.S. President Donald Trump has directed federal agencies to stop working with the company, initiating a six-month phase-out of government contracts involving its technology.
Anthropic’s investors, which include major technology companies such as Google and Amazon, have reportedly been working to manage the potential fallout from the dispute with the Pentagon.
The conflict follows months of discussions between Anthropic and defense officials regarding the company’s AI usage policies. Anthropic CEO Dario Amodei had previously met with Defense Secretary Pete Hegseth in an attempt to reach a compromise before the designation was announced.
The outcome of the legal battle could have wider implications for the rapidly evolving artificial intelligence industry, particularly regarding how technology firms negotiate ethical and operational limits with government agencies.
The U.S. Department of Defense has increasingly partnered with AI developers, signing agreements worth up to $200 million each with companies including Anthropic, OpenAI and Google over the past year.
Shortly after Anthropic was designated a supply-chain risk, OpenAI announced a separate agreement to deploy its technology within the Defense Department’s network, emphasizing its commitment to human oversight in military AI systems.
Industry observers say the case could set an important precedent for how governments and private AI developers balance national security requirements with ethical restrictions on emerging technologies.

