Key Highlights
- The Trump administration faces a lawsuit from Anthropic following the Pentagon’s classification of the company as a “supply chain risk”
- The designation is unprecedented for an American company; it has historically been applied only to foreign threats
- The conflict stems from Defense Secretary Pete Hegseth’s requirement for Anthropic to eliminate usage limitations on its AI systems
- The AI firm maintained its position against enabling its technology for autonomous lethal weapons or widespread surveillance operations
- The company estimates the designation could cost it several billion dollars in 2026 revenue
Anthropic, the artificial intelligence company behind the Claude AI assistant, has sued the Trump administration in a California federal court. The suit follows the Pentagon’s designation of Anthropic as a “supply chain risk,” announced Monday.
The designation requires all defense contractors working with the military to certify that they are not using Anthropic’s technology. It is the first time an American company has received the label.
The dispute originated when Defense Secretary Pete Hegseth called on Anthropic to eliminate all operational restrictions from its AI systems. The company rejected this demand, maintaining its stance against allowing Claude to be deployed for autonomous lethal weaponry or comprehensive domestic surveillance programs.
These usage limitations were built into Anthropic’s original agreements with government agencies. The company emphasized that Claude has “never been tested” for such applications and expressed doubts about the model’s reliability in those contexts.
In July 2024, Anthropic secured a $200 million agreement with the Department of Defense. The company also achieved a milestone as the first AI laboratory to implement its technology within the Pentagon’s secure classified networks.
President Trump issued a social media directive ordering all federal agencies to “immediately cease” utilizing Anthropic’s technology. His statement characterized Anthropic as a “Radical Left AI company.”
Economic Consequences
Krishna Rao, Anthropic’s Chief Financial Officer, stated in court documents that the government’s measures could decrease the company’s 2026 revenue “by multiple billions of dollars.” The organization reports that current federal agreements are being terminated.
Commercial sector agreements face similar jeopardy. The company indicates that “hundreds of millions of dollars” in immediate revenue streams are at risk.
Anthropic has asked the court to vacate the supply chain risk designation and to temporarily block its enforcement while the case proceeds. The company has also submitted a parallel filing with the U.S. Court of Appeals in Washington D.C.
Over a dozen federal departments are listed as defendants, including the Treasury Department, the State Department, and the General Services Administration.
Tech Community Response
More than 30 AI researchers and engineers from OpenAI and Google submitted a legal brief in support of Anthropic on Monday. Google’s chief scientist Jeff Dean was among the signatories.
The collective expressed concerns that penalizing a prominent U.S. AI enterprise could undermine America’s competitive position in the technology sector.
Despite the blacklisting, Anthropic’s technology was reportedly used to support U.S. military operations in Iran, according to earlier CNBC coverage.
Amazon verified that Anthropic’s Claude continues to be accessible for AWS customers for non-defense applications.
Anthropic indicated it will maintain efforts toward governmental dialogue while pursuing the legal challenge.