Key Points
- San Francisco federal court granted preliminary injunction halting Pentagon’s prohibition on Claude AI technology
- Judge Rita Lin determined the restriction constituted “classic illegal First Amendment retaliation”
- Conflict originated when Anthropic declined to permit Claude’s deployment for autonomous lethal weapons or widespread surveillance operations
- Anthropic held a 32% share of the enterprise AI market in 2025, ahead of OpenAI’s 25%
- Injunction stayed for seven days to give the government time to appeal
A federal court has temporarily halted the Trump administration’s ban on federal agencies using Anthropic’s artificial intelligence systems, suspending measures the company said would cost it billions of dollars in revenue.
BREAKING: Anthropic has been GRANTED a preliminary injunction re: Pentagon ‘supply chain risk’ designation by Judge Rita Lin in California but is allowing a stay for one week https://t.co/1xk41AB5zQ
— Hadas Gold (@Hadas_Gold) March 26, 2026
US District Court Judge Rita Lin, presiding in the Northern District of California, granted the preliminary injunction Thursday. The order includes a seven-day stay to enable the government to pursue appellate options.
The legal battle stems from a July 2025 agreement between Anthropic and the Department of Defense. Under that arrangement, Claude would have become the first frontier AI system authorized for deployment on classified government networks.
Contract discussions collapsed in February 2026. Defense officials sought to revise terms, insisting Anthropic permit military deployment of Claude “for all lawful purposes” without operational limitations.
Anthropic declined the revised terms. The company maintained its technology must not support autonomous lethal weaponry or extensive domestic surveillance targeting American citizens.
President Trump issued an executive directive on February 27 ordering all federal agencies to stop using Anthropic’s products. In a Truth Social post, he accused the company of making a “DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.”
The Defense Department subsequently designated Anthropic a national security supply chain risk. On March 9, Anthropic sued in federal court in Washington, DC, arguing that Defense Secretary Pete Hegseth had exceeded his statutory authority.
Court Scrutinizes Government’s Justification
Judge Lin conducted a 90-minute hearing in San Francisco on March 24, challenging government counsel regarding whether Anthropic faced punishment for public criticism of Pentagon policies.
Lin’s judicial opinion stated the prohibition appeared disconnected from legitimate national security objectives. “If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude,” her ruling noted.
The judge further characterized the measures as “designed to punish Anthropic,” describing them as “arbitrary, capricious, and an abuse of discretion.”
During the hearing, an Anthropic representative emphasized that Pentagon officials retain the authority to examine any AI system before operational deployment. The representative added that Anthropic has no technical ability to remotely disable a model, alter its functionality, or monitor military use.
Competing Legal Arguments
Government counsel contended that Anthropic undermined trust during contract negotiations by attempting to influence Pentagon operational policies, and warned of potential “future sabotage” by the company.
Judge Lin dismissed this rationale, stating the Justice Department lacked any “legitimate basis” for concluding Anthropic’s ethical boundaries could transform it into a security threat.
According to Menlo Ventures data, Anthropic controlled 32% of the enterprise artificial intelligence marketplace in 2025, exceeding OpenAI’s 25% share. A comprehensive federal prohibition threatened to undermine this competitive advantage.
Anthropic said it was “grateful to the court for moving swiftly.” The company has also filed a separate challenge in a Washington, DC appellate court focused on alleged violations of procurement regulations.
The litigation is identified as Anthropic v. US Department of War, 26-cv-01996, US District Court, Northern District of California.