Key Points
- Federal authorities designated Anthropic’s AI systems as a supply-chain security threat, ordering all government agencies to cease using the technology.
- Hours later, the Pentagon finalized an agreement with OpenAI to integrate its AI systems into classified defense networks.
- A $200 million contract with Anthropic collapsed when the company declined to permit its AI to be used for autonomous weapons or widespread domestic surveillance.
- OpenAI says its Pentagon agreement contains the same restrictions Anthropic sought, though skeptics question the company’s commitment.
- Anthropic intends to challenge the security-risk classification in court, calling it legally baseless.
On Friday, federal authorities severed their partnership with Anthropic, classifying the AI firm as a supply-chain security threat. In a rapid turn of events, competitor OpenAI revealed a fresh agreement to integrate its AI technology into the Pentagon’s secure military systems.
President Donald Trump ordered all government departments to discontinue Anthropic’s services; those currently running the company’s Claude models have six months to complete their migration.
Defense Secretary Pete Hegseth declared via X that Anthropic represents a “Supply-Chain Risk to National Security.” Such classifications typically apply to entities connected with hostile nations such as China.
This designation may impact Anthropic’s commercial operations significantly. Defense contractors might need to demonstrate they’ve eliminated all Claude usage from their operations. Major stakeholders in Anthropic include Nvidia, Amazon, and Google.
Anthropic had been the first AI company to integrate its models into the Pentagon’s secure infrastructure. That July agreement carried a potential value of $200 million.
Negotiations collapsed when Anthropic declined to commit its AI to all lawful military applications without restriction. The company drew firm lines against autonomous weapons systems and large-scale domestic surveillance.
Military officials argued Anthropic should trust the armed forces to stay within legal bounds. CEO Dario Amodei said Thursday that his company “cannot in good conscience” accept such terms.
OpenAI Secures the Contract
OpenAI CEO Sam Altman revealed the Pentagon partnership Friday evening on X. He said the arrangement incorporates the same safeguards against mass surveillance and autonomous weapons that Anthropic had demanded.
Altman further stated OpenAI requested that authorities extend equivalent contract terms to competing AI firms. Elon Musk’s xAI had previously received military authorization for classified system integration.
OpenAI President Greg Brockman and his wife donated $25 million to a Trump-aligned political action committee last year. They continue to fund efforts aligned with Trump’s AI policy priorities in upcoming elections.
Anthropic Prepares Legal Response
Anthropic said it was deeply disappointed by the security designation and announced plans to challenge it in court. The company called the decision “legally unsound” and warned it sets a troubling precedent for American technology companies negotiating with the government.
The General Services Administration said it will remove Anthropic from its approved vendor catalogs for federal procurement.
Some commentators criticized OpenAI’s timing. Democratic figure Christopher Hale said on X that he had canceled his ChatGPT subscription and switched to Claude Pro Max.
Anthropic was founded in 2021 by researchers who left OpenAI over concerns it was deprioritizing safety. Both companies have raised tens of billions of dollars in recent funding rounds and are exploring initial public offerings.
The dispute also centers on a specific incident. After Claude was used during a Venezuelan operation in January, an Anthropic employee contacted a Palantir counterpart about how the technology was being applied. Pentagon leadership viewed the communication as inappropriate.
Anthropic maintains the conversation was routine technical coordination between partners.