Key Takeaways
- The Defense Department’s Chief Technology Officer characterized Claude AI as having inherent policy biases that might undermine military capabilities.
- Anthropic became the first U.S.-based company to receive a supply chain risk designation from the Pentagon.
- Military contractors must now certify that they are not using Claude on any Department of Defense projects.
- The AI company filed a lawsuit against the Trump administration on Monday, calling the designation “unprecedented and unlawful” and warning that hundreds of millions of dollars in contracts are at risk.
- Palantir’s CEO Alex Karp revealed his firm continues deploying Claude for American military missions despite the restriction.
Earlier this month, the Defense Department took the unprecedented step of designating Anthropic as a supply chain security concern—a classification previously reserved exclusively for foreign entities and adversarial nations.
During a Thursday appearance on CNBC’s “Squawk Box,” Defense Department Chief Technology Officer Emil Michael outlined the rationale behind this historic decision. According to Michael, the issue stems from Claude’s foundational “constitution”—Anthropic’s guiding document that influences the AI model’s responses and decision-making processes—which he argues introduces ideological biases incompatible with military requirements.
“We can’t have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection,” Michael said.
The latest iteration of Claude’s constitutional framework was released by Anthropic in January 2026. According to the company, this document serves a “crucial role” in model training and “directly shapes Claude’s behavior.”
Under this new designation, all defense contractors and Pentagon suppliers must now provide certification that Claude is not being employed in any capacity for Department of Defense-related projects.
Michael emphasized the measure wasn’t intended as punishment and pointed out that government contracts represent just a “tiny fraction” of Anthropic’s total business operations.
Anthropic emerged in 2021 after its founders departed from OpenAI. The startup has successfully cultivated a robust enterprise client base, securing initial agreements with the Defense Department among other major organizations.
The company mounted a vigorous legal challenge to the Pentagon’s action. In its Monday lawsuit against the Trump administration, Anthropic characterized the supply chain label as both “unprecedented and unlawful.”
According to court documents, Anthropic argues it faces “irreparable” damage, with hundreds of millions in contractual agreements now hanging in the balance.
Pentagon Denies Active Outreach to Companies
Michael denied Anthropic’s allegations that government officials were proactively contacting businesses to discourage Claude adoption, dismissing the assertions as unsubstantiated “rumors.”
“The Department of War is not reaching out to companies to tell them what to do, so long as it’s not in our supply chain,” Michael said.
He conceded that phasing out Claude won’t happen instantaneously. The Pentagon has established a structured transition strategy, Michael explained, recognizing that extracting deeply embedded AI systems is significantly more complicated than uninstalling basic software.
Claude Still in Use for Military Operations
Interestingly, Claude remains operational in certain military applications. CNBC has previously documented the AI’s deployment in supporting American military activities in Iran.
On Thursday, Palantir CEO Alex Karp—whose company ranks among the nation’s largest defense contractors—acknowledged that his organization continues to use Claude.
Michael said the department cannot “just rip out” Anthropic’s technology immediately, confirming that the phased transition strategy is already being implemented.