Sunday, March 29, 2026

“Judge Halts Pentagon’s Blacklisting of Anthropic”


A United States judge has temporarily halted the Pentagon’s blacklisting of Anthropic, marking a significant development in the company’s ongoing dispute with the military regarding the safety of artificial intelligence (AI) in combat settings.

Anthropic, in its lawsuit filed in a California federal court, contends that U.S. Secretary of War Pete Hegseth exceeded his authority by designating the company as a national security supply-chain risk. This designation is applied to companies that could potentially expose military systems to infiltration or sabotage by adversaries.

The company alleges that the government’s action infringed upon its right to free speech under the First Amendment and denied it the opportunity to challenge the designation, violating its Fifth Amendment right to due process. U.S. District Judge Rita Lin, appointed by former President Joe Biden, concurred with Anthropic in a 43-page ruling. However, the ruling will not go into immediate effect to allow the administration time to appeal.

Hegseth’s decision came after Anthropic opposed the military’s request to use its AI chatbot, Claude, for U.S. surveillance or autonomous weapons, resulting in the company being blocked from certain military contracts. Anthropic executives estimate that this move could lead to significant financial losses and damage to the company’s reputation.

Anthropic argues that AI models are not sufficiently reliable for use in autonomous weapons and opposes domestic surveillance as a violation of rights. The Pentagon asserts that private companies should not restrict military activities but clarifies that it has no interest in employing AI for unauthorized purposes.

In her ruling, Judge Lin indicated that the government’s actions seemed aimed at punishing Anthropic rather than safeguarding national security interests as claimed. She noted, “The record suggests that Anthropic is being penalized for criticizing the government’s contracting stance publicly,” calling it a violation of the First Amendment.

Anthropic’s spokesperson, Danielle Cohen, expressed satisfaction with the court’s decision, emphasizing the company’s commitment to collaborating with the government for the benefit of all Americans through safe and reliable AI development.

The designation of Anthropic as a supply-chain risk marks the first time a U.S. company has received such a public designation under a government-procurement statute aimed at shielding military systems from foreign sabotage. The company's lawsuit challenges the legality of the decision, citing a lack of evidence and inconsistency with the military's previous commendations of Claude.

The Justice Department countered Anthropic’s arguments by suggesting that the company’s reluctance to adhere to contractual terms could create uncertainty within the Pentagon and potentially disrupt military operations involving Claude. The government maintains that the designation was prompted by contractual disputes, not Anthropic’s stance on AI safety.

Additionally, Anthropic faces another legal battle in Washington over a separate Pentagon supply-chain risk designation that could bar it from civilian government contracts.
