A federal judge on March 26 temporarily blocked the Department of War from continuing to designate Anthropic as a supply-chain risk.
The designation of the artificial intelligence (AI) company, under a federal law designed to protect military systems from foreign sabotage, functions as a blacklist, preventing it from doing business with the federal government and its contractors.
This is the first time the supply-chain risk designation, which is ordinarily aimed at terrorists, foreign intelligence services, and other hostile actors, has been applied to a U.S. company.
The block allows the company to continue doing business with federal agencies and contractors while the lawsuit moves forward in court.
U.S. District Judge Rita F. Lin in San Francisco issued a preliminary injunction after holding a hearing on March 24.
Lin stayed her new order for seven days, giving the federal government an opportunity to appeal.
President Donald Trump and War Secretary Pete Hegseth previously announced a federal boycott of Anthropic, directing federal agencies, contractors, and suppliers to end ties with the company.
On social media, Trump previously said Anthropic was attempting to “strong-arm” the federal government and officials elected by the American people by trying to dictate military policy.
The company’s lawsuit came after Anthropic said it declined to change the user policy for its AI product, Claude, to remove safety guardrails preventing its use for mass surveillance and fully autonomous weapons.
The Department of War has said that it has no plans to use Claude for those purposes.
Lin said in her ruling that Anthropic has stated Claude is not ready to be safely used in fully autonomous lethal weapons or for mass surveillance of Americans.
Anthropic says the government must agree not to use its product for such purposes, while the department says that it, not a private company, should decide which functions are safe for its AI tools to carry out, she said.
“This public policy question is not for this Court to answer in this litigation. It is the Department of War’s prerogative to decide what AI product it uses,” the judge said.
However, evidence shows that the department is punishing Anthropic for “criticizing the government’s contracting position in the press,” which constitutes “classic illegal First Amendment retaliation,” she said. The department’s own records indicate that it imposed the designation because of the company’s “hostile manner through the press,” she added.
Trump and Hegseth have publicly called Anthropic “out of control” and “arrogant,” characterizing the company’s “sanctimonious rhetoric” as an effort to “strong-arm” the government, Lin added.
The judge also held that the designation was likely illegal and arbitrary, finding that the department had not shown that the company’s insistence on usage restrictions could lead to future sabotage.
Anthropic has also filed a separate lawsuit over the designation in the U.S. Court of Appeals for the District of Columbia Circuit.
In that proceeding, the company is also seeking an order halting the designation.
This article by Matthew Vadum appeared March 26, 2026, in The Epoch Times. It was updated March 27, 2026.
