Federal appeals court allows Pentagon to designate Anthropic as a supply-chain risk

A federal appeals court in Washington ruled on April 8 that, pending full judicial review, the Department of War may designate Anthropic as a supply-chain risk, rejecting the artificial intelligence (AI) company's request to block the designation.

The ruling appears to conflict with an order by a federal district court in California last month that temporarily halted the designation while litigation plays out.

The designation, made under a federal law designed to protect military systems from foreign sabotage, functions as a blacklist, preventing the company from doing business with the federal government and its contractors.

The federal boycott of Anthropic directed federal agencies, contractors, and suppliers to end ties with the company. It was initiated after the company declined to change the usage policy for its AI product, Claude, to remove what the company described as safety guardrails preventing its use for mass surveillance and fully autonomous weapons. The Pentagon has said it does not intend to use Claude for those purposes.

On social media, President Donald Trump previously said Anthropic was attempting to “strong-arm” the federal government by dictating its military policy.

Late on April 8, a three-judge panel of the U.S. Court of Appeals for the District of Columbia Circuit ruled that it would not block the designation because Anthropic “has not satisfied the stringent requirements for a stay pending court review.”

The Department of War began using Claude for various military purposes in 2024, but on March 3 of this year, Secretary of War Pete Hegseth determined that procuring AI goods or services from Anthropic presented a supply-chain risk. He invoked Section 4713 of Title 41 of the U.S. Code to bar Anthropic from supplying goods or services to the department after the company declined to contractually authorize Claude’s use for mass domestic surveillance or autonomous weapons, according to the order.

As a result of the supply-chain risk designation, the department has terminated its contracts with Anthropic, begun removing Claude from its systems, and forbidden other contractors from using Anthropic as a subcontractor on department work, the panel said.

Anthropic argues that Hegseth’s Section 4713 designation violated the law and was unconstitutional and arbitrary. The company asked the circuit court to stay the designation pending review on the merits or to expedite consideration of the case on the merits.

“Anthropic’s petition raises novel and difficult questions, including what counts as a supply-chain risk under section 4713 and what qualifies as an urgent national-security interest justifying the use of truncated statutory procedures,” the panel wrote.

Although the company will likely experience “some degree of irreparable harm” if a stay is denied, granting a stay would force the federal government “to prolong its dealings with an unwanted vendor of critical AI services in the middle of a significant ongoing military conflict,” the panel said.

Moreover, the panel wrote, because the CEO of Anthropic has called the department’s statements on the ongoing dispute “completely false” and “straight up lies,” forcing the department to continue using the company’s technology seems to be “a substantial judicial imposition on military operations.”

“We do not lightly override the Department’s judgment on matters involving national security,” the panel wrote.

The panel denied Anthropic’s request to stay the designation, but agreed with the company that the case should be expedited because it has raised “substantial challenges” to the designation and “will likely suffer some irreparable harm during the pendency of this litigation.” The panel said it would take immediate steps to set up a briefing schedule for the case.

The panel’s order came after Judge Rita Lin of the U.S. District Court for the Northern District of California issued a preliminary injunction against the supply-chain risk designation on March 26. The designation in that case was issued separately under a different statute known as Section 3252 of Title 10 of the U.S. Code.

Lin said evidence showed that the department was punishing Anthropic for “criticizing the government’s contracting position in the press,” which constitutes “classic illegal First Amendment retaliation.”

The judge also held that the designation was likely illegal and arbitrary, and that the department had not shown that the company’s insistence on usage restrictions might lead to future sabotage.

Lin’s order allowed the company to continue doing business with federal agencies as the lawsuit continues.

The federal government appealed the ruling to the U.S. Court of Appeals for the Ninth Circuit on April 2.

Acting Attorney General Todd Blanche hailed the new ruling on X as “a resounding victory for military readiness.”

“Our military needs full access to Anthropic’s models if its technology is integrated into our sensitive systems,” he said.

An Anthropic spokesperson told The Epoch Times that the company is “grateful the court recognized these issues need to be resolved quickly and [remains] confident the courts will ultimately agree that these supply chain designations were unlawful.”

“While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI,” she said.

This article by Matthew Vadum appeared April 8, 2026, in The Epoch Times. It was updated April 9, 2026.