Federal judge temporarily blocks the Pentagon from branding AI firm Anthropic a supply chain risk

Credit: Associated Press, Local 4

A federal judge has ruled in favor of artificial intelligence company Anthropic in temporarily blocking the Pentagon from labeling the company as a supply chain risk.

U.S. District Judge Rita Lin on Thursday said she was also blocking enforcement of President Donald Trump's social media directive ordering all federal agencies to stop using Anthropic and its chatbot Claude.

Lin said the “broad punitive measures” taken against the AI company by the Trump administration and Defense Secretary Pete Hegseth appeared arbitrary and capricious and could “cripple Anthropic,” particularly Hegseth's use of a rare military authority that has previously been directed at foreign adversaries.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” Lin wrote.

Lin's ruling followed a 90-minute hearing in San Francisco federal court on Tuesday at which Lin questioned why the Trump administration took the extraordinary step of punishing Anthropic after negotiations over a defense contract went sour over the company’s attempt to prevent its AI technology from being deployed in fully autonomous weapons or surveillance of Americans.

Anthropic had asked Lin to issue an emergency order to remove a stigma that the company alleges was unjustifiably applied as part of an “unlawful campaign of retaliation” that provoked the San Francisco-based company to sue the Trump administration earlier this month. The Pentagon had argued that it should be able to use Claude in any way it deems lawful.

Lin said her ruling was not about that public policy debate but about the government's actions in response to it.

“If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic,” Lin wrote.

Anthropic has also filed a separate, narrower case that is still pending in the federal appeals court in Washington, D.C. That case involves a different rule the Pentagon is using to try to declare Anthropic a supply chain risk.

Lin wrote that her order is stayed for a week and does not require the Pentagon to use Anthropic's products or prevent it from transitioning to other AI providers.

Anthropic said in a statement that it was “grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits.” The company said the case was necessary to protect its business and customers but it remains focused on “working productively with the government to ensure all Americans benefit from safe, reliable AI.”

The Pentagon didn't immediately respond to a request for comment about the ruling.

A number of third parties had filed legal briefs supporting Anthropic's case, including Microsoft, industry trade groups, rank-and-file tech workers, retired U.S. military leaders and a group of Catholic theologians.

---

O'Brien reported from Providence, Rhode Island.
