“He's [President Trump] trying to teach the AI industry to fall into line like everybody else,” computer scientist Ben Goertzel tells Euronews.
The US government's ban on Anthropic appears punitive, following the company's public dispute with the Pentagon over its refusal to allow unrestricted military use of its Claude AI model.
Anthropic made its case before a San Francisco federal court on Tuesday, seeking an injunction against the US government's decision to blacklist it as a national security risk.
District Judge Rita F. Lin said at the outset of the hearing that "it looks like an attempt to cripple Anthropic," adding that she was concerned the government could be punishing Anthropic for openly criticising the government's position, US media reported.
US President Donald Trump and Defense Secretary Pete Hegseth publicly declared in February that the government was cutting ties with the artificial intelligence (AI) company after it refused to allow unrestricted military use of its Claude AI model. The restrictions in dispute bar the use of lethal autonomous weapons without human oversight and the mass surveillance of Americans.
In response, the US government labelled Anthropic a “supply chain risk to national security” and ordered federal agents to stop using Claude.
On March 9, Anthropic filed two lawsuits against the government over its designation as a supply chain risk. One seeks reconsideration of the supply chain risk designation; the other alleges the Trump administration violated the company's First Amendment right to free speech.
Lin told the courtroom that the Pentagon has a right to decide which AI products it uses, but she questioned whether the government broke the law when it banned agencies from using Anthropic and when Hegseth announced that those seeking relations with the Pentagon should cut ties with the company, NPR reported.
A lawyer for the government said the Pentagon's actions were not retaliatory and were based on how Anthropic's AI model could be used, not on the company's decision to go public about the disagreement.
NPR also reported the government's argument that Anthropic could pose a risk in the future because it could update its Claude AI model in a way that endangers national security.
Euronews Next reached out to Anthropic for comment but did not receive a reply at the time of publication.
What the ruling would mean for AI companies
The supply chain risk designation usually applies only to foreign companies.
"It seems inappropriate to apply that designation here,” said Ben Goertzel, a computer scientist and CEO of SingularityNet and The Artificial Superintelligence Alliance.
“It means the executive branch can just reinterpret words and laws however they feel like," he told Euronews Next.
Goertzel added that if the strongest version of the supply chain risk designation were applied, and it meant Anthropic could not sell software to any company that did any government business, that would be “extremely bad for the company”.
He said Anthropic would survive financially, as there is far more business outside of government projects, and “they would get a lot of boost among the percent of the country that's not a big fan of Donald Trump."
But he said the immediate impact of upholding the supply chain risk designation would be to “disincentivise other companies from standing up to the Trump administration”.
“He's [President Trump] trying to teach the AI industry to fall into line like everybody else," Goertzel said.
Judge Lin said she expected to make a ruling in the coming days on whether to temporarily pause the government's ban while the court continues to examine the broader case.