AI company Anthropic and the Pentagon are at a standoff. Here is all you need to know.
The United States government is threatening to end military contracts with Anthropic unless the company opens its AI technology for unrestricted military use.
Anthropic, which makes the chatbot Claude, is the last of its major peers that has not supplied its technology to a new internal US military network.
CEO Dario Amodei has repeatedly made clear his ethical concerns about unchecked government use of AI, including the dangers of fully autonomous armed drones and of AI-assisted mass surveillance that could track dissent.
Anthropic won a $200 million (€167 million) contract from the US Department of Defence last July to “prototype frontier AI capabilities that advance US national security,” the company said at the time. It also inked a partnership with Palantir Technologies in 2024 to integrate Claude into US intelligence software.
Defence Secretary Pete Hegseth reportedly said on Tuesday he would end the $200 million (€167 million) contract and label the company a “supply chain risk” if Anthropic did not comply.
If Anthropic is designated a supply chain risk under US procurement law, the government could exclude it from contract awards, remove its products from consideration, and direct prime contractors not to use it as a supplier.
Reports about Hegseth’s meeting with Amodei, who cofounded Anthropic, also said that Hegseth threatened to use the Defense Production Act against the company. The law gives the US President broad authority to direct private companies to prioritise national security needs, including access to their technology.
Euronews Next reached out to Anthropic and the US Department of Defence to confirm the reports but did not receive immediate replies.
Anthropic rolls back core safety promise
Anthropic has pitched itself as the most responsible and safety-minded of the leading AI companies ever since its founders quit OpenAI to form the startup in 2021.
On Tuesday, Anthropic said in an interview with Time Magazine that it was dropping its pledge not to release an AI system unless it could guarantee its safety measures were adequate.
Instead, it launched a new version of its responsible scaling policy, which outlines the company’s framework for mitigating catastrophic AI risks.
Jared Kaplan, Anthropic’s chief science officer, told the publication that pausing new model training while competitors raced ahead without safeguards would not help the company keep up in the AI race.
“If one AI developer paused development to implement safety measures while others moved forward with training and deploying AI systems without strong mitigations, that could result in a world that is less safe,” Anthropic’s new policy reads.
“The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit.”
The new policy separates Anthropic’s hopes of bringing safety standards to the wider industry from its own goals as a company, where safety remains a priority.
Anthropic said the new policy means the company will set “ambitious yet achievable” safety roadmaps for its models and publish risk reports outlining anticipated dangers and whether a model’s release is justified.