In recent weeks, Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei
have been fighting over the Pentagon's use of Anthropic's AI, called Claude. Amodei has stuck to his demands:
no surveillance of Americans, and no lethal autonomous weapons lacking human control. On Tuesday,
Hegseth issued Anthropic an ultimatum: allow the Pentagon to use its AI for any purpose, or the Trump regime will invoke the Defense Production Act, forcing Anthropic to let the Pentagon use Claude for free while also putting all of Anthropic's government contracts at risk. Also on Tuesday, Anthropic said it was
modifying its Responsible Scaling Policy (RSP) to lower its safety guardrails.
The new version of the policy, which TIME reviewed, includes commitments to be more transparent about the safety risks of AI, including making additional disclosures about how Anthropic's own models fare in safety testing. It commits to *matching or surpassing* the safety efforts of competitors (emphasis mine). And it promises to "delay" Anthropic's AI development if its leaders both consider Anthropic to be the leader of the AI race and judge the risks of catastrophe to be significant.
Anthropic
says the change was motivated by a "collective action problem" stemming from the competitive AI landscape and the US's anti-regulatory approach. "If one AI developer paused development to implement safety measures while others moved forward training and deploying AI systems without strong mitigations, that could result in a world that is less safe," the new RSP reads. "The developers with the weakest protections would set the pace, and responsible developers would lose their ability to do safety research and advance the public benefit."
While this sounds reasonable enough in a vacuum, taken together with Hegseth's threats it significantly muddies the waters around safety: what consequences, if any, will there be if the Pentagon does cross those red lines and uses Claude for mass surveillance of Americans or for autonomous killing systems with no human in the loop?
Claude is currently the only AI model used for the military's most sensitive work. "The only reason we're still talking to these people is we need them and we need them now," a defense official
told Axios. "The problem for these guys is they are that good." Claude was reportedly used in the Maduro raid in Venezuela, a topic Amodei is said to have raised with Anthropic's partner Palantir (which then raised the issue with Hegseth).
What can you do? Call your senators and representatives now, today, and tell them you
don't want the Defense Department to take Anthropic's AI technology, and you
do want them to enact strict controls on future uses of AI.