The Pentagon is working with SpaceX and OpenAI to build secret networks, but it won’t work with Anthropic

The U.S. Department of Defense has officially accelerated its transition into an “AI-first fighting force” by signing landmark agreements with eight leading technology firms to deploy advanced artificial intelligence across its most sensitive networks. SpaceX, OpenAI, Nvidia, Google, Microsoft, Amazon Web Services, Oracle, and Reflection AI are all part of the partnership. The deals allow the military to run “frontier” AI models in Impact Level 6 (IL6) and Impact Level 7 (IL7) environments, the classified systems used for planning secret and top-secret missions, logistics, and weapons targeting. The Pentagon says this multi-vendor strategy is meant to prevent “vendor lock” and give soldiers a wide range of tools to help them make better decisions on the modern battlefield.

The most striking thing about the announcement, though, is the absence of Anthropic, the company behind the Claude AI model. Anthropic was once a core partner, but after a high-profile ethical dispute, Defense Secretary Pete Hegseth sidelined the company and labeled it a “supply-chain risk.” Reports say the rift began because Anthropic refuses to let its technology be used for mass domestic surveillance or the direct control of lethal autonomous weapons. OpenAI and the other firms have agreed to an “all lawful purposes” framework, backed by technical rather than legal rules. Anthropic’s insistence on specific safety guarantees, by contrast, led to the administration’s decision to blacklist it.

The Pentagon’s aggressive turn comes as its GenAI.mil platform has grown to a huge scale, with more than 1.3 million people already using AI agents to cut operational tasks from months to days. Even with the ban in place, officials say it will take at least six months to untangle Anthropic’s deeply embedded models from current systems. As the U.S. military commits further to this new “Arsenal of Freedom,” Anthropic’s exclusion underscores a growing divide between Silicon Valley’s safety-focused companies and a government that wants to wield AI as a weapon without strict ethical “vetoes.”
