News In Brief Business and Economy

Anthropic CEO Dario Amodei calls OpenAI’s Pentagon AI deal “safety theatre”

06 Mar 2026

News Synopsis

The growing competition among leading artificial intelligence companies has spilled into the public domain after Anthropic CEO Dario Amodei strongly criticised OpenAI’s recent partnership with the U.S. Department of Defense (DoD).

In an internal memo, Amodei accused the OpenAI leadership of misrepresenting the nature of its military partnership and described its approach to AI safety as little more than performative compliance.

The remarks surfaced shortly after OpenAI finalised a defense AI agreement with the DoD, following the cancellation of a similar contract previously held by Anthropic.

The controversy highlights a widening debate in the tech industry around AI safety, military use of artificial intelligence, and government influence in AI development.

OpenAI’s Pentagon deal sparks sharp criticism from Anthropic

According to reports, Anthropic CEO Dario Amodei accused the company led by Sam Altman of misrepresenting the motivations behind its agreement with the Pentagon.

Amodei calls OpenAI’s safety messaging misleading

In the internal memo referenced by The Information, Amodei stated that OpenAI’s description of the contract was inaccurate.

He described the company’s statements as “straight up lies” and labelled its approach to AI safety as “safety theatre.”

The remarks came after OpenAI secured the Pentagon contract just hours after Anthropic’s agreement was terminated, raising questions about the timing and motivations behind the deal.

Altman acknowledges deal looked rushed

Sam Altman later acknowledged publicly that the arrangement may have appeared opportunistic, admitting that the rapid agreement with the Pentagon looked sloppy and rushed, although he emphasised that OpenAI had established clear restrictions on the use of its AI systems in military applications.

Dario Amodei reveals main reason OpenAI accepted Pentagon deal

Anthropic claims focus on preventing misuse of AI

Amodei argued that the core difference between the two companies lies in their stance on preventing misuse of advanced AI technologies.

In the memo, he wrote:

"The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses.”

Anthropic had earlier indicated that it withdrew from the defense contract due to concerns that its AI models might eventually be used for:

  • Domestic mass surveillance

  • Development of autonomous weapons

  • Military decision-making systems lacking oversight

Pentagon reportedly refused strict safeguards

According to reports, the Pentagon declined to formally adopt restrictions proposed by Anthropic that would prevent such uses. Instead, the department insisted on retaining the option to use AI technologies for all “lawful” purposes.

OpenAI has since said its systems would not be used for such activities either. Altman recently shared an amended agreement clarifying certain red lines, although critics remain sceptical.

Allegations of political influence behind the contract dispute

Amodei points to campaign donations

In the same memo, Amodei suggested that political dynamics may have played a role in the government’s treatment of Anthropic.

He alleged that the company’s lack of political donations to the election campaign of Donald Trump may have contributed to tensions with the administration.

Amodei wrote:

“The real reasons [the Department of War] and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot).”

Reports indicate that OpenAI President Greg Brockman and his spouse donated $25 million to a Trump super PAC during the previous election cycle.

What happened to Anthropic’s Pentagon contract?

Contract termination and supply chain risk label

Anthropic’s defense contract was ultimately terminated by the Trump administration, after which the company was designated a supply chain risk.

This designation can severely restrict a company’s ability to participate in U.S. defense projects or work with contractors linked to government programs.

Pentagon official pushes back

The claims made by Amodei drew a strong response from U.S. defense officials.

Emil Michael, the U.S. Defense Undersecretary, dismissed Amodei’s allegations and publicly criticised him, calling him a “liar” with a “god complex.”

Michael also argued that Anthropic had attempted to impose unrealistic conditions on the military, including preventing the use of data obtained from publicly available sources such as LinkedIn.

Despite the dispute, reports indicate that the U.S. military has used Anthropic’s AI tools in operations related to strikes on Iran, highlighting the complicated relationship between AI firms and defense agencies.

The Pentagon is expected to complete a six-month transition period as it shifts from Anthropic systems to OpenAI’s models.

Tech industry group sends warning to the Pentagon

ITI raises concerns about supply chain risk designation

The conflict has also drawn attention from the wider technology industry.

The Information Technology Industry Council (ITI), whose members include companies such as Google and Nvidia, sent a formal letter to the Pentagon urging caution.

Although the letter did not mention Anthropic directly, it appeared to reference the company’s recent designation.

The letter stated:

“We are concerned by recent reports regarding the Department of War’s consideration of imposing a supply chain risk designation in response to a procurement dispute.”

Industry calls for dialogue instead of punitive measures

ITI argued that such designations should be used only in genuine national security emergencies.

The group added:

“Emergency authorities such as supply chain risk designations exist for genuine emergencies and are typically reserved for entities that have been designated as foreign adversaries.”

Companies labelled as supply chain risks are effectively barred from working with U.S. defense contractors, which could have major financial and reputational consequences.

Public backlash against OpenAI

Surge in ChatGPT uninstalls

OpenAI faced intense criticism on social media following news of the Pentagon partnership. According to reports, users began uninstalling the company’s popular chatbot, ChatGPT, in protest.

The number of uninstalls reportedly surged by nearly 300 per cent after the controversy.

Claude rises to the top of app charts

Meanwhile, Anthropic’s chatbot Claude experienced a dramatic rise in popularity.

The app climbed to the top of the U.S. App Store rankings and reportedly suffered outages on two occasions due to the sudden surge in user activity.

Amodei says public sees Anthropic as the “heroes”

Amodei acknowledged the wave of public support in his memo.

He wrote that while Altman was "presenting himself as a peacemaker and dealmaker," the public response told a different story.

Amodei added:

"I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with the DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!).”

However, he also expressed concern that some OpenAI employees might accept Altman’s narrative.

He wrote:

“It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees."

Conclusion

The dispute between Anthropic and OpenAI underscores the intensifying ethical and political debates surrounding artificial intelligence and its military applications. As governments increasingly explore AI for defense purposes, technology companies are facing difficult choices about how their systems can be used.

While OpenAI maintains that its partnership with the Pentagon includes strict safeguards, Anthropic’s leadership argues that such assurances are insufficient. The public backlash and industry intervention show that the broader tech ecosystem is closely watching how these powerful AI technologies are deployed.

Ultimately, the controversy highlights a key challenge for the AI era: balancing innovation, national security needs, and responsible governance of powerful technologies.

TWN Exclusive