US Government Halts Anthropic Use, Mandates OpenAI Tools Across Agencies
News Synopsis
The U.S. government’s artificial intelligence strategy has entered a new phase after the Department of Defense (DoD) designated Anthropic as a “supply-chain risk.” The move has led to sweeping restrictions across federal agencies, effectively sidelining the AI company from government use.
The decision follows a broader policy shift under President Donald Trump, who has directed federal departments to discontinue the use of Anthropic’s AI tools. Meanwhile, OpenAI has strengthened its position within the federal ecosystem through a recent defense partnership, positioning its AI systems as the primary tools for multiple government agencies.
Federal Agencies Transition Away from Anthropic
Treasury Department Terminates Anthropic Usage
Treasury Secretary Scott Bessent publicly confirmed that his department is ending its reliance on Anthropic products. In a post on X (formerly Twitter), Bessent stated:
“The American people deserve confidence that every tool in government serves the public interest, and under President Trump, no private company will ever dictate the terms of our national security.”
The announcement signaled the administration's position that AI vendors must align fully with federal national security policies.
Health and Human Services Advises Shift to Alternatives
According to a news agency, the U.S. Department of Health and Human Services has instructed employees to transition from Anthropic tools to alternative AI platforms such as ChatGPT and Gemini.
This shift signals a broader, coordinated transition across civilian agencies toward approved AI providers.
State Department Mandates OpenAI Tools
The U.S. State Department has also formally recommended moving away from Anthropic systems. A memo reportedly stated:
“For now, StateChat will use GPT4.1 from OpenAI.”
The directive effectively standardizes OpenAI’s GPT-4.1 as the primary AI model supporting diplomatic communications and internal workflows.
Root Cause — Dispute Over Military AI Applications
Conflict Over Autonomous Weapons and Surveillance
At the center of the dispute is a policy disagreement between Anthropic and the Department of Defense regarding the acceptable uses of artificial intelligence.
The DoD reportedly sought greater flexibility in deploying AI for autonomous weapons targeting and large-scale surveillance systems. Anthropic, however, insisted on strict limitations governing such applications.
Anthropic’s position aligns with its publicly stated AI safety principles, which emphasize constrained and responsible deployment. The government, on the other hand, prioritized operational flexibility in defense contexts.
OpenAI Steps In With Defense Deal
With Anthropic declining to modify its contractual stance, OpenAI entered into an agreement with the Department of Defense. This deal significantly expanded OpenAI’s footprint in federal operations.
The agreement has drawn attention within the tech community, especially given ongoing debates about military uses of AI systems and ethical boundaries in autonomous technologies.
Tech Industry Pushback Against “Supply-Chain Risk” Label
Open Letter to Congress and the Department of War
In response to Anthropic's designation, 121 signatories from major technology firms, including OpenAI, Slack, Cursor, and IBM, sent an open letter urging the Department of War and Congress to reconsider the label.
The letter stated:
“We strongly believe the federal government should not retaliate against a private company for declining to accept changes to a contract.”
The signatories argue that labeling Anthropic as a supply-chain risk sets a concerning precedent for public-private partnerships in advanced technology sectors.
Concerns Over Government Pressure on Tech Firms
The letter further urged lawmakers to:
“examine whether the use of these extraordinary authorities against an American technology company is appropriate.”
Industry leaders expressed concern that penalizing a firm for declining contractual revisions could pressure other companies into complying with future government demands out of fear of exclusion.
It added:
“The United States is winning the AI competition because of its commitment to free enterprise and the rule of law; undermining that commitment to punish one company is short-sighted and antithetical to our national security interests.”
Market Reaction and User Migration Trends
While Anthropic faces exclusion from federal contracts, reports indicate that its Claude AI platform has seen spikes in incoming user migration. Analysts suggest that heightened public attention and debate around AI ethics may have driven renewed interest among private users.
At the same time, OpenAI continues expanding enterprise and government integrations, reinforcing its position as a dominant AI infrastructure provider.
Broader Implications for AI Governance
This episode highlights a growing tension between:
- National security priorities
- Corporate AI safety frameworks
- Ethical AI deployment standards
- Competitive dynamics in the global AI race
As governments increasingly rely on AI systems for defense, diplomacy, and public administration, disagreements over acceptable use cases are likely to intensify.
The controversy also raises key questions:
- Should AI companies be required to align fully with defense objectives?
- Where should ethical boundaries be drawn in military AI use?
- Can safety-driven AI firms operate independently within national security frameworks?
Conclusion
The designation of Anthropic as a “supply-chain risk” represents more than a procurement decision—it signals a pivotal moment in the evolving relationship between government and AI developers.
As federal agencies pivot toward OpenAI tools, the broader technology community is debating whether the move protects national interests or risks undermining principles of free enterprise and corporate autonomy.
The outcome of this dispute could shape future AI policy, defense partnerships, and the regulatory landscape for advanced artificial intelligence in the United States.