In a significant development at the intersection of artificial intelligence and national security, a federal court in Washington, D.C., has declined to block the Pentagon’s decision to blacklist AI firm Anthropic. The ruling marks an important moment in an ongoing legal battle that could shape how governments regulate advanced AI technologies and engage with private tech companies.
According to a news agency's report, the court’s decision allows the US Department of Defense to continue designating Anthropic as a “supply chain risk.” This classification effectively prevents the company from securing future Pentagon contracts and may influence other government agencies to follow suit.
The ruling is being viewed as a victory for the Trump administration, which has defended the designation as necessary to safeguard national security interests in critical technology sectors.
Anthropic has strongly contested the Pentagon’s decision, accusing US Defense Secretary Pete Hegseth of misusing his authority. The company claims the designation was imposed after it declined to comply with certain government demands, including relaxing safety restrictions on its AI systems.
The company has also raised constitutional concerns, arguing that the government violated its First Amendment rights. According to Anthropic, it was not given a fair opportunity to respond to the allegations, which it says undermines due process protections.
Acting Attorney General Todd Blanche welcomed the court’s decision, emphasizing its importance for national security.
In a post on X, he stated, “Today’s D.C. Circuit stay allowing the government to designate Anthropic as a supply chain risk is a resounding victory for military readiness.”
He added, “Our position has been clear from the start — our military needs full access to Anthropic’s models if its technology is integrated into our sensitive systems.”
These statements highlight the government’s focus on maintaining strict oversight of AI technologies used in defense applications.
The “supply chain risk” label could have far-reaching consequences for Anthropic. In addition to losing Pentagon contracts, the designation may lead to broader restrictions across other US government agencies.
This could result in significant financial losses, potentially amounting to billions of dollars in missed business opportunities.
Beyond the financial implications, the designation poses a reputational risk, as it signals government concern about the company’s reliability in handling sensitive technologies.
Interestingly, Anthropic continues to perform strongly in the commercial market despite the ongoing legal dispute.
Its suite of AI products—including the Claude app, Claude Code, and Claude Cowork—continues to see widespread adoption across industries.
Anthropic recently introduced its most advanced AI model, Claude Mythos, as part of Project GlassWing. This model is designed to support high-level applications, particularly in cybersecurity.
Major technology companies such as Google, Apple, and Nvidia are expected to integrate this AI model into their systems, highlighting its strategic importance in the tech ecosystem.
The case underscores increasing government scrutiny of AI companies, particularly those involved in sensitive or defense-related technologies.
It also raises critical questions about how to balance rapid technological innovation with national security requirements and regulatory compliance.
The court’s decision to let the Pentagon’s designation stand reflects the growing tension between technological innovation and national security concerns. While the government prioritizes control and oversight of AI systems used in defense, companies like Anthropic are pushing back to protect their rights and business interests.
As the legal battle continues, the outcome could set a significant precedent for how AI firms operate within regulated environments. For now, the case serves as a reminder that in the age of advanced AI, the intersection of technology, policy, and security will remain a complex and evolving landscape.