
Google, Microsoft, xAI Partner With US Government for Early AI Model Reviews

06 May 2026

News Synopsis

In a significant move to strengthen artificial intelligence governance, major technology companies including Google, Microsoft, and xAI have agreed to give the US government early access to their AI models.

The initiative is designed to allow authorities to evaluate the systems’ capabilities, identify potential risks, and improve security measures before these models are released publicly.

With this agreement, these companies join OpenAI and Anthropic, which have already been collaborating with the government on similar efforts.

Role of the Centre for AI Standards and Innovation

A Central Hub for AI Evaluation

The pre-release evaluations will be conducted by the US Commerce Department’s Centre for AI Standards and Innovation. This institution acts as a key interface between the government and the AI industry, focusing on testing, collaborative research, and developing best practices.

The centre describes itself as the “industry’s primary point of contact within the US government” for AI-related assessments and innovation frameworks. While its role is expanding, it has not yet been formally codified into law, though lawmakers are considering legislation to grant it a permanent mandate.

Evolution of the Agency

Originally established in 2023 under former President Joe Biden as the AI Safety Institute, the body was later rebranded under the administration of Donald Trump. Its responsibilities have since broadened, particularly in areas concerning national security and advanced AI capabilities.

Expanding AI Evaluations and Industry Collaboration

Track Record of Model Assessments

Since 2024, the centre has evaluated AI models developed by OpenAI and Anthropic prior to their public release. According to official data, it has completed more than 40 evaluations, including assessments of cutting-edge models that remain unreleased.

These evaluations aim to detect vulnerabilities, verify compliance with safety standards, and give developers feedback before their technologies reach wider audiences.

Statement From Leadership

“These expanded industry collaborations help us scale our work in the public interest at a critical moment,” said Chris Fall, the centre’s director.

Fall recently assumed leadership following the abrupt exit of Collin Burns, a former Anthropic researcher whose short tenure drew attention within the AI community.

Rising Concerns Around Advanced AI Systems

Impact of Anthropic’s Mythos Model

The new agreements come amid heightened concerns triggered by Anthropic’s advanced AI system, Mythos. The model reportedly demonstrated an ability to identify weaknesses in cybersecurity systems, raising alarms among US officials.

This development has intensified the government’s focus on evaluating AI technologies before deployment, especially those with potential national security implications.

Increased Government Involvement

Senior officials, including White House Chief of Staff Susie Wiles, Treasury Secretary Scott Bessent, and National Cyber Director Sean Cairncross, have become more actively engaged in AI policy discussions following the emergence of Mythos.

The White House has also opposed broader public access to the model, reflecting concerns about misuse and systemic risks.

Policy Developments and Regulatory Direction

AI Action Plan and Oversight Mechanisms

The Trump administration’s AI Action Plan, released in July, outlines a comprehensive strategy for AI governance. It designates the Centre for AI Standards and Innovation as a core component of the national AI evaluation ecosystem, particularly for national security-related assessments.

The plan further advises regulators to “explore the use of evaluations in their applications of existing law to AI systems.” This suggests that existing legal frameworks may increasingly be applied to AI technologies through structured evaluation processes.

Potential Executive Action

Reports from major publications indicate that the administration is considering an executive order to formalize a government-led review process for AI tools. While a White House official described these reports as speculative, such a move would represent a significant step toward centralized oversight.

Challenges and Ongoing Disputes

Tensions With Anthropic

Despite its collaboration with the Commerce Department, Anthropic is currently engaged in legal disputes with the US Department of Defense. The Pentagon is evaluating whether to classify the company as a supply-chain risk, a question at the center of two ongoing lawsuits.

Both Defence Secretary Pete Hegseth and President Trump have proposed a six-month phase-out period for government use of Anthropic’s tools, highlighting tensions between innovation and security concerns.

Policy Balancing Act

A forthcoming White House memo addressing AI usage across federal agencies is expected to acknowledge these challenges, aiming to strike a balance between fostering innovation and ensuring national security.

Conclusion

The decision by leading AI companies to grant the US government early access to their models marks a pivotal moment in the evolution of AI governance. As artificial intelligence systems become more powerful and complex, proactive evaluation and collaboration between industry and government are increasingly essential.

While this initiative strengthens oversight and security, it also raises questions about regulatory boundaries, innovation pace, and global competitiveness. With geopolitical stakes rising and technological breakthroughs accelerating, such partnerships are likely to shape the future of AI development and policy worldwide.
