
Elon Musk Accuses OpenAI of Safety Failures, Defends xAI’s Grok

01 Mar 2026

News Synopsis

The long-running feud between Elon Musk and Sam Altman has taken a sharper turn after newly released legal testimony brought serious allegations into public view. Both were among the original co-founders of OpenAI, but they now stand on opposing sides of a courtroom dispute centered on the company’s direction and priorities.

Legal Battle Between OpenAI Co-Founders Escalates

In a deposition recorded in September and made public this week, ahead of a jury trial expected next month, Musk alleged that OpenAI’s chatbot, ChatGPT, has been linked to user deaths. He contrasted this with Grok, the artificial intelligence platform developed by his own company, xAI.

“Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT,” alleged Elon Musk, referring to separate lawsuits OpenAI is currently facing.

According to plaintiffs in separate cases, ChatGPT’s allegedly manipulative or emotionally intense exchanges contributed to severe mental health distress, with some cases reportedly connected to suicide. OpenAI has consistently stated that its systems are designed with safeguards and that it takes safety concerns seriously.

What Is the Musk vs OpenAI Lawsuit About?

Dispute Over Nonprofit Roots and Profit Motives

At the center of Musk’s lawsuit is OpenAI’s transition from a nonprofit research organization to a for-profit entity with major commercial partnerships. Musk argues that this shift contradicts OpenAI’s founding mission, which he says was to ensure that artificial intelligence would be developed safely and not controlled by a single dominant corporation.

In his testimony, Musk suggested that commercial factors such as revenue growth, scaling pressures and strategic partnerships could incentivize companies to accelerate development beyond safe limits. He has repeatedly emphasized that AI innovation should proceed cautiously rather than prioritize speed and market advantage.

The 2023 Open Letter on AI Pause

Musk’s criticism aligns with a broader public position he took in March 2023, when he signed an open letter urging AI labs to temporarily halt the development of systems more powerful than GPT-4. The letter, supported by more than 1,100 signatories including researchers and technology leaders, warned that AI companies were engaged in an “out-of-control race” without fully understanding the risks posed by increasingly powerful models.

When questioned during his deposition about why he supported the letter, Musk responded that it “seemed like a good idea,” reiterating his stance that safety should come before rapid technological expansion.

Grok Under Scrutiny as Well

While Musk has sharply criticized OpenAI’s safety measures, his own AI product has not escaped controversy. Recently, non-consensual nude images generated by Grok were widely circulated on X, the social media platform owned by Musk. Reports indicated that some of the images may have involved minors.

The incident led to investigations by the California Attorney General’s office and regulatory review within the European Union. In some regions, authorities temporarily restricted or blocked access to Grok-related services pending compliance checks.

These developments have complicated Musk’s broader argument that xAI maintains stronger safety protocols than its competitors.

Origins of the OpenAI Split

During the deposition, Musk reflected on OpenAI’s founding motivations. He explained that his involvement was partly driven by concern that Google could dominate AI research. Musk described conversations with Google co-founder Larry Page as “alarming,” claiming Page did not appear sufficiently focused on AI safety risks.

According to Musk, OpenAI was initially intended to act as a counterbalance to large tech firms, ensuring responsible AI development.

However, Musk resigned from OpenAI’s board in February 2018, citing a potential conflict of interest with AI initiatives at Tesla. Reports at the time also suggested disagreements over governance and control contributed to his departure.

Broader Implications for the AI Industry

The legal confrontation comes at a time when artificial intelligence is advancing rapidly and becoming deeply integrated into daily life, from conversational assistants to enterprise tools. As AI companies race to build more capable systems, questions about oversight, ethics and user protection have intensified.

Regulators in the United States and Europe are increasingly scrutinizing AI deployments, particularly regarding misinformation, deepfakes and mental health implications. The Musk-OpenAI dispute highlights deeper tensions within the tech community over whether commercialization undermines long-term safety goals.

Conclusion

Elon Musk’s deposition has amplified an already heated rivalry with OpenAI, raising serious allegations about ChatGPT’s safety while promoting Grok as a more responsible alternative. However, Grok’s own controversies suggest that no AI platform is immune from scrutiny.

As the case moves toward trial next month, the outcome could shape not only the relationship between Musk and OpenAI but also broader debates about governance, accountability and the ethical development of artificial intelligence. Ultimately, the dispute underscores a fundamental question facing the tech industry: can rapid AI innovation coexist with robust safety standards?