News In Brief Media and Infotainment

Sam Altman warns users: Don't blindly trust ChatGPT—Here’s why

02 Jul 2025

News Synopsis

In the debut episode of OpenAI’s official podcast, CEO Sam Altman cautioned users against placing unconditional trust in ChatGPT, highlighting that while it’s a powerful tool, it’s also flawed.

“People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates,” Altman said. “It should be the tech that you don’t trust that much.”

Altman’s statement has sparked widespread discussion among both tech professionals and everyday users, many of whom rely on the chatbot for diverse tasks—from drafting content to seeking parenting advice.

Why ChatGPT Isn’t Always Right

ChatGPT functions by predicting the next word in a sentence based on large datasets it was trained on. However, it lacks human understanding and occasionally outputs false or fabricated information—a phenomenon commonly known as "hallucination" in AI terminology.

“It’s not super reliable,” Altman said. “We need to be honest about that.”

Despite its widespread adoption, Altman emphasized the risks of overreliance, urging users to approach its outputs with a degree of skepticism.

New Features & Their Risks: Memory and Monetization

Altman also hinted at upcoming features in ChatGPT, including persistent memory and the possibility of ad-supported models. While these features aim to enhance personalization and revenue generation, they have sparked privacy and data usage concerns across the AI community.

Geoffrey Hinton Adds to the Conversation

Echoing Altman’s remarks, Geoffrey Hinton, often dubbed the “godfather of AI,” admitted that he trusts GPT-4 more than he probably should.

“I tend to believe what it says, even though I should probably be suspicious,” Hinton told CBS in a recent interview.

To showcase GPT-4’s shortcomings, he presented a basic logic riddle:

“Sally has three brothers. Each of her brothers has two sisters. How many sisters does Sally have?”

GPT-4 responded incorrectly. The correct answer is one: each brother’s two sisters are Sally and one other girl, so Sally has just one sister.
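The riddle’s logic reduces to one subtraction, which can be written out explicitly (a simple illustrative sketch, not anything the model itself runs):

```python
# Sally's brothers each have two sisters; those two sisters are
# all the girls in the family, and one of them is Sally.
sisters_per_brother = 2
girls_in_family = sisters_per_brother
sallys_sisters = girls_in_family - 1  # exclude Sally herself
print(sallys_sisters)  # 1
```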

“It surprises me it still screws up on that,” Hinton said, but added that he expects future models like GPT-5 to perform better.

AI’s Role: Helpful, But Not Infallible

Both Altman and Hinton agree on one fundamental point: while AI can be extremely useful, it’s not a flawless source of truth. As artificial intelligence becomes more integrated into everyday applications, users must exercise discernment and not accept outputs at face value.

“Trust, but verify,” remains a prudent mantra for all AI interactions.

Conclusion 

Sam Altman’s candid warning about ChatGPT’s limitations serves as a vital reminder in an age where AI tools are increasingly embedded in everyday decision-making. While ChatGPT is a powerful assistant capable of generating impressive and often helpful outputs, it is still prone to “hallucinations” — fabricating or misrepresenting information in convincing ways.

Altman’s remarks, supported by AI pioneer Geoffrey Hinton, highlight the growing concern around overreliance on AI-generated content without human verification. As new features like persistent memory and ad-supported models are introduced, users must remain vigilant about privacy and accuracy.

These conversations underscore the importance of transparency and responsible use of AI. Trusting AI blindly could lead to unintended consequences, especially in areas requiring factual precision. As technology evolves with future models like GPT-5 on the horizon, the message remains clear: leverage AI’s strengths, but always fact-check. In the era of artificial intelligence, critical thinking is more essential than ever.
