Hallucinations & Hype: Sam Altman Urges Users to Trust ChatGPT with Caution
Updated: July 03, 2025 11:35

Image Source: Hindustan Times
OpenAI CEO Sam Altman has issued a candid warning to ChatGPT users, urging them not to place blind trust in the AI chatbot. Speaking on the inaugural episode of OpenAI’s official podcast, Altman said, “People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don’t trust that much.”

Altman’s remarks come amid growing reliance on ChatGPT for tasks ranging from writing and research to parenting advice. He emphasized that while the tool is powerful, it can generate convincing but false or misleading information — a phenomenon known in AI circles as “hallucination.”
He also acknowledged the need for transparency and user education, stating, “It’s not super reliable… we need to be honest about that.” Altman’s comments echo similar concerns voiced by AI pioneer Geoffrey Hinton, who admitted he sometimes trusts GPT-4 more than he should — even after it failed a basic logic riddle.
Altman also discussed upcoming features such as persistent memory and ad-supported models, which aim to improve personalization but have sparked fresh debates around privacy and data usage.
Key Highlights:
  • ChatGPT can hallucinate, producing plausible but incorrect answers.
  • Altman urges users to verify AI-generated content.
  • New features like memory and ads raise ethical questions.
  • AI literacy and critical thinking are essential as usage grows.
  • Altman’s message is clear: ChatGPT is a powerful assistant, not a flawless oracle.
Sources: Gizbot, Hindustan Times, India Today
