Sam Altman, the visionary behind ChatGPT and a prominent voice in the AI community, has urged users to approach artificial intelligence with significant caution. During the first episode of OpenAI’s official podcast, Altman specifically highlighted AI’s tendency to “hallucinate,” where it generates factually incorrect or misleading information with conviction. He expressed surprise at the “very high degree of trust” already placed in ChatGPT by the public.
Altman underscored that AI "should be the tech that you don't trust that much," directly challenging the prevailing assumption that AI outputs can be taken at face value. Coming from a leading developer, the warning highlights the need for users to verify information provided by AI chatbots, particularly in sensitive domains, since blind reliance on hallucinated outputs carries real risk.
Drawing on his experience as a new parent, Altman shared how he has used ChatGPT for practical advice on issues like diaper rashes and baby nap routines. The relatable example demonstrates AI's convenience, but it also serves as a cautionary tale: a hallucination on a critical topic could spread misinformation where accuracy matters most.
Furthermore, Altman addressed growing privacy concerns, acknowledging that OpenAI's exploration of an ad-supported model has raised new questions. These privacy discussions occur amid ongoing legal challenges, most notably The New York Times' lawsuit accusing OpenAI and Microsoft of using its content without permission. In a significant shift from earlier statements, Altman also indicated that new hardware would be essential for AI's widespread adoption, arguing that current computers were not designed for an AI-centric world.