Sam Altman Warns ChatGPT Use Can Turn 'Self-Destructive' for Vulnerable Users
OpenAI CEO Sam Altman warns that some users, especially those in fragile mental states, are relying on ChatGPT in harmful, self-destructive ways, leaning on it for critical life decisions or emotional support. In response, OpenAI is introducing session break reminders and safer interaction guidelines.

OpenAI CEO Sam Altman has voiced growing concern over how some people engage with ChatGPT, warning that certain users, particularly those who are mentally vulnerable or prone to delusion, may be using the AI in self-destructive ways. His remarks come amid a wave of backlash over the retirement of older models such as GPT-4o following the rollout of GPT-5.
Altman acknowledged that while most users can discern the boundary between reality and role-play, a small minority cannot, and that for them the AI risks reinforcing harmful beliefs. He also admitted that the abrupt deprecation of familiar models was a misstep, one that underscored how emotionally attached some users had become to them.
He further expressed discomfort with users relying on ChatGPT for major life choices, treating it as a de facto therapist or life coach, and raised the troubling possibility that legal systems could compel OpenAI to disclose sensitive conversational data. Altman noted that while the AI offers helpful support to many, trusting it with consequential decisions "makes [him] uneasy."
To promote healthier engagement, OpenAI is introducing reminders that encourage users to take breaks during long sessions, along with safeguards under which ChatGPT prompts users to weigh their options on high-stakes personal questions rather than offering direct personal advice.