ISLAMABAD: About 1.2 million ChatGPT users show signs of suicidal planning or intent on the platform every week, its maker OpenAI said this week.
OpenAI has shared new data showing how often its chatbot, ChatGPT, is used in moments of deep distress. In its latest blog post, the company says that every week about 0.15 percent of its users show “explicit indicators of potential suicidal planning or intent.”
With roughly 800 million users each week, that translates into 1.2 million people talking to ChatGPT about suicide.
The company also found that another 0.07 percent, about 560,000 people, show possible signs of mental health emergencies such as psychosis or mania. OpenAI notes that these are small percentages, but across such a huge user base they represent a concerning number of people.
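The headline counts follow from simple percentage arithmetic. The sketch below assumes the roughly 800 million weekly users and the 0.15 and 0.07 percent shares cited in the post; it is an illustration only, not OpenAI's actual measurement method.

```python
# Back-of-the-envelope check of the figures cited in the article.
# Assumptions: ~800 million weekly users and the percentages from OpenAI's post;
# this is plain percentage arithmetic, not OpenAI's detection methodology.
WEEKLY_USERS = 800_000_000

shares_percent = {
    "explicit indicators of suicidal planning or intent": 0.15,
    "possible signs of psychosis or mania": 0.07,
}

for label, pct in shares_percent.items():
    count = round(WEEKLY_USERS * pct / 100)
    print(f"{label}: roughly {count:,} people per week")

# Output:
# explicit indicators of suicidal planning or intent: roughly 1,200,000 people per week
# possible signs of psychosis or mania: roughly 560,000 people per week
```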
Earlier this year, Arab News reported that Pakistan ranked among the top 20 countries for ChatGPT traffic, with thousands using it daily “to vent feelings, manage anxiety, and seek late-night reassurance when friends aren’t available.”
OpenAI cautions that “these are only estimates.” Detecting emotional distress through text is complex, and signals can easily be misread. But the company says it is working with doctors and psychologists to make ChatGPT more sensitive in how it responds.
To improve safety, OpenAI partnered with mental health experts around the world.
“We worked with more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress, respond with care, and guide people toward real-world support–reducing responses that fall short of our desired behavior by 65-80%,” the blog post said.
These experts helped train ChatGPT to recognise warning signs, provide gentle and supportive responses, and guide users toward local helplines. The company says unsafe or unhelpful replies have dropped by 65 to 80 percent after these updates.
The blog also highlights a new risk: people becoming emotionally dependent on the AI itself. Some users turn to ChatGPT for comfort, spending long hours chatting when they might be better served by reaching out to real people. OpenAI now encourages users to take breaks and to seek human support when needed.
The company insists ChatGPT is not a therapist. It can provide information and empathy, but it cannot replace professional care. Still, the data shows that people in crisis often reach out to what feels most accessible: an AI that listens without judgment.
“We believe ChatGPT can provide a supportive space for people to process what they’re feeling, and guide them to reach out to friends, family, or a mental health professional when appropriate,” the post said.