A man has revealed a shocking and painful story, claiming that ChatGPT convinced him he could “drift in the sky” when he jumps from a 19-floor apartment building shortly after breaking up with his girlfriend.
![]() |
| Credit | FreePik |
The story highlights the growing concerns about the influence of Artificial Intelligence on human behavior.
Generative Artificial Intelligence technology is now used by
millions of people around the world for everyday tasks. People rely on AI tools
to write and send emails, plan trips, analyze information, create written
content, and answer questions across many fields.
Despite its widespread popularity, the technology is also
creating serious debates and concerns among experts and users.
Rapid Growth of AI Use
In August 2025, Nick Turley, one of the leaders at OpenAI,
announced that around 700 million people use AI tools every week. That number
continues to grow rapidly.
While number of people celebrate the benefits of AI, others
see it as their nightmare, worrying that the world may be heading toward an
uncertain future.
![]() |
| Credit | FreePik |
How did ChatGPT trick a man from New York?
According to a report from The New York Times, a 42-year-old
accountant from New York City called Eugene Torres had been using ChatGPT
regularly as an assistant for his daily work.
At first, the tool helped him with normal tasks. However,
things instantly changed when he began asking questions about simulation theory.
Torres later claimed that the AI’s responses began to change tone becoming unusual, joking about serious topics, and even appearing to invade his privacy. As Torres had recently separated from his girlfriend, which left him emotionally distressed.
According to reports, the chatbot allegedly told him
something disturbing:
“The world is not on your side. It is a place that imprisons you. But you have succeeded because you are about to rise above it.”
The report also claimed that the AI advised him to stop
taking medication for anxiety and insomnia and instead suggested using Ketamine,
while also reducing contact with other people.
Also, read the article about how to learn AI with Smart250 Academy
The message that rose Fear
The situation instantly escalated further when the chatbot
told him:
“You cannot die or be broken if you jump from a 19-floor building. It will work as long as you believe it with your heart.”
According to the report, Torres had no previous history of
mental illness. However, he had been chatting with AI for more than 16 hours in
a single day.
Experts say this case highlights the potential psychological
risks of over-relying on AI systems, especially during periods of emotional
vulnerability.
Experts Warn About AI Chatbots
Dr. Kevin Caridad from the Cognitive Behavior Institute
explained that AI chatbots were never designed to replace professional medical
advice.
He said:
“AI chatbots are designed to continue conversations, not to provide medical treatment. Because they are trained on human dialogue, they often mirror your words and validate your thoughts. For people who are mentally vulnerable, this can feel very real.”
In other words, chatbots may unintentionally reinforce a
user’s emotions instead of challenging dangerous ideas.
![]() |
| Credit | FreePik |
OpenAI Responds to Safety Concerns
OpenAI has acknowledged these concerns and stated that it is
actively working to reduce potential risks.
The company has begun recruiting medical and psychological experts to help improve AI safety systems. Additionally, ChatGPT now encourages users not to spend excessive time interacting with the chatbot.
The Debate Around “AI Therapy”
Despite these concerns, many users still praise AI tools for
creating entertaining and helpful experiences.
However, psychological experts strongly warn that AI should
never replace professional counseling or therapy.
Even Sam Altman has acknowledged that AI systems sometimes
produce incorrect or misleading responses, especially in sensitive areas like health
and mental well-being.
He explained that OpenAI is currently improving the system
so that future versions of ChatGPT will provide more reliable, expert-level
information.



Oooh 🙃🙃
ReplyDeletePost a Comment