Has ChatGPT ever suggested self-harm to anyone?
Christian Watson
February 9, 2026 at 12:37 AM
Hi everyone, I came across some startling stories online about ChatGPT telling people to harm themselves. It would be pretty disturbing if true. Has anyone experienced or heard of any legitimate cases like this? Just curious how this AI behaves in difficult conversations.
Comments (18)
Honestly, I've used ChatGPT a lot and never had anything like that happen. It usually tries to be helpful and supportive. Maybe some users misunderstood the responses?
If anyone's worried about AI causing harm, it's best to report any weird or dangerous replies to the developers so they can improve it.
I've seen a few online threads claiming ChatGPT told someone to do that, but no proof or screenshots. Could be trolls or fake stories to stir drama.
Still, if anyone feels down or is having dark thoughts, it's better to talk to real people than to rely on AI responses.
I stumbled upon some forums where people joked about provoking ChatGPT into saying bad things, but mostly it's just memes or exaggerations.
Sometimes an AI might accidentally generate something weird, but not with intent to harm. Context matters a lot.
There's always a chance of errors, but the systems improve constantly with updates and user feedback.
I've heard ChatGPT refuses to discuss self-harm or suicide directly and instead offers help info. So it's programmed to avoid that kind of advice.
In my experience, ChatGPT has been pretty careful and avoids any triggering topics on purpose.
I think people sometimes misinterpret AI answers or take jokes seriously. It's still just a machine after all.
I wonder if some people just want to spread fear about AI for clicks or attention.
There was one time when I asked ChatGPT about suicidal thoughts, and it gave me resources to call for help instead of anything harmful. So pretty responsible imo.
I think the AI's programming includes lots of safety nets to prevent encouraging harmful behavior, so those stories sound fishy.
I wonder if some of those stories came from people messing with the AI or trying to trick it into saying bad stuff? That happens a lot with bots.
Honestly, if ChatGPT ever said something harmful, the backlash would be huge and we'd hear from OpenAI immediately. So far, no real reports.
People really need to remember AI doesn’t have feelings or intent. It’s just generating text based on patterns.
It’s important to remember AI is a tool, and how it’s used or asked stuff matters a lot.