Online harassers are using AI tools to create more convincing death threats, posting hyper-realistic AI-generated images and audio to social media platforms

Key Insights
AI tools are now being used to create hyper-realistic death threats that combine fabricated images and audio, posted to social media, sharply increasing the severity and believability of online harassment.
Key stakeholders include harassment victims, social media companies, law enforcement agencies, and AI developers; peripheral groups include advocacy organizations and policymakers focused on digital safety.
Immediate impacts include heightened psychological distress for victims and greater content-moderation challenges for platforms, with potential cascading effects on user trust and engagement.
Historically, this situation parallels the deepfake concerns of 2018-2019, when rapid technological adoption outpaced regulatory frameworks and responses were reactive rather than proactive.
Looking ahead, optimistic scenarios involve AI-enhanced detection tools and improved digital literacy, while risk scenarios foresee escalating misuse that fuels further harassment and misinformation.
Recommendations for regulatory authorities include developing AI content verification standards, enforcing stricter platform accountability with clear compliance deadlines, and investing in victim support resources; these measures vary in complexity but collectively aim to mitigate harm while allowing safe innovation.