Lawyer Stefanie Yuen Thio was hit with a deep sense of dread when a colleague informed her that suggestive videos and photos of her were being shared on TikTok. Despite knowing the images were fabricated, their hyper-realistic quality left her shocked and confused. Although the videos contained no nudity, the violation she felt was intense and palpable. Deepfakes are AI-generated videos, audio clips, or images that make individuals appear to say or do things they never actually did. While some use the technology for humour or art, most deepfakes today are non-consensual pornographic content, disproportionately targeting women.
The rise of deepfakes is staggering, driven largely by the easy availability of AI tools. According to Sumsub, a UK-based verification company, deepfake cases in the Asia-Pacific surged by 1,530% between 2022 and 2023, second only to North America. A 2019 report by Sensity AI found that 96% of deepfakes were non-consensual sexual content, with over 90% of victims being women. Alarmingly, AI-generated child sexual abuse material, mostly depicting young girls, has also skyrocketed: reported cases in the US jumped from 4,700 in 2023 to over 67,000 in 2024. In Singapore, SHECARES, a support centre run by SG Her Empowerment (SHE) and the Singapore Council of Women's Organisations, has handled more than 440 online harm cases since 2023, including deepfake pornography.
Despite the rising numbers, many incidents likely go unreported, a problem stressed by SHE's chief executive How Kay Lii. A 2023 survey by SHE found that young women aged 15 to 34 are twice as likely to face online sexual harassment, including being deepfaked or sent intimate images without consent, and over 70% of women aged 15 to 24 knew someone who had experienced such harassment. While female celebrities and politicians are prime targets, the threat is far broader. South Korea saw public outrage last year when AI-generated pornography of women spread widely, and in Singapore a sizeable group of male students at the Singapore Sports School, not just a few individuals, created and shared deepfake nudes of their female classmates.
The psychological impact of deepfake abuse is severe. Yuen Thio described how she initially struggled with feelings of self-blame despite knowing it was not her fault. Many victims report emotions akin to the trauma of physical assault: shame, guilt, shock, and helplessness. Clinical psychologist Mahima Didwania explained that although deepfakes involve no physical contact, the emotional distress can be just as real because the mind struggles to separate fake from real. Survivors often experience panic, anxiety, sleeplessness, and even thoughts of self-harm. The distress is compounded by a cruel paradox: the content is fake, yet the social consequences are very real, damaging reputations and relationships.
Reclaiming control becomes crucial for survivors. After the initial shock, Yuen Thio focused on concrete action, such as reporting the deepfakes to TikTok, while friends helped her monitor when the content was taken down. But removing content online is only the start; emotional processing and open conversation are equally important for healing. Regaining control looks different for each person: some pursue legal action, others speak out publicly, and some journal or confront their offenders. For Singaporean artist Charmaine Poh, reclaiming control took a different, more personal path.
In short, the growing prevalence of deepfake abuse shows how technology can be weaponised to harm, with women bearing the brunt. While legal frameworks and social awareness evolve, the psychological wounds run deep, demanding not just the removal of content but comprehensive support systems that help victims regain agency over their digital identities and mental well-being.