As Hurricane Melissa swept across the Caribbean this week, social media platforms were flooded with AI-generated videos purporting to show the storm's devastating effects. Among the most viral clips was one depicting four sharks swimming in a flooded hotel pool in Jamaica; another showed Kingston's airport seemingly destroyed by the hurricane. Neither scene was real: both were fabricated with AI video generators. These videos have drawn millions of views on platforms such as X, TikTok, and Instagram, where they mix with genuine footage from local residents, causing widespread confusion.
In an effort to combat the spread of falsehoods, Jamaican officials, including Education Minister Dana Morris Dixon, have urged the public to rely on official channels for accurate updates. Misinformation during natural disasters is not new, but the rise of AI-generated deepfakes has sharply escalated the challenge. As AI tools improve, the telltale flaws that once let viewers easily spot fakes, such as unnatural motion or distorted text, are becoming less common.
Experts point out that Hurricane Melissa marks the first major disaster since the release of OpenAI’s latest AI video generation tool, Sora. This platform allows even casual users to create convincing synthetic videos with little effort. Sofia Rubinson of NewsGuard highlighted how such tools make it simpler for bad actors to churn out misleading videos that blend seamlessly with authentic content. This blurring of reality and fiction complicates efforts to verify information during crises.
Many of these AI-generated videos appear to be created not for political reasons but as clickbait designed to gain engagement and followers. AI expert Henry Ajder explained that users gain financially from high engagement on platforms like X and YouTube. With minimal work, viral videos can generate thousands of dollars in ad revenue. Additionally, accounts sharing these clips often aim to grow their follower base to promote various projects or products. Still, some creators are motivated by curiosity or experimentation with AI rather than deceit.
To spot deepfakes, viewers are advised to check for watermarks indicating AI origin, such as logos from Sora or other video generators. However, since watermarks can be edited out, it’s also important to look for inconsistencies like strange shapes, garbled lettering, or suspicious pixelation. Paying attention to context—whether the content seems plausible or exaggerated—is crucial. Platforms and users alike face the ever-growing challenge of distinguishing real disaster footage from AI-manufactured illusions.
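Beyond visible watermarks, some AI generators, including OpenAI's tools, embed C2PA Content Credentials (provenance metadata) in exported files. As a rough illustration of what an automated check might look for, here is a minimal Python sketch that scans a file for common C2PA/JUMBF byte signatures. This is an assumption-laden heuristic, not a real C2PA validator: the marker names are the standard ones, but absence of markers proves nothing, since metadata is easily stripped and social platforms routinely re-encode uploads.

```python
# Heuristic scan for C2PA/JUMBF provenance markers in a media file.
# NOT a full C2PA parser or validator; a missing marker does not mean
# the file is authentic, and a present marker does not prove who made it.

def has_provenance_markers(data: bytes) -> bool:
    """Return True if common C2PA/JUMBF byte signatures appear in the data."""
    markers = (b"c2pa", b"jumb", b"jumd")
    return any(m in data for m in markers)

def scan_file(path: str) -> bool:
    """Read a local media file and check it for provenance markers."""
    with open(path, "rb") as f:
        return has_provenance_markers(f.read())
```

In practice, anyone who needs a trustworthy answer should use official C2PA verification tooling rather than a byte scan like this, precisely because re-encoding and deliberate editing can remove these traces, which is why the contextual checks described above remain essential.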
One example is the shark pool video, which carries a watermark linked to a TikTok account named Yulian_Studios based in the Dominican Republic. While the exact creator of the viral clip is unknown, the account openly describes itself as an AI visual effects content creator. Other AI-generated clips from the same source show similarly surreal scenes involving hurricanes. In response, media watchdogs and journalism sites recommend trusting official sources and applying critical thinking to avoid falling for deepfake misinformation during emergencies.
As technology advances, the line between genuine and fake content will continue to blur, making digital literacy and verification skills more essential than ever. The public is encouraged to remain vigilant, question suspicious content, and prioritize information shared by credible authorities to reduce the spread and impact of AI-driven disinformation in times of crisis.