As Hurricane Melissa swept through the Caribbean recently, a wave of misleading AI-generated videos flooded social media, sowing confusion and misinformation. One particularly viral clip shows four sharks supposedly swimming in the pool of a Jamaican hotel, a scene that’s entirely fabricated. Another video claims to show Kingston’s airport in Jamaica utterly destroyed by the storm, but this too is false. These videos, alongside others, have amassed millions of views across platforms like X, TikTok, and Instagram, blending with genuine footage from locals and making it tough for viewers to separate fact from fiction.
Many of these clips are either stitched together from old disaster footage or entirely crafted using AI video generators. Dana Morris Dixon, Jamaica’s education minister, expressed concern over the proliferation of such fake videos in numerous WhatsApp groups, urging the public to rely on official channels for accurate information. While misinformation during natural disasters isn’t new, the advent of advanced AI tools has made it easier than ever to create highly realistic synthetic videos, raising the stakes for those trying to combat falsehoods.
To avoid falling for these deepfakes, viewers are advised to check for watermarks indicating AI origins, such as logos from tools like Sora, the AI video generator recently launched by OpenAI. However, these watermarks can be cropped out or blurred, so people should also look for telltale traces: pixelation, discoloration, or blurred patches where a watermark may have been edited out. Scrutinizing the video closely can reveal other oddities, like the strangely shaped sharks in the pool video, or garbled text and objects that blend into one another, both typical signs of AI generation. Pay attention to branding as well, since AI tools often struggle to reproduce company logos exactly.
Experts warn that distinguishing real footage from AI deepfakes will only get tougher as technology advances. Hurricane Melissa marks the first major natural disaster since the launch of OpenAI’s latest video generation tool, Sora, which has lowered barriers for bad actors to produce convincing fake videos. Sofia Rubinson from NewsGuard points out that flaws like unnatural motion or distorted text are becoming less common, complicating detection for everyday viewers.
Regarding motives behind these crisis-related deepfakes, AI specialist Henry Ajder suggests that most are driven by the pursuit of clicks and engagement rather than political agendas. Platforms like X and YouTube reward users financially based on views and engagement, so creators can earn significant sums with minimal effort. Besides monetary gain, some creators aim to boost follower counts to promote products or projects. Ajder advises viewers to consider the source: accounts known for clickbait content should be treated skeptically. Interestingly, some creators openly use AI to experiment with attention-grabbing content rather than conceal their techniques.
The origin of the shark pool video remains unclear, though a version found on Instagram carries a watermark linking it to Yulian_Studios, a TikTok account from the Dominican Republic specializing in AI visual effects. Although that account doesn’t showcase the shark video on its page, it features other AI-generated clips related to the hurricane.
Ultimately, trusting your instincts and the surrounding context is key. If a video seems exaggerated or out of character, it’s wise to suspect a deepfake. Even audio is becoming more convincing, with synthetic voices that now match natural human cadence. Platforms like X offer community notes that flag suspicious content; a community note on the shark pool video, for example, clearly states that it is AI-generated misinformation. Most importantly, verify information with official sources such as the Jamaican government or the National Hurricane Center rather than relying on random social media posts.