Images circulated as showing Venezuelan President Nicolas Maduro being seized and escorted by U.S. troops (left), and as a blizzard on Russia's Far Eastern Kamchatka Peninsula (right). Both are AI-generated fakes. Screenshots from X and Threads
One of the questions photojournalists are asked most often these days is “Is this photo real?” Most of the photos in question originate on social media. Whenever an incident draws international attention, images of unclear origin spread rapidly online. In the past, the main tactics were presenting old photos as if they depicted current scenes or compositing multiple images. Since the advent of artificial intelligence (AI), the deception has advanced to generating entirely new, outright false images.
AI-generated images typically contain a digital watermark that is invisible to the naked eye. Google DeepMind's synthetic identification technology ‘SynthID’ is a prime example. However, if an AI-made image is printed on paper and then scanned, the situation changes. Most of the digital traces used to judge whether an image was AI-generated disappear, making authentication virtually impossible.
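SynthID's design is not public, so it cannot be reproduced here, but the general idea of an invisible watermark can be illustrated with a deliberately naive stand-in: hiding a bit pattern in the least significant bits (LSBs) of pixel values. The sketch below (Python with Pillow and NumPy; the payload string and function names are made up for illustration) also shows why re-quantizing pixels, as happens in printing, scanning, or even a JPEG re-save, destroys such a mark.

```python
# Illustrative only: a naive LSB watermark, NOT SynthID (whose design is not public).
# Any re-quantization of pixels (print-and-scan, JPEG re-save) destroys bits hidden this way.
import numpy as np
from PIL import Image

PAYLOAD = "AI-GENERATED"  # hypothetical marker string

def embed_watermark(img: Image.Image, payload: str = PAYLOAD) -> Image.Image:
    """Hide the payload's bits in the least significant bit of the red channel."""
    arr = np.array(img.convert("RGB"))
    bits = [int(b) for byte in payload.encode() for b in f"{byte:08b}"]
    flat = arr[..., 0].flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits  # overwrite LSBs
    arr[..., 0] = flat.reshape(arr[..., 0].shape)
    return Image.fromarray(arr)

def read_watermark(img: Image.Image, n_chars: int = len(PAYLOAD)) -> str:
    """Recover the hidden string; garbage comes back if the LSBs were disturbed."""
    bits = np.array(img.convert("RGB"))[..., 0].flatten()[:n_chars * 8] & 1
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2) for i in range(0, len(bits), 8))
    return data.decode(errors="replace")

marked = embed_watermark(Image.new("RGB", (64, 64), "gray"))
print(read_watermark(marked))                 # "AI-GENERATED"
marked.save("tmp.jpg", quality=90)            # even a mild JPEG re-save...
print(read_watermark(Image.open("tmp.jpg")))  # ...typically garbles the payload
```

Production watermarks like SynthID are far more robust than this toy scheme, but the underlying limitation is the same: once the pixel values have been physically re-captured, the hidden signal may no longer survive.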
An image posted on January 3 (local time) purporting to show Venezuelan President Nicolas Maduro being captured and transported by U.S. forces. The image even shows edges as if it had been printed on paper. Truth Social screenshot
Venezuelan President Nicolas Maduro being held in a detention facility after arriving at Stewart Air National Guard Base in New York State, USA. The photo was used by Reuters, Boston NBC, WABC-TV, and others. Social media screenshot
Venezuelan President Nicolas Maduro and his wife Cilia Flores are transferred to federal court in New York on January 5. This photo was taken by freelance photojournalist Adam Gray; it is not AI-generated. EPA-Yonhap
A representative case from January involved photos tied to the U.S. ‘Maduro captured’ incident that spread via social media. A scene appearing to show him being led away at an airport by Drug Enforcement Administration (DEA) agents looked like a real photo, and someone overlaid news graphics, fueling even faster spread. The official White House account on X even reposted a lawmaker's post containing the image, and some overseas media cited it. Later, however, the image's creator, Ian Weber (@San_live/ X), acknowledged the AI compositing, and SynthID analysis likewise judged it to be a composite image. Debate nevertheless continues over the photo of President Maduro giving a thumbs-up: the only grounds cited for its credibility are that “some media used it” or that it was “posted by prominent figures on social media.”
A capture from a video that went viral claiming to show a blizzard on Russia's Far Eastern Kamchatka Peninsula, appearing to show people sliding down piled snow from a roughly 10-story building. It is an AI-generated fake video. Screenshot from X
An AI image of the Kamchatka Peninsula blizzard posted by Threads user @ibotoved. Screenshot from Threads
A photo provided by the Kamchatka information agency shows workers clearing snow on January 19. Xinhua-Yonhap
Photos that spread claiming record snowfall in Russia's Kamchatka region followed a similar pattern. Images raced across social media and gained credibility as media outlets quoted them. But they were exaggerated beyond the actual weather situation, and the creator (@ibotoved/ Threads) admitted they were composite images made with Grok, revealing them to be false.
The reason fake photos keep spreading is not simply their sophistication. Research suggests human visual perception itself has reached its limit. According to a survey by the Australian research firm Conjointly, people's accuracy in identifying AI images hovers around 50%, essentially no better than random guessing. Many respondents believed they could tell the difference, but their actual performance was little better than chance.
Photos from Conjointly's 2025 study on distinguishing real versus AI images. They recorded correct response rates of 31% and 35%, respectively.
Photos from Conjointly's 2025 study on distinguishing real versus AI images. They recorded correct response rates of 65% and 51%, respectively.
Photojournalists and professional verifiers typically assess authenticity using clues such as metadata about the shooting equipment, digital watermarks, and telltale compositional artifacts. For this reason, purveyors of fake photos often lower image resolution to blur these clues or strip the embedded information.
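As a concrete example of the first clue, the capture metadata (EXIF) that cameras embed can be inspected in a few lines; below is a minimal sketch using Pillow, with “photo.jpg” as a placeholder filename. EXIF is trivially stripped or forged, so absent or implausible metadata is a warning sign, never proof.

```python
# Minimal EXIF inspection with Pillow; "photo.jpg" is a placeholder filename.
# EXIF is easy to strip or forge, so this is a weak signal, never proof.
from PIL import Image
from PIL.ExifTags import TAGS

def capture_metadata(path: str) -> dict:
    """Return human-readable EXIF tags describing the shooting equipment."""
    exif = Image.open(path).getexif()
    wanted = {"Make", "Model", "Software", "DateTime"}
    return {TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()
            if TAGS.get(tag_id) in wanted}

print(capture_metadata("photo.jpg") or "No capture metadata found")
```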
According to a 2024 paper by researchers at Clarkson University in the United States (The Impact of Print-and-Scan in Heterogeneous Morph Evaluation Scenarios), AI manipulation detectors that achieved nearly 99% accuracy in the digital domain dropped to near-random performance on images that had undergone a print-and-scan process. In other words, however far detection technology advances, it faces hard technical limits once the original information has been degraded.
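The kind of degradation the paper measures can be roughly approximated in software. The sketch below is a crude stand-in for a physical print-and-scan pass, not the paper's protocol: slight optical blur, sensor-style noise, a small resample, and JPEG recompression, with all parameters chosen arbitrarily for illustration (“suspect.jpg” is a placeholder).

```python
# Crude software stand-in for print-and-scan degradation; parameters are
# illustrative guesses, not the Clarkson paper's protocol.
import io
import numpy as np
from PIL import Image, ImageFilter

def simulate_print_and_scan(img: Image.Image) -> Image.Image:
    out = img.convert("RGB").filter(ImageFilter.GaussianBlur(radius=1.0))  # ink/optics blur
    arr = np.asarray(out, dtype=np.float32)
    arr += np.random.normal(0.0, 4.0, arr.shape)                     # scanner sensor noise
    out = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    w, h = out.size
    out = out.resize((int(w * 0.97), int(h * 0.97))).resize((w, h))  # resampling loss
    buf = io.BytesIO()
    out.save(buf, "JPEG", quality=85)                                # scanner recompression
    buf.seek(0)
    return Image.open(buf)

degraded = simulate_print_and_scan(Image.open("suspect.jpg"))  # placeholder input
```

Each step in the pipeline plausibly erases a different class of forensic trace: blur removes high-frequency generation artifacts, noise and resampling disturb pixel-level statistics, and recompression overwrites the original encoder's fingerprint.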
In the end, the criterion for judging a photo's authenticity becomes context rather than technology: the timing and circumstances in which the photo appeared, whether it contradicts known facts, and the reliability of the source, all weighed together. Yet in fast-moving situations that are hard to verify, or in overseas events where information is limited, AI images are increasingly passed off as real photos.