Kyunghyang Shinmun (80th anniversary)

From the ‘Maduro captured’ photo to the Kamchatka blizzard, AI images are proliferating... Can we tell real from fake?




Published Feb. 18, 2026, 10:50

  • By Han Su-Bin

This article was translated by an AI tool.

Images circulated as showing Venezuelan President Nicolas Maduro being seized and escorted by U.S. troops (left), and as a blizzard on Russia's Far Eastern Kamchatka Peninsula. They are AI-generated manipulated images. Screenshots from X and Threads


One of the questions photojournalists are asked most often these days is, “Is this photo real?” Most of the photos in question originate on social networking services (SNS). Whenever an incident draws international attention, images of unclear origin spread rapidly online. In the past, the main tactics were passing off old photos as current scenes or compositing multiple images; since the advent of artificial intelligence (AI), fabrication has moved on to generating outright false images from scratch.

AI-generated images typically contain a digital watermark invisible to the naked eye; Google DeepMind's synthetic-image identification technology ‘SynthID’ is a prime example. However, if an AI-made image is printed on paper and then scanned, the situation changes: most of the digital traces used to judge whether an image was AI-generated disappear, making authentication virtually impossible.

An image posted on January 3 (local time) purporting to show Venezuelan President Nicolas Maduro being captured and transported by U.S. forces. The visible edges suggest the image was printed on paper. Truth Social screenshot


Venezuelan President Nicolas Maduro being held in a detention facility after arriving at Stewart Air National Guard Base in New York State, USA. The photo was used by Reuters, Boston NBC, WABC-TV, and others. Social media screenshot


Venezuelan President Nicolas Maduro and his wife Cilia Flores are transferred to federal court in New York on January 5. This photo was captured by freelance photojournalist Adam Gray, not by AI. EPA-Yonhap


A representative case from January involved photos tied to the U.S. ‘Maduro captured’ incident that spread via social media. A scene that appeared to show him being led away at an airport by Drug Enforcement Administration (DEA) agents looked like a real photo, and someone added news graphics, fueling even faster spread. The White House's official X account even reposted a lawmaker's post containing the photo, and some overseas media cited it as well. Later, however, the image's creator, Ian Weber (@San_live on X), acknowledged the AI compositing, and SynthID analysis likewise judged it to be a composite image. Yet debate continues over the photo of President Maduro giving a thumbs-up: the only grounds cited for its credibility are that “some media used it” or that it was “posted by prominent figures on social media.”

A video capture that went viral claiming a blizzard on Russia's Far Eastern Kamchatka Peninsula, appearing to show people sliding down piled snow from a roughly 10-story building. It is an AI-generated fake video. Screenshot from X


An AI image of the Kamchatka Peninsula blizzard posted by Threads user @ibotoved. Screenshot from Threads


A photo provided by the Kamchatka information agency shows workers clearing snow on January 19. Xinhua-Yonhap


Photos that spread claiming record snowfall in Russia's Kamchatka region followed a similar pattern. The images raced across social media and gained credibility as media outlets quoted them. But they exaggerated the actual weather conditions, and the creator (@ibotoved on Threads) admitted they were composites made with Grok, revealing them to be false.

The reason fake photos keep spreading is not simply their sophistication. Research suggests human visual perception itself has reached its limit. According to a survey by the Australian research firm Conjointly, people's accuracy in identifying AI images hovers around 50%, essentially no better than random guessing. Many believed they could tell the difference, but their actual performance was not much different.
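To see why roughly 50% accuracy is statistically indistinguishable from guessing, consider an exact binomial test. The sketch below is illustrative only (the trial counts are hypothetical, not figures from the Conjointly survey):

```python
from math import comb

def binom_two_sided_p(k: int, n: int, p: float = 0.5) -> float:
    """Two-sided exact binomial p-value: the probability of an outcome
    at least as unlikely as k successes in n fair coin flips."""
    pmf = [comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(n + 1)]
    threshold = pmf[k]
    return min(1.0, sum(q for q in pmf if q <= threshold + 1e-12))

# Hypothetical: a participant labels 100 images and gets 52 right.
p_value = binom_two_sided_p(52, 100)
print(f"p = {p_value:.3f}")  # well above 0.05: consistent with pure guessing
```

A score near 50 out of 100 yields a p-value close to 1, meaning the data give no evidence the viewer can tell real from fake at all.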

Photos from Conjointly's 2025 study on distinguishing real versus AI images. They recorded correct response rates of 31% and 35%, respectively.


Photos from Conjointly's 2025 study on distinguishing real versus AI images. They recorded correct response rates of 65% and 51%, respectively.


Photojournalists and professional verifiers typically assess authenticity using clues such as metadata about the shooting equipment, digital watermarks, and unnatural compositional elements. For this reason, purveyors of fake photos often deliberately lower image resolution or strip such embedded information to frustrate verification.
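One of those clues, shooting-equipment metadata, can be inspected programmatically. A minimal sketch using the Pillow library (the file name is hypothetical; note that absent EXIF proves nothing on its own, since metadata is easily stripped or forged):

```python
from PIL import Image
from PIL.ExifTags import TAGS

def camera_metadata(path: str) -> dict:
    """Return human-readable EXIF tags describing the capture device.
    Genuine camera files usually record make, model, and timestamps;
    many AI-generated or re-saved images carry no EXIF at all."""
    with Image.open(path) as img:
        exif = img.getexif()
    wanted = {"Make", "Model", "DateTime", "Software"}
    return {TAGS.get(tag_id, tag_id): value
            for tag_id, value in exif.items()
            if TAGS.get(tag_id) in wanted}

# Hypothetical usage:
# print(camera_metadata("suspect_photo.jpg") or "no camera metadata found")
```

An empty result is only a prompt for further checking, not a verdict.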

According to a 2024 paper by researchers at Clarkson University in the United States (The Impact of Print-and-Scan in Heterogeneous Morph Evaluation Scenarios), AI manipulation detectors that achieved nearly 99% accuracy in the digital domain dropped to near-random performance on images that had undergone a print-and-scan process. In other words, even as detection technology advances, it cannot recover information once the original has been degraded.

In the end, the criterion for judging a photo's authenticity becomes context rather than technology. One must weigh, in combination, the timing and circumstances in which the photo appeared, whether it conflicts with reality, and the reliability of the source. However, in sudden situations that are hard to grasp or in overseas events where information is limited, cases of AI images being misused as real photos are increasing.
