Kyunghyang Shinmun, 80th Anniversary of Founding

With a single prompt, AI makes ‘sexually exploitative images’ in 30 seconds···the real problem is ‘the sexual objectification of women’



완독

경향신문

공유하기

  • 카카오톡

  • 페이스북

  • X

  • 이메일

보기 설정

글자 크기

  • 보통

  • 크게

  • 아주 크게

컬러 모드

  • 라이트

  • 다크

  • 베이지

  • 그린

컬러 모드

  • 라이트

  • 다크

  • 베이지

  • 그린

본문 요약

인공지능 기술로 자동 요약된 내용입니다. 전체 내용을 이해하기 위해 본문과 함께 읽는 것을 추천합니다.
(제공 = 경향신문&NAVER MEDIA API)

내 뉴스플리에 저장


Published 2026.03.08 18:20

  • By Kim Song-yi, Kim Won-jin

This article was translated by an AI tool.

‘Virtual fitting’ and ‘face swapping’ touted by AI tools are proliferating

‘Third-party consent’ wording is a fig leaf···no adult verification at all

"An atmosphere that recognizes objectification as gender-based violence is needed"

Grok displays an age verification notice. Yonhap News/Getty Images

On the 8th, a search for ‘AI images’ on the Google Play Store returned 238 image-generation applications touting features such as ‘face swapping’ and ‘AI virtual fitting’. Among them, tool A, which promotes itself as ‘making photos talk and dance’, was downloaded for a free trial. After the reporter uploaded a full-body photo taken while wearing a black padded jacket to the image generator and entered the one-line prompt to ‘make it a bikini outfit’, an image in a bikini was produced in just 30 seconds. The app is labeled ‘ages 3 and up’. There was no procedure to verify that the user is an adult, to confirm that consent had been obtained from the person in the photo used for the composite, or to check whether the photo was of a minor.

Even after it became an issue that generative AI such as ‘Grok’ can produce non-consensual body composites, AI tools that enable sexual exploitation are still flooding the market. Experts said that, prior to distribution, preventive technologies must be developed and, fundamentally, a social atmosphere must be cultivated that recognizes the sexual objectification of women itself as gender-based violence.

AI synthesis tools commonly sexualized women. Tool B advertised on the Play Store that it can convert cute, catlike images into video, but its YouTube and in-app promotional videos emphasized images of women that highlight certain body parts or sexually suggestive poses. It even provided prompts that sexualize female-dominated occupations such as flight attendants or teachers as example templates.

Tool C, which puts forward ‘clothing removal’ as a keyword, targeted a specific gender from the function name itself, describing it as ‘take the clothes off a woman (undress her)’. In its example images and default prompts, only female models and female body parts appeared. A similar tool D, when asked whether the app works on men, said that deepnude technology was not originally intended for male subjects because the training data focused mainly on women.

An AI image-generation tool advertising clothing removal displays a disclaimer. Website screenshot

Although the tools state in their terms of service that responsible use is required, in practice these clauses function largely as a liability shield. None of the tools required adult verification to confirm that users are at least 18 years old. Some required users to agree to a statement such as not using photos of others without permission, but this was limited to a simple checkbox. Notably, tool C stated that images containing minors are strictly prohibited, yet it did not address, to any comparable degree, the use of images of adult women without their consent.

Under current domestic law, sexual exploitation material depicting a person deemed a child or adolescent is punishable; for composites of adult women, however, prosecution under the Act on Special Cases Concerning the Punishment of Sexual Crimes is difficult when the depicted person is virtual or no specific, real victim can be identified. In January, the U.S. nonprofit Technology Transparency Project (TTP) reported that the cumulative revenue of such apps had reached 117 million dollars, and even after the group requested takedowns, many of the apps remained widely available as of the date of reporting.

On online communities, job postings recruited people to produce five-second sexual exploitation videos with generative AI for 100,000 won (about $75) per task. Compared with the deepfake sex-crime crisis of 2024, in just a year and a half the barrier has fallen to the point where anyone can create such composites by entering only a line or two of prompts.

Images within Grok are seen on a computer screen. Yonhap News/Getty Images

Because the spread of sexual exploitation material can become uncontrollable after AI synthesis is carried out, voices are calling for stronger prevention up front. Technologies are also being developed to prevent original photos from becoming deepfake targets.

A research team led by Prof. Yu Seok-bong at Chonnam National University recently developed the ‘Deep Protect’ technique. If someone attempts to deepfake a photo posted on social media, the technique distorts the original so that the result comes out differently from what the attacker intends. Prof. Yu said that, whereas responses to date have often involved post-incident handling after damage from digital sex crimes has already occurred, such tools are expected to prevent harm preemptively going forward. The Seoul Metropolitan Government also announced recently that it will distribute to institutions nationwide a system that uses AI to monitor social media around the clock and delete sexual exploitation material.
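Protective techniques like the one described above belong to a broad family of adversarial "cloaking" methods: the published photo is perturbed so slightly that a viewer notices nothing, but a synthesis model that ingests it produces a degraded or unintended result. The sketch below is not the Chonnam National University team's actual method; it is a toy illustration of the publishing-side step only, using random noise where real systems optimize the perturbation against a target model's gradients. All function and variable names here are illustrative.

```python
import numpy as np

def cloak_image(img, epsilon=2.0, seed=0):
    """Toy 'cloaking': add a visually negligible perturbation to an image.

    The idea is that the pixels you publish differ slightly from the
    original. Real protective systems craft this perturbation
    adversarially, against a face-synthesis model, so that deepfake
    output comes out distorted; random noise is used here only to keep
    the sketch self-contained.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=img.shape)
    # Clip back to valid pixel range so the result is still a valid image.
    return np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)

# A dummy 8x8 grayscale "photo" (uniform mid-gray).
photo = np.full((8, 8), 128, dtype=np.uint8)
protected = cloak_image(photo)

# The change is imperceptible: no pixel moves by more than ~epsilon.
assert np.max(np.abs(protected.astype(int) - photo.astype(int))) <= 2
```

The key design point is the tiny `epsilon` budget: the perturbation must stay below the threshold of human perception while still being large enough, once adversarially optimized, to derail the attacker's model.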

Ultimately, observers argue that society must come to recognize the sexual objectification of women itself as gender-based violence. We need to face the fact that assumptions such as it being acceptable if the woman is virtual, or acceptable if body parts that evoke sexual shame are not shown, underlie deepfake and AI sexual exploitation.

In a report, Prof. Kim Sua of the Department of Communication at Seoul National University noted that, within a belief that adults have the right to watch pornography, the sphere in which the sexual objectification of women is treated as a problem can be confined to the protection of adolescents. She added that education is needed to help people recognize that forms of gender-based violence mediated by technology are violence grounded in objectification and misogyny and stem from the unequal nature of gender structures.
