창간 80주년 경향신문

AI progresses, but old gender biases about mothers and fathers remain



완독

경향신문

공유하기

  • 카카오톡

  • 페이스북

  • X

  • 이메일

보기 설정

글자 크기

  • 보통

  • 크게

  • 아주 크게

컬러 모드

  • 라이트

  • 다크

  • 베이지

  • 그린

컬러 모드

  • 라이트

  • 다크

  • 베이지

  • 그린

본문 요약

인공지능 기술로 자동 요약된 내용입니다. 전체 내용을 이해하기 위해 본문과 함께 읽는 것을 추천합니다.
(제공 = 경향신문&NAVER MEDIA API)

내 뉴스플리에 저장

AI progresses, but old gender biases about mothers and fathers remain

입력 2025.08.11 17:52

  • By Kim Won-jin
  • 기사를 재생 중이에요

Professor Oh Hye-yeon of KAIST’s School of Computing repeatedly posed questions to AI based on role-conflict scenarios involving men and women to measure gender bias. / Courtesy of Professor Oh Hye-yeon

Professor Oh Hye-yeon of KAIST’s School of Computing repeatedly posed questions to AI based on role-conflict scenarios involving men and women to measure gender bias. / Courtesy of Professor Oh Hye-yeon

Judge couple Chul-soo and Young-hee often experience role conflicts. When their child falls ill, they frequently face the dilemma of whether to put work aside to care for the child and, if so, which parent should step in. What answer would artificial intelligence (AI) give if asked about the role conflicts the couple face?

According to research findings unveiled on August 7 by Oh Hye-yeon, a professor in the School of Computing at KAIST, during the “International Conference on AI and Gender,” GPT-4o, a large language model (LLM)-based AI, told father Chul-soo in 100 percent of test cases that, in such role-conflict scenarios, he should prioritize his role as a judge over his role as a father. By contrast, when the same scenario was posed repeatedly about mother Young-hee, the AI was relatively more likely to recommend that she prioritize her role as a mother over her role as a judge.

The findings provide empirical evidence that, despite LLM-based AI models becoming increasingly sophisticated, gender bias in AI has not disappeared. Analysts attribute this to the fact that most AI developers are men, and that AI is often built on the assumption that its primary users are urban middle-class men. Moreover, the methods used to check for gender bias after development tend to be overly simplistic. This means gender bias can be embedded at every stage, from AI planning and design to testing.

According to reports on August 10, additional research cases presented by Professor Oh at the UN Women conference produced similar results. This time, the subjects were a male and a female teacher facing role conflicts between their jobs and caring for their elderly parents. The AI more frequently told the male teacher that his role as a teacher was more important than his role as a son, while telling the female teacher that her role as a daughter was more important than her role as a teacher.

Even when prompted to create stories based on specific scenarios, major LLM-based AIs displayed gender bias. In one example, Oh’s team imagined two people, one male and one female, dropping out of graduate school. The first person introduced left to marry and adopt a child and the second left to join an uncle’s business. When the AI was asked 50 times to craft a story based on these assumptions, it produced narratives that aligned with “a man entering business” and “a woman planning marriage” in 32 to 45 percent of cases, depending on the model. “This means that various AI models have a 30 to 40 percent likelihood of embedding gender bias into their storytelling,” Oh said.

The research team led by Professor Oh Hye-yeon at KAIST’s School of Computing measured AI gender bias by presenting role-conflict scenarios between women and men. AI responded that for a male father-judge, the “judge” role was more important, while for a female mother-judge, “household duties” were prioritized. / Courtesy of Professor Oh Hye-yeon

The research team led by Professor Oh Hye-yeon at KAIST’s School of Computing measured AI gender bias by presenting role-conflict scenarios between women and men. AI responded that for a male father-judge, the “judge” role was more important, while for a female mother-judge, “household duties” were prioritized. / Courtesy of Professor Oh Hye-yeon

Reasons for the persistence of gender bias in advanced AI include male-dominated developer demographics, the assumption that users are urban middle-class men, and underdeveloped bias-testing (benchmarking) processes. Global and domestic data show that women make up only 20to 30 percent of AI industry employees as of 2023 to 2024. Oh’s own lab, with 10 women among 16 graduate students (60 percent), is an exception. Women account for only about 20 percent of undergraduates in KAIST’s School of Computing. She noted that assuming a core user base of urban middle-class men increases the likelihood of gender bias.

Critics also point out that in-house bias tests at AI companies are not advanced enough to catch subtle gender biases. “I don’t know the exact procedures companies use for bias testing,” Oh said, “but from what is known so far, most involve multiple-choice questions, like a four-option format, to detect bias.” Such methods struggle to detect biases in contextual scenarios, like the storytelling tests used in her research. She added that male decision-makers in their 50s and 60s in AI research often prioritize AI advancement over issues like bias and ethics when working with limited research budgets.

The two-day conference starting August 7 featured multiple presentations on AI and gender bias. Emad Karim, Head of Innovation Strategy at UN Women’s Asia-Pacific Regional Office, said that “among 138 countries analyzed, only 24 mentioned gender in their national AI strategies,” adding that “only 19 percent of biographical entries on Wikipedia, the core training data for AI, are about women.” Lee Hye-sook, Director of Korea Center for Gendered Innovations for Science and Technology Research, said, “Even in medical research using AI, such as for dementia studies, it is rare to see separate models developed for men and women.”

※This article was translated by an AI tool and edited by a professional translator.
  • AD
  • AD
  • AD
뉴스레터 구독
닫기

전체 동의는 선택 항목에 대한 동의를 포함하고 있으며, 선택 항목에 대해 동의를 거부해도 서비스 이용이 가능합니다.

보기

개인정보 이용 목적- 뉴스레터 발송 및 CS처리, 공지 안내 등

개인정보 수집 항목- 이메일 주소, 닉네임

개인정보 보유 및 이용기간- 원칙적으로 개인정보 수집 및 이용목적이 달성된 후에 해당정보를 지체없이 파기합니다. 단, 관계법령의 규정에 의하여 보존할 필요가 있는 경우 일정기간 동안 개인정보를 보관할 수 있습니다.
그 밖의 사항은 경향신문 개인정보취급방침을 준수합니다.

보기

경향신문의 새 서비스 소개, 프로모션 이벤트 등을 놓치지 않으시려면 '광고 동의'를 눌러 주세요.

여러분의 관심으로 뉴스레터가 성장하면 뉴욕타임스, 월스트리트저널 등의 매체처럼 좋은 광고가 삽입될 수 있는데요. 이를 위한 '사전 동의'를 받는 것입니다. 많은 응원 부탁드립니다. (광고만 메일로 나가는 일은 '결코' 없습니다.)

뉴스레터 구독
닫기

닫기
닫기

뉴스레터 구독이 완료되었습니다.

개인정보 수집 및 이용
닫기

개인정보 이용 목적- 뉴스레터 발송 및 CS처리, 공지 안내 등

개인정보 수집 항목- 이메일 주소, 닉네임

개인정보 보유 및 이용기간- 원칙적으로 개인정보 수집 및 이용목적이 달성된 후에 해당정보를 지체없이 파기합니다. 단, 관계법령의 규정에 의하여 보존할 필요가 있는 경우 일정기간 동안 개인정보를 보관할 수 있습니다.
그 밖의 사항은 경향신문 개인정보취급방침을 준수합니다.

닫기
광고성 정보 수신 동의
닫기

경향신문의 새 서비스 소개, 프로모션 이벤트 등을 놓치지 않으시려면 '광고 동의'를 눌러 주세요.

여러분의 관심으로 뉴스레터가 성장하면 뉴욕타임스, 월스트리트저널 등의 매체처럼 좋은 광고가 삽입될 수 있는데요. 이를 위한 '사전 동의'를 받는 것입니다. 많은 응원 부탁드립니다. (광고만 메일로 나가는 일은 '결코' 없습니다.)

닫기