Elon Musk and the logos of xAI and Grok. AFP-Yonhap News
“I thought, ‘Surely this cannot be real.’ So I tested it with a photo of myself from childhood. It was real. Truly disgusting.”
A freelance journalist in the United Kingdom shared an image on X on the 2nd along with this message. The image showed the result of ‘Grok’, the artificial intelligence (AI) service built into X, putting a girl in a bikini in response to the request “change her clothes to a bikini.” A childhood photo of him wearing a dress and cardigan had been seamlessly altered.
Controversy continues over the generation and distribution of sexual exploitation images by Grok. The lack of safeguards and the accountability vacuum, long obscured behind advances in AI technology, have surfaced.
The social network X, formerly known as Twitter, and the generative AI chatbot Grok are operated by the AI company xAI, led by Tesla Chief Executive Officer (CEO) Elon Musk. Grok can be used directly within the X service.
At the end of last month, when an ‘image editing’ feature was added to Grok, concerns poured in that it makes sexualized fake images (deepfakes) easy to create. With this feature, if an X user tags Grok in a comment on a post containing an image and requests an edit, Grok generates and uploads the altered image without the consent of the person pictured.
For example, someone could freely alter a photo I posted to show me in underwear, and other users could see the result as well. Grok refuses to render a person fully nude, but its moderation is known to be looser than that of other services.
The European nonprofit ‘AI Forensics’ analyzed 200,000 randomly sampled images generated by Grok between December 25 of last year and the 1st of this month and found that 53% depicted people wearing only minimal clothing such as underwear or bikinis. Of these, 81% appeared to be women. Two percent of all the images appeared to feature individuals aged 18 or younger. The images Grok generated also included Nazi propaganda and propaganda for the Islamist extremist militant group Islamic State (ISIS).
On the 2nd, Grok stated in response to a user’s post raising the issue, “We have identified defects in the safeguards and are urgently correcting them. Child sexual exploitation material is illegal and prohibited.” The following day, Musk commented on another post, “Those who use Grok to create illegal content will face the same penalties as those who upload illegal content.” X said it deletes illegal content, permanently suspends the accounts involved, and cooperates with law enforcement when necessary.
In South Korea as well, producing or distributing pornography using AI can be punished under laws including the Act on Special Cases Concerning the Punishment of Sexual Crimes and the Act on the Protection of Children and Youth from Sexual Offenses. However, a court ruling last year held that AI-generated nude images cannot be punished as distribution of false video under the sexual violence punishment law unless the victim is identifiable, prompting calls for legislative supplementation.
There is also criticism that Musk and xAI are downplaying their own responsibility and shifting it onto users. Through its ‘use restriction policy’, xAI prohibits “depicting individuals in obscene ways” and “the sexual objectification or exploitation of children.” At the same time, however, xAI has attracted users by promoting its low level of moderation. User engagement on X reportedly hit an all-time high recently.
Authorities in the European Union (EU), the United Kingdom, India, and Malaysia are currently looking into Grok’s generation of sexual exploitation images.
U.S. outlet Axios assessed the controversy as having “laid bare the question of who is ultimately responsible for harm caused by a chatbot’s output.” CNN said it “shows how dangerous AI and social media can be when combined without sufficient safeguards to protect the most vulnerable in society.”