X’s Grok AI is being used to “undress” women in photos — and Reuters found requests targeting minors, too

NEW YORK — A wave of posts showing AI-altered images of real people — often women — dressed in minimal clothing has triggered international alarm and renewed scrutiny of X after Reuters reported that the platform’s Grok AI was being used to generate sexualized edits, including requests that targeted minors.

Reuters reported that ministers in France said they had referred X to prosecutors and regulators over the content, calling it “manifestly illegal,” while India’s IT ministry wrote to X’s local unit saying the platform had failed to prevent Grok from being misused to generate and circulate obscene and sexually explicit content. U.S. regulators were more reserved in their immediate public response: Reuters said the Federal Communications Commission did not respond to requests for comment, and the Federal Trade Commission declined to comment.

The speed and visibility of the surge are a large part of why the story has drawn so much attention. Reuters said the “mass digital undressing spree” appeared to have taken off over a short period, based on public requests and complaints reviewed by the newsroom, though Reuters said it could not determine the full scale.

In one small snapshot, Reuters reviewed public requests sent to Grok over a single 10-minute period around midday U.S. Eastern time on a Friday and tallied 102 attempts by X users to digitally edit photos so subjects would appear to be wearing bikinis. Reuters said most targets were young women, though requests also included men, celebrities, politicians, and even a monkey.

The prompts Reuters described were often explicit about making outfits more revealing, with users requesting “very transparent” and “tinier” clothing. Reuters reported that Grok sometimes complied with the initial request and at other times did not respond to follow-up requests.

Elon Musk’s public tone added fuel to the controversy. Reuters reported Musk appeared to poke fun at the furor, posting laugh-cry emojis in response to AI edits of famous people, including himself, in bikinis, and replying with more emojis to a user who joked their feed looked like a bar packed with bikini-clad women.

The broader issue is one parents and schools have been warning about for a while: once “easy button” image manipulation hits a mainstream platform, the harm isn’t theoretical. It can be instantaneous, personal, and scalable — especially when the targets are private individuals rather than public figures, and when the content spreads through quote-posts and reposts faster than moderation systems can react.

X’s response, safeguards, and any enforcement changes will be closely watched because this controversy isn’t just about bad taste — it’s about consent, potential illegality in some jurisdictions, and whether AI products should ship with stronger default guardrails when they can be used to sexualize real people.
