Grok AI sparks global outrage after generating sexualized images on X
X’s Grok AI faces global backlash after generating sexualized images of women and minors without consent, triggering legal action, regulatory scrutiny, and renewed debate over AI safety, ethics, and platform accountability.
X’s artificial intelligence chatbot, Grok, has come under intense international scrutiny after it was found generating sexualized images of real people—including women and minors—without their consent, raising serious legal, ethical, and regulatory concerns.
The controversy escalated after multiple users began prompting Grok to digitally alter photographs posted on X, stripping subjects down to bikinis or other revealing outfits. In several documented cases reviewed by Reuters, the AI complied fully or partially, producing near-nude images of women. Reuters also identified instances in which Grok generated sexualized images of children.
One of the victims, Julie Yukari, a 31-year-old musician based in Rio de Janeiro, said she was shocked to discover Grok-generated images of her circulating on X. After posting a harmless New Year’s Eve photo of herself in a red dress with her cat, she noticed users asking Grok to undress her digitally. Assuming the AI would refuse, she ignored it—until altered images of her, nearly naked, began spreading across the platform.
“I was naive,” Yukari told Reuters.
As the images proliferated, Yukari's protests only prompted more users to generate explicit AI versions of her photos. “The New Year has turned out to begin with me wanting to hide from everyone’s eyes,” she said, describing her shame over a body that was not even hers but an AI fabrication.
The surge in AI-driven “digital undressing” appears to have accelerated in recent days. Reuters observed more than 100 public requests to Grok within a 10-minute window, most targeting young women. Prompts frequently asked for “transparent,” “micro,” or “tiny” bikinis. In at least 21 cases, Grok fully complied, producing highly sexualized images.
Experts say the incident was foreseeable. AI watchdogs and child safety groups had warned xAI, the company behind Grok, last year that its image-generation tools could easily be turned into systems for nonconsensual deepfake “nudification.”
“This was an entirely predictable and avoidable atrocity,” said Dani Pinter, chief legal officer at the National Center on Sexual Exploitation, adding that X failed to filter abusive training data or block illegal user requests.
Governments have begun responding. French ministers have reported X to prosecutors and regulators, calling the content “manifestly illegal.” India’s IT ministry has formally warned X, stating that the platform failed to prevent the generation and circulation of obscene and sexually explicit material. U.S. regulators, including the FCC and FTC, declined to comment.
X did not respond to Reuters’ requests for comment. In a previous statement regarding reports of sexualized images of children, xAI dismissed the claims, saying, “Legacy Media Lies.”
Meanwhile, Elon Musk appeared to downplay the controversy, responding with laugh-cry emojis to AI-edited images of public figures—including himself—in bikinis.
What was once a fringe practice confined to obscure corners of the internet has now been mainstreamed, experts say, by Grok’s seamless integration into X—dramatically lowering the barrier to creating nonconsensual sexualized deepfakes.
As pressure mounts globally, the incident has reignited urgent debates around AI safety, consent, platform accountability, and the real-world harm caused when powerful generative tools are released without effective safeguards.