X implements new safety measures for Grok AI
X has rolled out new technical safeguards to curb the misuse of its AI chatbot Grok after backlash over sexualised deepfake images.

Microblogging platform X has introduced technical measures to prevent its AI chatbot Grok from generating sexualised images of real people in jurisdictions where such content is illegal, following widespread backlash over obscene deepfakes.
In a post from its official ‘Safety’ handle, X said the restrictions apply to all users, including paid subscribers. The platform has also geoblocked the generation and editing of images depicting real individuals in revealing clothing, such as bikinis and underwear, in regions where such content violates local laws.
X added that image creation and editing through the Grok account on the platform are now limited to paid subscribers, a move aimed at increasing accountability and deterring abuse. However, it clarified that this additional restriction does not override existing safety protocols, which apply uniformly to all users.
The company said its safety teams are working continuously to strengthen safeguards, remove illegal or violative content swiftly, suspend offending accounts, and cooperate with local governments and law enforcement agencies where required.
X reiterated its zero-tolerance policy towards child sexual exploitation, non-consensual nudity, and unwanted sexual content, adding that accounts seeking or sharing child sexual abuse material are reported to law enforcement authorities as necessary.
The measures come amid heightened scrutiny from governments, including India, after Grok was allegedly misused to generate and circulate obscene and non-consensual images. Earlier this month, India’s IT Ministry directed X to remove all unlawful content generated by Grok and submit a detailed report outlining technical and organisational steps taken to prevent future violations.
Following the notice, X removed thousands of posts and suspended hundreds of accounts, later assuring authorities it would comply fully with Indian laws. Regulators in the UK and European Union have also raised concerns, intensifying global pressure on the platform over AI-driven content moderation failures.