X Implements Restrictions on Grok Image Generation Amid Ongoing Global Scrutiny
Elon Musk’s social media platform X (formerly Twitter) has introduced new restrictions on Grok’s image generation and editing capabilities, targeting content that depicts real people in revealing attire such as bikinis. The policy shift, announced Wednesday evening, follows widespread global condemnation of Grok’s role in generating thousands of harmful nonconsensual “undressing” photographs of women and sexualized imagery of apparent minors.
Despite these measures, standalone Grok applications and websites, which operate independently of X’s platform, remain capable of producing “undress”-style and pornographic content, according to multiple tests conducted by researchers, WIRED, and other journalists. While X’s Grok integration now enforces certain safeguards, its standalone counterparts can still generate explicit imagery.
Testing Reveals Persistent Vulnerabilities in Standalone Grok
Paris-based nonprofit AI Forensics, led by researcher Paul Bouchaud, confirmed that Grok’s standalone platform (Grok.com) can generate photorealistic nudity that is prohibited on X. “We can generate nudity on Grok.com in ways that Grok on X cannot,” Bouchaud stated. In WIRED’s own tests using free Grok accounts on X’s website in the UK and US, the chatbot removed clothing from images of male subjects without apparent restrictions. In the UK, the standalone Grok app required users to enter their year of birth before generating images of undressed males, a measure absent on X’s platform.
Journalists from The Verge and Bellingcat independently corroborated these findings, reporting that sexualized imagery remains producible in the UK, where authorities have condemned X and Grok for enabling nonconsensual intimate content.
Global Condemnation and Regulatory Scrutiny
Since January, Musk’s businesses, including AI firm xAI and X, have faced intense criticism over Grok’s production of nonconsensual intimate imagery, explicit sexual videos, and sexualized content involving apparent minors. Officials in 13 countries, among them the US, Australia, Brazil, Canada, France, India, and the UK, as well as the European Commission (EC), have condemned or launched investigations into X or Grok.
X’s Official Safety Update
X’s Safety account detailed the new restrictions on Wednesday, stating: “Technological measures have been implemented to prevent Grok from allowing the editing of images of real people in revealing clothing such as bikinis.” The policy applies to all users, including free and paid subscribers.
In a section titled “Geoblock update,” X claimed it now restricts image generation of bikinis, underwear, and similar attire in jurisdictions where such content is illegal. The company also vowed to remove high-priority violative content, including child sexual abuse material (CSAM) and nonconsensual nudity, while continuing to refine safeguards.
Controversial January Policy Shift
The latest restrictions follow a January 9 policy that limited X’s Grok image generation to paid “verified” subscribers—a move criticized by a leading women’s group as the “monetization of abuse.” Bouchaud confirmed that since January 9, only verified accounts can generate images on X’s platform, with bikini imagery of women now rare. “They appear to have disabled the functionality on X,” he noted.
Musk’s Stance on NSFW Content
Musk, in a post on X, clarified that Grok permits explicit AI-generated pornography involving “imaginary adult humans (not real ones)” when “NSFW enabled,” framing this as consistent with US standards for R-rated media. However, that distinction has been challenged amid ongoing global legal and ethical concerns.
Ongoing Safeguards and Geoblocking
X’s statement emphasized that geoblocking restricts image generation in jurisdictions where such content is illegal, though it remains unclear whether the measures apply in all regions where Grok is available. Spokespeople for xAI and X did not immediately respond to WIRED’s requests for comment, but X clarified that its geoblock policy extends to both the app and website.
This development underscores the persistent challenges of regulating AI-generated content, with critics warning that gaps in enforcement could continue to enable nonconsensual imagery despite X’s new safeguards.