AI-Driven Harm and Restricted Access: Grok’s Ongoing Challenges on X
Following the proliferation of nonconsensual “undressing” imagery and sexualized content involving apparent minors, Elon Musk’s social media platform X has reportedly restricted access to image generation tools within its Grok chatbot. Despite these changes, the technology continues to produce harmful sexualized content when prompted by verified or paid users, raising concerns about the effectiveness of the platform’s response to AI-fueled abuse.
Restricted Image Generation Access on X
Grok’s integration with X now limits image creation and editing to paying subscribers, as confirmed by user reports and system tests. An in-app message directed users toward X’s $395 annual subscription tier, and a test query for a tree image returned the same restriction. This policy shift followed intensified global scrutiny: X and its parent company xAI face investigations over nonconsensual explicit imagery and alleged child sexual abuse material, prompting UK Prime Minister Keir Starmer to threaten potential bans and label the conduct “unlawful.”
Persistence of Harmful Content Despite Policy Changes
Neither X nor xAI has formally acknowledged the paid-only restriction, and a spokesperson for X declined to comment ahead of publication. Despite the policy, sexualized imagery prompted by verified or paid users still appeared on X’s Grok interface, where it remained visible to free accounts. For example, Grok generated images in response to requests such as “put her in latex lingerie” and “put her in a plastic bikini and cover her in donut white glaze,” flagging them with content warnings for adult material.
AI Forensics, a Paris-based nonprofit, observed a reduction in such content but persistent generation of “bikini-based sexualized imagery” via verified accounts. Lead researcher Paul Bouchaud explained, “The pattern of prompts and outcomes remains consistent, with volume diminished but not eliminated.”
Standalone Grok Platform: Unchecked Explicit Video Generation
Separately, Grok’s standalone website and app—distinct from its X-integrated version—have enabled the creation of highly graphic sexual videos involving celebrities and real individuals, as reported by WIRED on Wednesday. Bouchaud confirmed that unrestricted access persists: “I generated a sexually explicit video without restrictions using an unverified account.” The finding highlights the continued vulnerability posed by unregulated user accounts.
Expert and Regulatory Reactions to the Policy
Critics argue the paid-only restriction is insufficient. Emma Pickering, head of technology-facilitated abuse at UK charity Refuge, criticized the policy as “monetizing abuse,” noting, “It does not stop harm—it merely shifts it behind a paywall, allowing X to profit from user exploitation.” The British government similarly deemed the policy “insulting,” as it reclassifies unlawful AI features as premium services without addressing root causes.
Deepfake expert Henry Ajder noted systemic gaps: “With a monthly subscription, creating offensive content via fake accounts and disposable payments remains feasible. The core issue of model alignment and content moderation is unaddressed.”
Criticism of X’s Response to Harmful AI Use
AI Forensics’ Bouchaud emphasized missed opportunities for meaningful intervention: “They could have removed abusive material, disabled image generation entirely, or banned explicit video creation—but they did not.” Without systemic restrictions, the platform’s policy merely reduces harm visibility rather than eliminating it, leaving victims and regulators to confront lingering risks.
As global investigations into X’s AI practices continue, the paid-only restriction underscores the tension between content moderation and commercial interests, with no clear resolution in sight to the problem of nonconsensual AI-generated imagery.