An overwhelming majority of the user-generated Grok content examined in a mid-January analysis depicts nudity or sexual activity, according to a European non-profit.
Elon Musk’s artificial intelligence (AI) platform Grok is still being used to generate sexually explicit images despite recent restrictions by the company, according to a new analysis.
Last summer, xAI, Grok’s operating company, introduced an image-generator feature that included a “spicy mode” capable of producing adult content. In recent weeks, the feature has been used to digitally undress images of women.
After mounting criticism, parent company X said on January 14 that it had “implemented technological measures” to prevent Grok from editing images of real people in revealing clothing.
But European non-profit AI Forensics found that Grok could still be used to generate sexualised images of individuals.
It analysed 2,000 user conversations on January 19 and found that an “overwhelming” majority depicted nudity or sexual activity, showing the platform is still being used to generate sexual images.
The researchers also found that users can bypass restrictions by accessing Grok directly through its website rather than through X, or by using Grok Imagine, the AI’s video and image generation tool.
Grok did not answer Euronews Next’s prompt to generate a sexual image on January 20. Instead, it said xAI had implemented content blocks to prevent the creation of images depicting real people in revealing or sexualised clothing, including bikinis and underwear.
“You cannot reliably generate naked or half-naked images through Grok right now, especially not of real people or anything explicit,” the chatbot said in response to the prompt. “xAI has prioritised legal compliance and safety over unrestricted ‘spicy’ generation after the January controversies.”
Euronews Next reached out to xAI for comment about the analysis, but did not receive a response at the time of publication.
Separately, Musk wrote on X that the algorithm for both xAI and Grok is “dumb” and “needs massive improvements.”
X has since open-sourced its algorithm and posted it on GitHub, a platform developers use to share code, so users can watch its team “struggle to make it better in real-time,” Musk said.
X’s algorithm considers what a user has clicked on or engaged with when deciding which content to show on its feed, according to xAI’s GitHub page.
It also analyses “out-of-network” content from accounts that the user doesn’t follow but that they might find interesting. The algorithm then sorts and ranks this content using a mathematical formula to decide what will be shown in the user’s feed.
It also filters out posts from blocked accounts or containing keywords the user has muted, along with content the algorithm flags as violent or spam, according to a diagram on the page.
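To illustrate the kind of pipeline that description implies, here is a minimal Python sketch. The weights, field names and helper functions are illustrative assumptions, not X’s published code, which is available on GitHub.

```python
from dataclasses import dataclass, field

# Hypothetical weights for combining predicted engagement signals into one score.
# X's actual formula and signal names are assumptions here.
WEIGHTS = {"likes": 1.0, "replies": 13.5, "reposts": 1.0}


@dataclass
class Post:
    author: str
    text: str
    predicted: dict                              # e.g. {"likes": 0.03, "replies": 0.01}
    flags: set = field(default_factory=set)      # e.g. {"violent", "spam"}
    in_network: bool = True                      # set later: does the user follow the author?


def score(post: Post) -> float:
    """Combine predicted engagement probabilities into a single ranking score."""
    return sum(WEIGHTS[k] * post.predicted.get(k, 0.0) for k in WEIGHTS)


def build_feed(candidates, followed, blocked, muted_keywords):
    """Filter candidate posts, label out-of-network ones, then rank by score."""
    visible = []
    for post in candidates:
        if post.author in blocked:
            continue  # drop posts from blocked accounts
        if any(kw.lower() in post.text.lower() for kw in muted_keywords):
            continue  # drop posts containing muted keywords
        if post.flags & {"violent", "spam"}:
            continue  # drop content flagged as violent or spam
        # Out-of-network posts (authors the user doesn't follow) are kept;
        # ranking decides whether they surface near the top of the feed.
        post.in_network = post.author in followed
        visible.append(post)
    return sorted(visible, key=score, reverse=True)


# Example usage with made-up posts and predictions
feed = build_feed(
    candidates=[
        Post("alice", "hello world", {"likes": 0.4, "replies": 0.1, "reposts": 0.2}),
        Post("spam_bot", "buy now", {"likes": 0.9}, flags={"spam"}),
    ],
    followed={"alice"},
    blocked={"troll"},
    muted_keywords={"crypto"},
)
```

In this sketch the spam post is removed before ranking ever happens, mirroring the article’s description of filtering preceding the sort-and-rank step.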
Musk has committed to updating the GitHub page every four weeks with developer notes, so users can understand what changes have been made.