
Grok under fire for generating sexually explicit deepfakes of women and minors

Tesla and SpaceX CEO Elon Musk attends the first plenary session of the AI Safety Summit at Bletchley Park, on Wednesday, Nov. 1, 2023 in Bletchley, England. Copyright Leon Neal/Pool Photo via AP
By Anca Ulea

Elon Musk’s xAI is facing backlash after its chatbot Grok, a key feature on the social media platform X, repeatedly generated sexually explicit images of women and minors.

Growing global backlash over sexually explicit, artificial intelligence-generated imagery produced by xAI’s chatbot has forced the company, owned by Elon Musk, to address safety concerns.

In recent weeks, X’s AI chatbot Grok has responded to user prompts to “undress” images of women and pose them in bikinis, creating AI-generated deepfakes without consent or safeguards.

Media analyses also found that Grok often complied when users prompted it to generate sexually suggestive images of minors, including one of a 14-year-old actress, raising alarm bells with global regulators.

In response to the flood of images, government officials in the EU, France, India and Malaysia have launched investigations and threatened legal action if xAI doesn’t take measures to prevent and remove sexual deepfakes of real people and child sexual abuse material (CSAM).

The UK's Office of Communications (Ofcom) said in a post on X that it had reached out to the company to ask how they plan to protect British users.

"Based on their response, we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation," Ofcom wrote.

Musk, who had initially made light of the bikini images by reposting Grok-generated likenesses of himself and a toaster in a bikini, posted on Saturday that “anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content”.

X’s safety account added in a post on Sunday that illegal content would be removed and accounts that post it would be permanently suspended, saying the company would work with local governments and law enforcement to identify offenders.

Grok, no stranger to controversy

Since Musk bought X, formerly known as Twitter, in 2022, he’s billed the social media platform as a counterbalance to “political correctness,” taking aim at legacy media and progressive politics.

This philosophy has also been applied to his AI business, with Grok designed to be “politically-neutral” and “maximally truth-seeking,” according to Musk.

In reality, the chatbot – which is integrated into X’s interface, meaning users can directly ask it questions by tagging it in posts – has increasingly reflected Musk’s own right-leaning worldview.

Last July, xAI issued a lengthy apology after Grok posted a slew of antisemitic comments, praised Adolf Hitler, referred to itself as “MechaHitler,” and generated Holocaust denial content.

Grok Imagine, the company’s AI-powered image and video generator, has been criticised for allowing the spread of sexual deepfakes since its launch in August 2025.

The generator includes a paid “Spicy Mode” that allows users to create NSFW content, including partial nudity.

Its terms prohibit pornography that features real people’s likenesses and sexual content involving minors. But the tool generated nude videos of pop star Taylor Swift without being prompted to, according to The Verge.

The fight against AI-powered 'nudification' tools

AI-powered tools that allow users to edit images to remove someone's clothing have come under fire from regulators aiming to tackle misogyny and protect children.

In December, the UK government said it would ban so-called "nudification" apps as part of a broader effort to reduce violence against women and girls by half. The new laws would make it illegal to create or supply AI tools that allow users to digitally remove someone's clothing.

Deepfake pornography accounts for approximately 98% of all deepfake videos online, with 99% of the targets being women, according to a 2023 report by cybersecurity firm Home Security Heroes.
