Socialite and hotel heiress Paris Hilton joined US congresswoman Alexandria Ocasio-Cortez to advocate for legal protections for victims of AI deepfake porn.
A sex tape released without her consent helped make Paris Hilton a household name in the early 2000s.
The hotel heiress and businesswoman compared what happened to her then to the deepening artificial intelligence (AI) deepfake pornography crisis now targeting women and girls around the world.
In recent weeks, global regulators have scrambled to address a growing wave of sexually explicit deepfakes targeting women and minors without their consent, with Elon Musk’s chatbot Grok at the centre of the backlash.
Responding to user prompts, Grok generated hundreds of thousands of images that “undress” real women and, in some cases, girls. While xAI said it “implemented technological measures” to prevent the chatbot from editing these images, researchers found those safeguards could be bypassed.
“Deepfake pornography has become an epidemic,” Hilton told a crowd outside the US Capitol building on Thursday. “It’s the newest form of victimisation happening at scale – to your daughters, your sisters, your friends, and neighbours.”
Hilton was 19 years old when a nude video of her spread like wildfire online, catapulting her to infamy in an era defined by predatory tabloids that exploited young women in the public eye.
“People called it a scandal – it wasn’t. It was abuse. There were no laws at the time to protect me, there weren’t even words for what had been done to me,” Hilton said, speaking publicly for the first time about the 2004 incident.
“I lost control over my body, over my reputation. My sense of safety and self-worth was stolen from me, and I’ve fought hard to get those things back,” she added.
The 44-year-old said she now wants to use her story to help other young women and girls who are being exploited online by abusers using AI tools.
That’s why Hilton said she joined US congresswomen Alexandria Ocasio-Cortez and Laurel Lee to advocate for the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act.
The bill, which passed the Senate unanimously last week and must now be taken up by the House of Representatives, would give victims of AI-generated deepfakes a legal pathway to sue their abusers.
“This isn’t about just technology, it’s about power,” Hilton said. “It’s about someone using someone’s likeness to humiliate, silence and strip them of their dignity. But victims deserve more than after-the-fact apologies, we deserve justice.”
Hilton said that she has also been targeted by 100,000 sexualised AI deepfakes.
“Not one of them is real, not one of them is consensual. And each time a new one appears, that horrible feeling returns, that fear that someone somewhere is looking at it right now and thinking it’s real,” she said.
Reining in abusive AI tools
The DEFIANCE Act comes on the heels of the TAKE IT DOWN Act, which was signed into law in May 2025 as the first US federal law limiting the use of AI in ways that can be harmful to individuals.
TAKE IT DOWN, which stands for “Tools to Address Known Exploitation by Immobilising Technological Deepfakes on Websites and Networks Act,” requires online platforms to remove unauthorised intimate images and deepfakes when notified. It comes into effect in May 2026.
“TAKE IT DOWN gave us removal, and DEFIANCE will give us recourse and restitution,” said Ocasio-Cortez, co-sponsor of the DEFIANCE Act.
“Once the bill is signed into law, and it will be signed into law, survivors will have the ability to hold their abusers accountable and seek financial and reputational damage for the harm they have caused,” she added.
The bipartisan DEFIANCE Act would give survivors the right to sue individuals who knowingly produce, distribute, solicit, or receive nonconsensual sexually explicit digital forgeries. It also targets those who possess such content with the intent to distribute it.
In Europe, the Digital Services Act (DSA) and the AI Act offer some protection against deepfakes by requiring platforms to label AI-generated content. Deepfake pornography is not explicitly addressed, however, leaving enforcement up to individual member states.
Countries such as France, Denmark, and the United Kingdom have passed laws protecting victims from sexually explicit deepfakes – targeting distributors of nonconsensual AI deepfakes with hefty fines and even prison sentences.