Some have argued the proposed EU law could usher in mass surveillance in the bloc through the scanning of all communications, including encrypted messages.
Rhiannon was just thirteen when she was groomed online, coerced and sexually abused.
Her perpetrator was charged, but the impact of his crimes runs deep.
“I didn't speak about my abuse for a very long time,” Rhiannon, a survivor and head of advocacy at the Marie Collins Foundation, told Euronews. “I suffered with anxiety, panic attacks, depression, self-harm and suicide attempts.”
Today, at 33, she lives in the knowledge that images and videos of her abuse are still circulating online. But she speaks out, calling for stronger regulation to clamp down on sexual predators.
On Thursday, European Union ministers will discuss sweeping new laws proposed by the European Commission to tackle child sexual abuse online and to ensure crimes, such as those committed against Rhiannon, are not re-lived day after day on the Internet.
A British citizen, Rhiannon says the proposed EU regulation, together with the UK's Online Safety Bill, which will soon become law, is critical in the global fight against child sexual abuse.
The EU's planned laws would use emerging technologies to detect new and existing child sexual abuse material (CSAM) and child grooming activities, and would give national authorities the power to oblige digital services to scan users’ communications, including encrypted messages.
But a bitter row has erupted, pitting child protection advocates against digital rights lobbies, which claim the regulation will instigate a mass surveillance regime and spell the end of digital privacy as we know it. Supporters counter that failing to pass the regulation would leave criminals undetected and big tech unregulated.
Between both camps is a tightrope that is proving difficult to tread: how to catch child abusers without undermining our privacy online.
Are the technologies mature enough?
To detect CSAM already known to law enforcement, the Commission has proposed using so-called perceptual hashing, which takes a digital fingerprint, or hash, of a harmful file and detects copies of it across the internet.
But academic experts warn that perpetrators can easily manipulate images to dodge detection, and that innocent users could be wrongfully accused. “The problem is that it's very easy to break the hash by changing one pixel or even by slightly cropping the image," explained Professor Bart Preneel, a cryptography expert at KU Leuven in Belgium. "It's also possible for a perfectly legitimate image to be flagged as a false positive."
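To see how perceptual hashing differs from exact byte-level matching, consider a minimal sketch using a simple "average hash". This is an illustrative toy only, not any deployed system such as PhotoDNA, and it does not capture the adversarial crops and edits Preneel warns about:

```python
import hashlib

def average_hash(pixels):
    """Bit per pixel: 1 where the pixel is brighter than the image mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means 'likely the same image'."""
    return sum(a != b for a, b in zip(h1, h2))

# A toy 4x4 "image", flattened to 16 grayscale values.
original = [10, 200, 30, 220,
            15, 210, 25, 215,
            12, 205, 28, 225,
            11, 198, 32, 230]

# A near-duplicate with one pixel nudged, as light recompression might do.
altered = list(original)
altered[5] = 190

h_orig = average_hash(original)
h_alt = average_hash(altered)

# The perceptual hashes still match, so the near-duplicate is detected...
assert hamming_distance(h_orig, h_alt) == 0

# ...whereas an exact cryptographic hash of the bytes changes completely,
# which is why byte-level matching alone is trivial to evade.
assert hashlib.sha256(bytes(original)).digest() != hashlib.sha256(bytes(altered)).digest()
```

The tension in the debate sits in that trade-off: the more tolerant the hash is to small edits, the more copies it catches, but the greater the chance an unrelated legitimate image lands within the match threshold.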
The Commission wants to set up an EU Centre on Child Sexual Abuse in The Hague, where staff would manually review content flagged as illegal in order to avoid flooding law enforcement agencies with false positives.
But civil society organisation ECPAT International says there is sufficient evidence that perceptual hash technologies do work.
“These technologies are not merely promising; they are proven. Hash-based methods have been effective for over a decade, enabling swifter action against illegal content and aiding law enforcement. For example, more than 200 companies use PhotoDNA technology to prevent their services from being used to spread child sexual abuse materials,” an ECPAT spokesperson said.
The Commission also wants to use artificial intelligence (AI) to detect newly created CSAM as well as to flag behavioural patterns that could amount to child grooming. Preneel told Euronews these methods would pose an even greater risk of false incrimination.
“Even if we reduce the error rate to 1%, with billions of images sent in the EU every day we could be looking at tens of millions of daily false positives,” Preneel warned. “We could be incriminating innocent people, accusing them of the most serious of crimes.”
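Preneel's back-of-envelope arithmetic is easy to reproduce. The figures below are hypothetical round numbers chosen to match his quote, not official estimates:

```python
# Base-rate illustration: even a small error rate becomes huge at EU scale.
false_positive_rate = 0.01         # the 1% error rate from the quote
images_per_day = 4_000_000_000     # assumed "billions of images" sent daily

false_positives_per_day = int(false_positive_rate * images_per_day)
print(f"{false_positives_per_day:,}")  # 40,000,000 flags a day to review
```

This is the volume the proposed EU Centre's human reviewers would stand between and law enforcement.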
Preneel also warns that teenagers voluntarily and legally sharing nude images with each other could be wrongfully criminalised.
But while recognising that language-based AI models to detect grooming behaviour still need maturing, ECPAT says AI has been successfully deployed to detect new CSAM with “low error rates”.
"CSAM detection tools are specifically trained not to find legal images," the ECPAT spokesperson explained. "These tools are trained on known CSAM, adult pornography and benign images, particularly to distinguish between them and to keep benign imagery from being misinterpreted as CSAM."
Mié Kohiyama, a survivor and co-founder of the Brave Movement from France who, like Rhiannon, advocates for stronger regulation, says the prevalence of child abuse images and videos online means the European Union has a responsibility to take action.
“More than 60% of these images are hosted on European servers, so we have a responsibility to act upon it,” she explained. “Detection is key, and removal is key.”
Would the new rules undermine privacy?
The most contested aspect of the Commission’s proposal is the obligation on tech companies to deploy client-side scanning (CSS) technology to scan the messages of users, including end-to-end encrypted communications on platforms such as Meta’s WhatsApp, when a risk is identified.
This would mean the encrypted messages, pictures, emails and voice notes of users could be tapped into.
Privacy advocates warn this amounts to a serious breach of people’s right to privacy online, and that the technology could be manipulated by malicious actors. Apple announced a similar system in 2021 to scan iCloud uploads, but shelved it within weeks after researchers showed its hashing algorithm could be tricked.
But ECPAT International says it is important to remember that CSS operates “before data becomes encrypted.”
“It does so by flagging CSAM before it is uploaded and sent through an encrypted environment – the same way as WhatsApp, an end-to-end encrypted service, already deploys technology to detect malware and viruses,” an ECPAT spokesperson said.
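The flow ECPAT describes can be sketched conceptually: content is checked against a list of known fingerprints on the device, before it enters the encrypted channel. All names here are illustrative; no real platform exposes such an API, and real systems use perceptual hashes rather than SHA-256:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Stand-in for a perceptual hash function.
    return hashlib.sha256(data).hexdigest()

# Hypothetical on-device database of fingerprints of known illegal material.
KNOWN_HASHES = {fingerprint(b"<bytes of a known flagged image>")}

def send_message(data: bytes, encrypt):
    """Scan before encrypting; only unflagged content enters the E2E channel."""
    if fingerprint(data) in KNOWN_HASHES:
        return "flagged-for-review"   # reported instead of being sent
    return encrypt(data)              # normal end-to-end encrypted delivery

# A benign message passes through and is encrypted as usual:
print(send_message(b"holiday photo", lambda d: b"<ciphertext>"))
```

Critics' objection is precisely that this check happens on the user's device against a list the user cannot inspect, which is why they see it as a hole in end-to-end encryption even though the transport itself stays encrypted.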
Critics also warn that undermining encryption could set a dangerous precedent for authoritarian regimes, which could manipulate the technology to detect criticism and target dissidents.
Mié says such scaremongering is simply a means of diverting attention from the real issue.
"This regulation would have safeguards," she said. "Europe is a democracy, not a dictatorship. And let’s not be naive: in a dictatorship, when you want to spy on citizens you do spy on citizens. You don’t need a new regulation."
Can EU ministers find a compromise?
The proposed regulation has divided EU capitals, with many governments concerned about the maturity of the technologies and the threat to consumer privacy. Ministers may choose to green-light certain aspects of the text while putting other plans on hold until the technologies have sufficiently matured.
Mié and Rhiannon told Euronews ministers should avoid bowing to the pressure of big tech and digital lobbies. They say the steep rise in abusive material shows that tech companies' voluntary measures to detect and take down content are clearly insufficient. A study released on Tuesday by the WeProtect Global Alliance suggests reported child sexual abuse material has increased by 87% since 2019.
"Tech companies design their products to entice children and engage them for as long as possible. If that's their model, it has to be a safe environment for children," Rhiannon said.
"The model of self-regulation for tech companies hasn't worked, this is clear from the number of children being abused. We have to legislate on this issue, we have to force the hand of tech companies to protect children," she added.
Mié also believes the bloc has a responsibility to protect survivors of digitally-assisted abuse from the re-traumatisation of knowing that images of their abuse are being viewed every day.
"These survivors are afraid of everything. They’re not able to leave their homes. For some of them, they are afraid even of using the Internet. These are people that are totally marginalised from society," she said. "We have to protect them. We have to protect children. It has to come first in everybody's mind."