Euroviews. Facial recognition technology should be regulated, but not banned

Surveillance cameras sit on a utility pole in Moscow, February 2020. Copyright AP Photo/Euronews
By Tony Porter, Chief Privacy Officer, Corsight AI, and Dr Nicole Benjamin Fink, Founder, Conservation Beyond Borders
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

The European Commission has proven itself to be an effective regulator in the past. A blanket ban on FRT in law enforcement will only benefit the criminals, Tony Porter and Dr Nicole Benjamin Fink write.


The EU’s AI Act passed a major hurdle in mid-June when the bloc’s lawmakers greenlit what will be the world’s first rules on artificial intelligence. 

But one proposal stands apart: a total ban on facial recognition technology, or FRT. 

If left to stand, this rule will blindfold the law enforcers who do vital work to protect the most vulnerable in society. It will embolden criminal groups such as those who traffic wildlife and human victims, thereby putting lives at risk.

All surveillance capabilities intrude on human rights to some extent. The question is whether we can regulate the use of FRT effectively to mitigate any impact on these rights. 

Protecting privacy versus protecting people is a balance EU lawmakers can and must strike. A blanket ban is the easy, but not the responsible option.

Privacy concerns should face a reality check

MEPs voted overwhelmingly in favour of a ban on the use of live FRT in publicly accessible spaces, and a similar ban on the use of “after the event” FRT unless a judicial order is obtained. 

Attention has now shifted to what will no doubt be heated trilogue negotiations between the European Parliament, the Council of the EU and the European Commission.

FRT in essence uses cameras powered by AI algorithms to analyse a person’s facial features, potentially enabling authorities to match individuals against a database of pre-existing images, in order to identify them. 

Activists demonstrate in front of a mobile police facial recognition facility outside a shopping centre in London, February 2020. Kelvin Chan/AP

Privacy campaigners have long argued that the potential benefits of using such tech are not worth the negative impact on human rights. But many of those arguments don’t stand up to scrutiny. In fact, they’re based on conclusively debunked myths.

The first is that the tech is inaccurate and that it disproportionately disadvantages people of colour. 

That may have been true of very early iterations of the technology, but it certainly isn’t today. Corsight has been benchmarked by the US National Institute of Standards and Technology (NIST) to an accuracy rate of 99.8%, for example. 

Separately, a 2020 NIST report claimed that FRT performs far more effectively across racial and other demographic groups than widely reported, with the most accurate technologies displaying “undetectable” differences between groups.

It's also falsely claimed that FRT is ineffective. In fact, Interpol said in 2021 that it had been able to identify almost 1,500 terrorists, criminals, fugitives, persons of interest and missing persons since 2016 using FRT. That figure is expected to have risen exponentially since then.

A final myth, that FRT intrudes on human rights as enshrined by the European Convention of the same name, was effectively shot down by the Court of Appeal in London. In that 2020 case, judges ruled that scanning faces and instantly deleting the data if a match can’t be found has a negligible impact on human rights.

It's about stopping the traffickers

On the other hand, if used in compliance with strict regulations, high-quality FRT has the capacity to save countless lives and protect people and communities from harm. 

Human trafficking is a trade in misery which enables sexual exploitation, forced labour and other heinous crimes. It’s estimated to affect tens of millions around the world, including children. 

But if facial images of known victims or traffickers are caught on camera, police could be alerted in real-time to step in. 

Given that traffickers usually go to great lengths to hide their identity, and that victims — especially children — rarely possess official IDs, FRT offers a rare opportunity to make a difference.
A man from Syria looks out at sea after being rescued 45 nautical miles from the Libyan coast, March 2022. AP Photo/Andoni Lubaki


Wildlife trafficking is similarly clandestine. It’s a global trade estimated some years ago to be worth €20.9 billion, making it the world’s fourth biggest illegal activity behind arms, drugs and human trafficking.

With much of the trade carried out by criminal syndicates online, there’s a potential evidence trail if investigators can match facial images of trafficked animals to images posted later to social media. 

Buyers can then be questioned about whom they procured a particular animal from. Apps are already springing up to help track wildlife traffickers in this way.

There is a better way forward

Given what’s at stake here, European lawmakers should be thinking about ways to leverage a technology proven to help reduce societal harm — but in a way that mitigates risks to human rights. 

The good news is that it can be done with the right regulatory guardrails. In fact, the EU’s AI Act already provides a great foundation for this, by proposing a standard of excellence for AI technologies which FRT could be held to.


Building on this, FRT should be retained as an operational tool wherever there’s a “substantial” risk to the public and a legitimate basis for protecting citizens from harm.

A member of the cleaning crew sanitises desks at the European Parliament in Brussels, September 2020. AP Photo/Francisco Seco

Its use should always be necessary and proportionate to that pressing need, and subject to a rigorous human rights assessment. 

Independent ethical and regulatory oversight must of course be applied, with a centralised supervisory authority put in place. And clear policies should be published setting out details of the proposed use.

Impacted communities should be consulted and data published detailing the success or failure of deployments and human rights assessments.

The European Commission has proven itself to be an effective regulator in the past. So, let’s regulate FRT. A blanket ban will only benefit the criminals.


Tony Porter is the Chief Privacy Officer at Corsight AI and the former UK Surveillance Camera Commissioner, and Dr Nicole Benjamin Fink is the Founder of Conservation Beyond Borders.

At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.

