Facial recognition: London rolls out controversial technology as EU considers ban

The technology will be rolled out in London from Friday
By Rachael Kennedy

The Metropolitan Police say the technology will help them combat "serious crime" but opponents argue it will be in violation of human rights.


Police in London have begun rolling out controversial facial recognition technology in a landmark move that has been criticised as a violation of privacy and rights.

The Metropolitan Police said the deployment — from Friday — would help the force tackle "serious crime" and stressed it would not replace "traditional policing".

It insisted live facial recognition (LFR) technology would be clearly signposted and implemented at specific locations where it believes the most serious offenders would be seen.

"Every day, our police officers are briefed about suspects they should look out for; LFR improves the effectiveness of this tactic," a statement from the Met's assistant commissioner Nick Ephgrave said.

He added: "Similarly if it can help locate missing children or vulnerable adults swiftly, and keep them from harm and exploitation, then we have a duty to deploy the technology to do this."


Live facial recognition technology has long been a controversial topic in the UK and in wider Europe.

Earlier this week, leaked EU documents revealed the bloc had been considering a temporary ban on the technology - a move that has been backed by Sundar Pichai, the CEO of Google's parent company Alphabet.

It came after years of hotly-contested trials in several member states, which, like the Met, wanted to test whether such technology could help combat crime.

In Germany, police in Hamburg trialled facial recognition during the 2017 G20 summit. This was later found to be in violation of data protection laws.

Another test was run on hundreds of volunteers at Berlin's Südkreuz station, drawing criticism that the technology could glean more information about a subject than was initially disclosed.

In France, police in Nice have also used live facial recognition as a trial at the city's carnival.

A case in Sweden saw a municipality fined €20,000 after it was discovered a school had been using facial recognition to track students' attendance.

So - how does it work?

Put simply, the system will examine your face and compare it to a database of images of people wanted by police.

It will analyse the structure of your face by measuring the distances between your eyes, nose, mouth and jaw, creating a biometric map that is largely unique to you.

This map will then be compared with the database of pictures to see if you are a match.
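For illustration only, a minimal sketch of that matching step is below. The watchlist entries, measurements and threshold are invented for the example, and a real LFR system derives its biometric map from a neural network rather than a handful of hand-picked distances; the sketch only shows the idea of comparing a captured map against stored ones.

```python
# Illustrative sketch only: a toy version of the matching step described above.
# All names, measurements and the threshold are invented for the example.
import numpy as np

# Hypothetical watchlist: each entry maps a name to a stored "biometric map"
# (here, a small vector of normalised distances between facial features).
WATCHLIST = {
    "suspect_a": np.array([0.42, 0.31, 0.27, 0.55]),
    "suspect_b": np.array([0.39, 0.36, 0.22, 0.61]),
}

# Assumed similarity threshold; in practice this is tuned to balance
# false alerts against missed matches.
MATCH_THRESHOLD = 0.05

def best_match(probe: np.ndarray):
    """Compare a captured biometric map against the watchlist.

    Returns (name, distance) for the closest entry if it falls under the
    threshold, otherwise None (no alert raised).
    """
    name, dist = min(
        ((n, float(np.linalg.norm(probe - stored))) for n, stored in WATCHLIST.items()),
        key=lambda pair: pair[1],
    )
    return (name, dist) if dist < MATCH_THRESHOLD else None

# Example: a face captured by the camera, expressed as the same measurements.
captured = np.array([0.41, 0.32, 0.27, 0.56])
print(best_match(captured))  # close to 'suspect_a', so an alert would be raised
```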

Measurements of the distance between your facial features will create a biometric map. Pixabay

Why is there a problem with this?

Aside from the human rights implications, the technology itself has been widely reported to be inaccurate.

In July last year, an independent report obtained by Sky News found that 81% of people flagged by the technology were innocent.


These false identifications were found to be particularly prevalent among black people and minority ethnic groups.
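Some illustrative arithmetic helps explain how the share of innocent people among those flagged can be so high: when only a tiny fraction of the people scanned are actually wanted, even a low false-alert rate swamps the genuine matches. All figures in the sketch below are assumptions chosen to show the effect, not numbers from the Met's trials or the independent report.

```python
# Illustrative arithmetic only; every figure here is an assumption, not trial data.
people_scanned = 100_000      # faces seen by the cameras in a deployment
on_watchlist = 50             # of whom this many are genuinely wanted
true_positive_rate = 0.70     # assumed chance a wanted face triggers an alert
false_positive_rate = 0.001   # assumed chance an innocent face triggers an alert

true_alerts = on_watchlist * true_positive_rate
false_alerts = (people_scanned - on_watchlist) * false_positive_rate
share_innocent = false_alerts / (true_alerts + false_alerts)

print(f"alerts: {true_alerts + false_alerts:.0f}, of which innocent: {share_innocent:.0%}")
# With these assumed numbers, roughly 74% of alerts point at innocent people,
# which is the same order of magnitude as the 81% figure in the report above.
```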

The British Institute for Human Rights went further, questioning how far the knowledge of heightened surveillance would affect normal behaviour.

"Knowing that we are being watched in public spaces may lead us to change our behaviour; we might not attend protests or express our feelings in the same way," it said. "This infringes on our freedom of expression."

It also asked what happens to this biometric map after it is taken, even if you are not a match on the database.

What has been the reaction to today's announcement?

The Metropolitan Police maintains the rollout "strikes the balance" between using technology to stop criminals while also respecting privacy and human rights.


But rights groups across the country are not so sure.

Liberty Human Rights group's advocacy director Clare Collier told Euronews in an email the move was "dangerous, oppressive and completely unjustified" and put the UK on course to become a "surveillance state".

She said: "Facial recognition technology gives the state unprecedented power to track and monitor any one of us, destroying our privacy and our free expression.

"Rolling out an oppressive mass surveillance tool that has been rejected by democracies and embraced by oppressive regimes is a dangerous and sinister step, pushing us towards a surveillance state in which our freedom to live our lives free from state interference no longer exists."

Liberty, which has long campaigned against the technology, has an ongoing petition calling for a ban, which it says 21,000 people have signed.


Meanwhile, Big Brother Watch, a civil liberties and privacy campaigning group, said the move posed a "serious threat" to civil liberties in the UK.

It added: "This move instantly stains the new government's human rights record and we urge an immediate reconsideration."

