Euroviews. In the blink of AI: How facial recognition technology is capitalising on the COVID-19 crisis ǀ View

A mobile police facial recognition facility outside a shopping centre in London, Tuesday Feb. 11, 2020. Copyright Kelvin Chan/AP Photo
By Keith Oliver, Amalia Neenan
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

What will happen when we emerge into the post-coronavirus world? Will these often invasive technological powers be de-escalated when the threat has passed? Or will they be held in place under the pretence of public protection?


‘Always eyes watching you and the voice enveloping you.’ – 1984, George Orwell

Whether it be F. Scott Fitzgerald’s ominous description of the eyes of Dr T.J. Eckleburg looking down on the Valley of Ashes as if “they were the eyes of God” in ‘The Great Gatsby,’ or Orwell’s troublingly prophetic imaginings of “Big Brother” in ‘1984,’ it appears as if someone has always been watching. And while we are not in the era of “thoughtcrime” quite yet, we are entering an age where government surveillance is fast becoming the norm and facial recognition technology stalks the streets.

In the midst of the COVID-19 pandemic, many states have turned to facial recognition technology as a way to combat the spread of the virus by tracking quarantine evaders or gauging elevated temperatures of potentially infected individuals in crowds. However, without proper regulation, we have started to witness the often undetected spread of this technology, much like a virus taking over a host body. This raises the question: what will happen when we emerge into the post-coronavirus world? Will these often invasive technological powers be de-escalated when the threat has passed? Or will they be held in place under the pretence of public protection?

“Here’s looking at EU, kid!”

Much like the differing approaches taken by world governments in their efforts to contain COVID-19, different jurisdictions have tackled facial recognition regulation in varying ways, creating confusion. Most recently, the EU backtracked on imposing a five-year moratorium on the use of the technology. Early drafts of the European Commission’s policy on artificial intelligence (AI) indicated that there would be a ban so that potential abuses could be analysed. However, the final version of the EU White Paper merely identifies key risks. For example, “by analysing large amounts of data and identifying links among them, AI may be used to de-anonymise data…creating new personal data protection risks.” As a result, facial recognition should only be used when “subject to adequate safeguards.” But what is “adequate” in one member state may be completely different in another. EU countries have been left to their own regulatory devices, muddying the waters in their wake.


The UK’s approach typifies this problem. While the technology broadly falls under the Data Protection Act 2018/GDPR, the Protection of Freedoms Act 2012 and Article 8 of the European Convention on Human Rights (as given effect by the Human Rights Act 1998), there is no single instrument that addresses facial recognition and associated technologies in detail. Rather, we have a patchwork framework that is no match for such technological sophistication; a scary thought when the technology has already been deployed on an increasingly global scale.

As the world prepares to ease lockdown restrictions, one of the key considerations is how to mitigate a second wave. Enter contact-tracing apps. If a person starts displaying symptoms, they self-report on the app, which then alerts every user who has been in their proximity over the previous two weeks. The app does this by registering all nearby user phones via Bluetooth. Alarm bells should surely be ringing over privacy concerns.
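In rough outline, the mechanism is simple enough to sketch. The snippet below is a purely illustrative Python model (the names and structure are hypothetical, not drawn from any real app's code) of how a handset might log nearby phones over Bluetooth and how a single self-report could trigger alerts to everyone registered in proximity over the previous two weeks.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=14)  # proximity records are kept for two weeks


class TracingPhone:
    """Hypothetical handset-side model of Bluetooth contact tracing."""

    def __init__(self, user_id: str) -> None:
        self.user_id = user_id
        # (identifier broadcast by the other phone, time of contact)
        self.contacts: list[tuple[str, datetime]] = []

    def register_proximity(self, other_id: str, when: datetime) -> None:
        """Called whenever Bluetooth detects another app user's phone nearby."""
        self.contacts.append((other_id, when))

    def recent_contacts(self, now: datetime) -> set[str]:
        """Identifiers seen within the two-week retention window."""
        return {cid for cid, seen in self.contacts if now - seen <= RETENTION}


def alert_on_self_report(reporter: TracingPhone,
                         all_phones: list[TracingPhone],
                         now: datetime) -> list[str]:
    """When one user self-reports symptoms, alert every other user whose phone
    registered the reporter in proximity during the last two weeks."""
    return [phone.user_id for phone in all_phones
            if phone is not reporter
            and reporter.user_id in phone.recent_contacts(now)]
```

Even this toy version makes the privacy question obvious: everything turns on where those contact records are kept and who is allowed to query them.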

Much like the wider regulatory response to new artificial intelligence systems, various European countries have taken different approaches to this innovation. The key concern is how the data is processed and stored. NHSX (the digital innovation unit of the UK’s National Health Service, or NHS) is taking charge of the UK app, which is currently being trialled on the Isle of Wight. Unlike the Google and Apple model, it intends to store data from user interactions on a centralised server rather than on users’ phones, igniting fears that it will become a tool for state-sanctioned mass surveillance.

Italy and Germany have, by contrast, opted for a decentralised model that keeps data on users’ phones and forgoes GPS tracking. Yet, with such varying approaches across Europe, how can these systems be effectively regulated to prevent abuses? In April, a group of 177 cybersecurity experts signed an open letter to the UK government, citing fears that when COVID-19 has passed, data gleaned from the app could be misused. The group wrote that “such invasive information can include the 'social graph' of who someone has physically met over a period of time. With access to the social graph, a bad actor (state, private sector, or hacker) could spy on citizens.”
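The difference between the two architectures is easier to see side by side. The comparison below is again hypothetical rather than a description of any real implementation, but it illustrates why a centralised design hands the operator the very “social graph” the signatories warn about, while a decentralised design keeps contact lists on the handset and publishes only the anonymous identifiers of those who report symptoms.

```python
# Centralised model (as described for the NHSX pilot): every phone uploads its
# proximity records, so the server can reconstruct who has met whom, i.e. the
# "social graph" the open letter warns about.
class CentralServer:
    def __init__(self) -> None:
        self.contact_graph: dict[str, set[str]] = {}

    def upload_contacts(self, user_id: str, contact_ids: set[str]) -> None:
        # The operator accumulates everyone's contact lists on its own server.
        self.contact_graph.setdefault(user_id, set()).update(contact_ids)

    def notify_exposed(self, reporter_id: str) -> set[str]:
        # The operator decides who gets alerted and can see the whole graph.
        return self.contact_graph.get(reporter_id, set())


# Decentralised model (the approach attributed above to Italy and Germany):
# the server only publishes anonymous identifiers of users who report symptoms;
# contact lists stay on the handset and are never uploaded.
class DecentralisedServer:
    def __init__(self) -> None:
        self.reported_ids: set[str] = set()

    def publish_report(self, anonymous_id: str) -> None:
        self.reported_ids.add(anonymous_id)


def check_exposure_on_device(local_contacts: set[str],
                             server: DecentralisedServer) -> bool:
    # Matching happens on the phone, so the operator never sees who met whom.
    return bool(local_contacts & server.reported_ids)
```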

Spectators at the feast

Governmental collection of data now appears limitless. Human Rights Watch suggests that COVID-19 may be used to spark the permanent deployment of these systems, much as the 2008 Beijing Olympics were used to establish China’s existing surveillance regime. Who can compel governments to scale back, particularly when there is no codified law on how data must be processed, stored or discarded?

Instead, states have used this gap in the framework to do as they wish. Contact-tracing is just the tip of the technological iceberg. We now face the roll-out of “Immunity Passports,” which would combine facial recognition technology and COVID-19 testing to phase people back to work. The passport requires users to upload a selfie and a photo of their ID to create a digital profile. The user would then take an antibody or antigen test to confirm their immunity status. Once at work, the app produces a QR code indicating whether they are infection-free and safe to enter. It sounds harmless. But wherever designs require the collection of personal information, abuses will be rife. A wealth of data would sit on user phones, which can be hacked to open up a treasure trove of account passwords, banking details, and anything else stored there – your immunity status, perchance? These innovations could, amongst other things, “supercharge” identity fraud: instead of fake IDs, fake immunity certificates, with facial scans and health records hacked to produce falsified results.
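To make that risk concrete, here is a rough sketch of what such a pass might look like under the hood. The scheme is entirely hypothetical (it describes no vendor's actual design): a signed payload containing a face hash, ID number and test result is bundled into a QR code and checked at the door. Everything needed to present the pass lives on the phone, which is precisely why a compromised handset, or a leaked signing key, would make forged certificates straightforward.

```python
import hashlib
import hmac
import json
from datetime import date

# Hypothetical issuer key; a real scheme would use an asymmetric key pair
# so that workplaces could verify passes without being able to forge them.
ISSUER_KEY = b"demo-signing-key"


def create_immunity_pass(selfie_bytes: bytes, id_document_no: str,
                         test_result: str, issued: date) -> dict:
    """Build the payload a hypothetical immunity passport app might store:
    a hash of the enrolment selfie, the ID number and the antibody/antigen
    test result, signed by the issuer and later encoded into a QR code."""
    payload = {
        "face_hash": hashlib.sha256(selfie_bytes).hexdigest(),
        "id_document": id_document_no,
        "test_result": test_result,      # e.g. "antibodies-detected"
        "issued": issued.isoformat(),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return payload


def verify_at_entrance(payload: dict) -> bool:
    """Workplace check: recompute the signature before trusting the QR code."""
    claimed = payload.get("signature", "")
    body = json.dumps({k: v for k, v in payload.items() if k != "signature"},
                      sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)
```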


Orwell predicted it in ‘1984.’ “No one ever seizes power with the intention of relinquishing it,” he wrote. “Big Brother” is watching us. But at the end of the day, is this not just the price we have to pay to begin life again when we emerge post-COVID? In the meantime, we must wade through the barrage of scaremongering and misinformation that has reigned throughout the global panic. The latest fear, according to The Telegraph this week, is that “Britons may be unable to travel abroad because of UK failure to join international tracing app system.” The truth of the matter is that we do not know how this will all work out.

We don’t know how the interconnectivity of tracing app systems will perform because it has not been tested. What we do know is that we are facing the worst crisis to befall the globe since the Second World War. In the fight against infection, it will be up to individuals to balance supposedly intrusive surveillance with being able to once again go about their daily business. As for the law-abiding authors of this article, there is only one choice: be recognised.

