'Mass surveillance, automated suspicion, extreme power': How tech is shaping EU borders

A Greek soldier guards a border wall along the Evros River, which forms the frontier between Greece and Turkey, on Friday, March 31, 2023. Copyright AP Photo
By Joshua Askew

The EU's frontier is starting to resemble a science fiction film, say experts.

Mixed reality glasses, unmanned underwater vehicles, 3D radars, radio frequency analysers, and 360 cameras – these aren't items from a sci-fi film.

They are what's being used at the EU's borders.

Since the 2015 Migration Crisis, which saw over one million people seek asylum in Europe, the EU and its partners have deployed increasingly powerful, cutting-edge technologies in their bid to “manage” migration.

At the border: ‘Extreme power’

The most striking examples are on Europe’s fringes.

Added to the list above are thermal imaging cameras, night-vision goggles, special sensors for detecting mobile phones, tracking devices and surveillance towers, which have been used in border zones to stop undocumented migrants crossing from Turkey or the Balkans.

If they are caught, migrants are often summarily expelled in an illegal practice known as pushbacks.

“Technology is making the border zones more and more dangerous for people on the move,” Caterina Rodelli, an analyst at the digital rights organisation Access Now, told Euronews. “It exacerbates the violence that is already there, giving border guards extreme power.”

A December report by the Border Violence Monitoring Network warned of “an unprecedented rise in violence at the EU’s border, including beatings, forced undressing and sexual assaults of migrants by state officials”.

Some 16,000 people have been affected by illegal expulsions, the network estimated.

The Greek army has been used to stop migrants entering from Turkey. Giannis Papanikos/Copyright 2020 The AP. All rights reserved

But tech isn’t confined to land.

“The public does not know what they are doing and the overall system of border management is opaque,” said Jacopo Anderlini, a researcher at Tactical Tech. “But the Mediterranean is under deep surveillance.”

He pointed to a “big, big increase” in the number of drones being flown over the sea by Frontex, the EU’s border and coastguard agency.

Frontex claims they are used to help rescue people and catch smugglers, but research suggests they are actually used to push people back, Anderlini told Euronews.

An investigation by Human Rights Watch and Border Forensics alleged that Frontex uses drones to spot migrant boats and notify the Libyan Coast Guard, which then intercepts them.

At least 25,000 people have drowned in the Mediterranean since 2014, according to Human Rights Watch.

According to Rodelli, the relatively lawless international waters of the Med have served as a perfect laboratory for trialling and refining state-of-the-art technologies.

“This is a context where impunity reigns,” she told Euronews. “It's the perfect testing environment. Authorities can try out systems in a way that won't create a backlash because no one can seek redress.”

"We have international human rights, but they are not implemented in practice when it comes to refugees or people without citizenship."

On the streets: ‘Mass surveillance’

Even deep inside the EU, tech is helping authorities detect and remove undocumented migrants as they go about their everyday lives.

A few years ago Greece – a frontline of the migration crisis – announced plans to equip around 1,000 police officers with smartphone-like devices capable of facial recognition and fingerprint identification, specifically to catch illegal immigrants.

Greek police said it would help improve efficiency and reduce hassle for civilians, though critics warned of huge risks to privacy, increased surveillance and potential abuses.

Studies show facial recognition systems often misidentify people of colour and can lead to wrongful arrests and convictions.

A security camera is seen on top of a pole on Monday, May 13, 2019. Gabrielle Lurie/San Francisco Chronicle

“Technology is pushing more and more people to the margins,” said Rodelli. “It means undocumented migrants are living in constant fear of being caught when they didn't necessarily have a pathway to get a residence permit in the first place.”

At the moment, such technologies are primarily used against migrants. However, the analyst feared they could eventually be rolled out on a more general level across the wider population.

“There’s huge potential that these systems will be repurposed and used against other categories of people,” Rodelli told Euronews.

“It's very likely that thermal cameras and drones used to detect people at the borders will be used in the outskirts of cities to detect the presence of the homeless or people in poverty that are unwanted in public spaces.”

Behind the scenes: ‘Automated suspicion’

Yet not all technologies are visible.

Rodelli pointed to an “inherently problematic” category of tech, such as AI systems, which are being used to automate decision-making and risk assessment within immigration procedures.

The UK tested an algorithm that automatically graded visa applications according to criteria rights campaigners slammed as “racist”, though the tool was eventually “redesigned”.

One advocacy group called the “streaming tool” “speedy boarding for white people”, noting that applications from Middle Eastern and African people were invariably deemed high risk.

Highly experimental dialect recognition systems are being used in Germany to decide whether asylum seekers really come from where they say they do.

Besides accuracy issues, Rodelli said these tools “inevitably” lead to discriminatory outcomes because they are designed around assumptions about who people are and what they do.

“These types of technology strengthen – and legitimise – an automated suspicion against anyone who does not hold European citizenship,” she explained.

Behind these newfangled border technologies is a highly lucrative industry, awash with public money.

Anderlini said this sprawling “public-private complex” involves arms companies, universities and other institutions, and is largely financed by EU taxpayers.

He raised serious concerns that, without proper scrutiny, corporations could commercialise the data gathered from undocumented migrants, selling it on for profit.

“Where’s this data going?” he asked. “The European Union is usually strict on these things. But it's difficult to get full control of what happens.”

Each year the EU spends more than 1.5 billion euros on research and development for security technology, with border management a top priority, according to a study by King’s College.

“It's a big problem,” said Rodelli. “The EU is pouring a lot of money into research projects that are looking into how to make the borders more securitized and more violent.”

“The European Union is already not only complicit but responsible for the human rights violations that have been happening on its borders for a very, very long time.”

‘We are watching’

With technology continually pushing into the unknown, Rodelli suggested better regulation was needed to ensure safety and accountability, though she said “systemic change” was vital.

The EU’s Artificial Intelligence Act seeks to regulate advanced technologies, which are developing at breakneck speed.

A coalition of civil society organisations, including Protect Not Surveil, have criticised it for failing to prevent irreversible harm in migration, undermining what they say is its "very purpose: Protecting the fundamental rights of all."

At the end of April, EU lawmakers will vote on the AI Act, signalling how they will regulate its use in the migration context. 

"With this vote, the European Parliament can demonstrate if it will centre fundamental rights in the AI Act regulation, or economic interests," said Rodelli. "We are watching."

“Of course, there are some problems with the design itself,” said Anderlini. “But the wider issue is not related to technology per se.”

“It's how we use it,” he added.

Frontex has been approached for comment.
