In October 2018 the European Union (EU) announced it was funding a new automated border control system to be piloted in Hungary, Greece and Latvia. Called iBorderCtrl, the project uses an artificial intelligence (AI) lie-detecting system fronted by a virtual border guard to quiz travellers seeking to cross borders. Travellers deemed to answer questions honestly by the system are provided with a code allowing them to cross, while those not so lucky are transferred to human border guards for further questioning.
iBorderCtrl is only one of many projects seeking to automate EU borders with the objective of countering irregular migration. This new tendency within Europe raises a series of serious human rights concerns.
iBorderCtrl’s technology is founded on “affect recognition science,” a widely contested discipline. Affect recognition claims to expose truths about someone’s personality and emotions through the analysis of their facial features. Proponents argue that emotions are “fixed and universal, identical across individuals, and clearly visible in observable biological mechanisms regardless of cultural context”. According to them, studying faces “produces an objective reading of authentic interior states”.
iBorderCtrl is built on this logic: that an AI facial recognition system, fronted by computer-generated border agents, can read people’s feelings.
Yet, as has been repeatedly demonstrated, AI facial recognition systems are inherently biased, learning the prejudices reflected in the data used to train them. The project’s claims of reducing the “subjective control and workload of human agents” while increasing “the objective control with automated means” are therefore misleading. Moreover, researchers across disciplines have shown that affect recognition does not stand up to scrutiny and is being applied in dangerously irresponsible ways. iBorderCtrl is a case in point.
While the project stresses that a ‘human border guard’ is always involved in entry refusals and such cases will never be determined solely through assessments made by AI, in practice this is an impossible guarantee. As Evelien Brouwer, Senior Researcher at the Amsterdam Centre for Migration and Refugee Law (Vrije Universiteit Amsterdam) explains: “considering the high number of travellers, the possible lack of sufficiently trained staff, and the political reality pressing for restrictive border policies, the risks that decisions will follow judgements made by the AI system is too high. Practically, it will be very difficult for the subject, the data protection supervisors and courts to test whether or not an entry refusal is based on automated decision making or not”.
Lack of transparency in the development of the technology is equally concerning on a practical level, evoking the ‘black box’ problem so often attributed to AI. Border agents will have to rely upon technology that they do not understand, and travellers are expected to trust an opaque system with little accountability.
Critically, iBorderCtrl is indicative of a wider trend in the EU of enhancing border monitoring capabilities through technology. For decades the EU has invested in securitising and militarising its borders, working towards the construction of what some describe as ‘Fortress Europe’. Although investments in traditional border security systems intensified in response to the growth in people seeking safety in Europe in 2015, an increasing interest in AI and big data has resulted in the proliferation of so-called ‘smart border’ automated security solutions.
We are therefore seeing the emergence of techno-solutionism in border monitoring systems across the EU, along with the advent of further human rights violations. The number of projects using automated technologies for border control purposes funded by Horizon 2020, the biggest EU Research and Innovation programme ever, is a clear indication of this trend.
Take ROBORDER, a project using technologies that once seemed a distant reality, as an example. Also in its pilot phase, ROBORDER is being tested on the island of Kos in Greece and at the Bulgarian-Serbian land border, amongst other places. The project offers so-called solutions to current border challenges through “unmanned mobile robots including aerial, water surface, underwater and ground vehicles, capable of functioning both as standalone and in swarms, which will incorporate multimodal sensors as part of an interoperable network.” This means the EU’s air, land and sea borders would be patrolled by swarms of robots alerting authorities to activity at borders whilst collecting large volumes of data to provide immediate and predictive overviews of situations.
The web of data harvested creates a predictive security system enabling border authorities to concentrate resources in designated areas. These predictive capabilities intensify security and surveillance both in the present and the future, continually expanding detection and tracking capabilities. Accordingly, a system such as ROBORDER risks exacerbating the human rights violations inflicted by Fortress Europe.
Using unmanned autonomous systems to securitise borders could also lead to robots being equipped not only with sensors, but with lethal capabilities. In addition to the Campaign to Stop Killer Robots, which highlights the very real prospect of such eventualities, lethal robots have previously been proposed to the EU for border security systems. The Bulgarian state-owned company Prono [wrote to Frontex](https://www.asktheeu.org/en/request/2306/response/8353/attach/3/PAD%202015%20vol%20III%2012%20Nov.pdf) about the development of a border security system with “manageable lethal influence on offenders without requiring constant monitoring by qualified personnel”. Considering borders have already been largely militarised, automated weapons systems at borders may not be such a distant reality.
Ultimately, the deployment of new technologies such as those discussed (and there are clearly many more) to automate EU border security systems raises multiple human rights concerns. While the past and present human rights implications of Fortress Europe have been widely catalogued, future ones, spurred on by technological shifts reshaping the EU’s border security landscape, urgently need addressing. Considering how much human suffering and grief border policies have caused, and the EU’s growing techno-solutionist approach to border security, it would be careless not to scrutinise the new technological developments defining Europe’s fortress of tomorrow.
Lucien Begault is an Amnesty International researcher.