Euroviews. Can the EU AI Act embrace people's needs while redefining algorithms?

European Commission building with EU AI Act flags, illustration (Copyright Midjourney/Euronews)
By Yonah Welker
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

The EU's flagship AI legislation doesn’t cover all cases, scenarios, or terminology, but it may complement existing legislation of the member states, including accessibility, equality or social protection laws, Yonah Welker writes.


On 8 December, negotiators from the European Parliament and the Council reached a provisional agreement on the Artificial Intelligence Act. 

The text that emerged from the trilogue had to be voted on by both the European Parliament and the Council. 

The approval of both institutions was needed for the political compromise to become law. Last week, the final text of the AI Act was shared with the member states. It will be followed by feedback from the Technical Working Group and the member states by Friday.

Following earlier comments from member states such as France, the final text has left the public with questions about how the regulation will affect SMEs, innovation and technology transfer mechanisms in the region. 

In particular, along with the widely discussed foundation models, the EU ecosystem is known for producing a variety of successful technology projects addressing the accessibility of public spaces, cognitive and sensory impairments, and medical and educational needs, such as social robotics for children with autism, eye-tracking systems supporting reading impairments and dyslexia, smart tablets for individuals with visual impairments, and a variety of rehabilitation and medical technologies. 

These technologies are associated with the use of emotion recognition, eye-tracking, and complex data input from several stakeholders. 

In November, the OECD issued a report and repository covering over a hundred cases of AI-powered assistive, labour and workplace technologies supporting people with disabilities.

The final provision of the AI Act expands the categories of high-risk and unacceptable-risk systems; introduces stricter compliance requirements for emotion recognition, publicly accessible workplaces and educational spaces; and adds dedicated rules addressing general-purpose AI, “specific transparency risks”, and a list of special cases, exceptions and comments, including biometric systems and law enforcement.

How might these rules balance facilitation and protection for emerging systems and technology transfer, specifically for assistive and public algorithms associated with designated groups and complex adoption processes?

Provisions, classifications and algorithms

Through the lens of assistive, accessibility and social-related categories and algorithms, several components of the AI Act’s provisions should be highlighted:

First, along with applications that may manipulate human behaviour, governmental ‘social scoring’ and certain applications of predictive policing, it introduces a full prohibition on particular public and real-time uses of biometrics and emotion recognition systems. 

In particular, it prohibits the use of emotion recognition in four public contexts: workplaces, educational institutions, law enforcement and migration. 

A Greek police officer operates a long range acoustic device (LRAD) during a patrol along the Greek-Turkish border near the town of Feres, May 2021 (AP Photo/Giannis Papanikos)

It also prohibits systems aimed at categorising or “scoring” people, including real-time remote biometric identification for law enforcement purposes in publicly accessible spaces, except in “exhaustively listed and narrowly defined situations”.

Second, the provision also moves general cases of “identifying or inferring emotions or intentions of natural persons based on their biometric data” into the “high-risk” category, which also covers systems related to certain critical infrastructures, medical devices, systems determining access to educational institutions or recruitment, law enforcement and justice. 

And even though not every “affective computing” system will fall into the high-risk category, under the current provision this category requires informing users that such algorithms are in place, introducing risk mitigation and human oversight, and ensuring high-quality data sets, robustness, accuracy and detailed documentation (which may involve regulatory sandboxes).

Transparency at the core

Third, the document introduces the category of “specific transparency risk”: when employing AI systems such as chatbots, users should be aware that they are interacting with a machine. 

AI-generated content will have to be labelled as such, and users will need to be informed when biometric categorisation or emotion recognition systems are being used. In particular, assistive uses of emotion and expression recognition systems may include social assistants, robotics or smart devices supporting people with autism spectrum disorders or cognitive impairments associated with disrupted emotion, expression and social processing.

Fourth, the AI Act introduces dedicated rules for general-purpose AI. Such models, covering broad applications of text, image and speech recognition and conversion, pattern detection and language-based mechanics, may fuel existing accessibility, assistive and accommodation ecosystems requiring communication or interaction with the patient or student. 

People reflected in a window of a building on the Davos Promenade with a slogan about AI during the World Economic Forum in Davos, January 2024 (AP Photo/Markus Schreiber)

Such algorithms may be used in a variety of social and public contexts, in education, or to fuel therapy bots, personal assistants, and tools for cognitive or sensory impairments, autism spectrum disorders or dyslexia. 

Under these provisions, such models should ensure transparency across all stages of development. 

For very powerful models that could pose systemic risks, there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation and adversarial testing. In addition, organisations behind open-source large language models must disclose their training data.

Finally, the policy provisions introduce mechanisms of fines and penalties. Fines would range from €35 million or 7% of global annual turnover for violations involving banned AI applications to €15 million or 3% for violations of other obligations. More proportionate caps are foreseen for administrative fines on SMEs and start-ups.

Balancing facilitation and protection

Despite tougher compliance requirements for particular categories, it’s important to remember that the typology and risk-based mechanism of the EU AI Act doesn’t come out of a vacuum, but rather expands a mechanism that was already in use and has evolved over time.


In particular, it follows the history of evaluation criteria leveraged by technology transfer, SME and innovation ecosystems such as Horizon 2020, the EU's funding programme for research and innovation, with nearly €80 billion of funding available over seven years. 

European Commission President Ursula von der Leyen delivers her speech at the European Parliament in Strasbourg, January 2024 (AP Photo/Jean-Francois Badias)

Such programmes leveraged safety, privacy and impact among the criteria used to evaluate applications. 

Over time, particular programmes were complemented with other specific criteria, including cross-member state social and economic impact and the risk mitigation techniques used. 

It also logically follows the nature of existing member state data and privacy legislation and EU-wide frameworks such as the GDPR, which previously addressed designated groups, profiling and biometric systems.

Besides, since the initial proposal of the EU AI Act in 2021, the European Commission has received appeals and comments addressing AI and disability rights, the use of biometric, facial and emotion recognition systems, and cases affecting refugees and immigrants, including automated risk assessment and profiling systems. These considerations also affected the act’s provisions.


The public, stakeholders, adoption and the way forward

Since its inception, the AI Act has aimed to become the world’s first document of its kind to support the deployment of more human-centred systems and algorithms, ensuring both their facilitation and regulation. 

It has sent a powerful signal to the technology ecosystem, moving it from the black box of the evaluator’s work into broader society. 

It was followed by similar regulations and efforts on the global stage, including the UK Safety Summit and the Bletchley Declaration, the US AI directive, and special frameworks driven by UNESCO, the WHO and the OECD addressing large language models in the areas of health, education, labour and work, and public systems, with input from specialised institutions, researchers, experts and stakeholders. 

The final provision of the act was also welcomed by the Digital SME Alliance, which highlighted a “risk-based approach that is promoted through precise, limited and internationally recognised definitions and thresholds” and the establishment of regulatory sandboxes and real-world testing opportunities.

As for designated groups and special categories of assistive technologies, the AI Act does not prohibit the use of algorithms in this area but introduces additional compliance requirements. 


At the current stage, it doesn’t cover all cases, scenarios, typologies and terminology, but it may complement existing legislation of the member states, including accessibility, equality or social protection laws. 

Finally, to ensure better social protection and access for vulnerable groups, minors or special use cases (such as platform manipulation, invisible risks or addictive algorithmic design, also reflected in Article 5.1 of the AI Act), it might be complemented by non-AI legislation such as the Accessibility Act, particular articles of the Digital Services Act, and special directives and guidelines.

Yonah Welker is an AI expert, technology explorer and public voice for algorithmic diversity working on European Commission-funded research, technology and innovation projects.

At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.

