Euroviews. Can emerging AI strategies protect people with disabilities and other vulnerable groups?

A tablet used by nonverbal children to communicate, May 2023 - Copyright AP Photo/Euronews
By Yonah Welker
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Algorithms mirror the society that created them, which suggests that in order to address specific challenges, legislation should reflect society itself, not just the algorithms, Yonah Welker writes.


It’s just weeks since the Bletchley Declaration was signed by 28 countries, which agreed on a risk-based approach to frontier AI and on the areas, types and cases of risk it covers, including health, education, labour and human rights. 

It was followed by the US issuing its first AI executive order, requiring safety assessments, civil rights guidance and research on labour market impacts, accompanied by the launch of an AI Safety Institute. 

In parallel, the UK introduced its own AI Safety Institute and the Online Safety Act, echoing the approach of the European Union and its Digital Services Act.

Despite the general agreement, countries are still at different stages of deploying this vision, including forming oversight entities, building the required capacities, risk-based assessments and infrastructure, and connecting existing legislation, directives and frameworks. 

There are also different approaches to how this oversight should be enforced, ranging from the stricter approach in the EU, which has led to opposition from foundational model developers including Germany's Aleph Alpha and France's Mistral, to a rather “soft” one in the UK. 

There are even bigger questions related to specific and high-risk areas that require more attention, such as policing, justice and legal systems, health, education, and designated groups.

This is particularly important for groups such as individuals with disabilities, children, and vulnerable populations. 

A complex task for policy and technology

For instance, it’s known that many legal AI systems were trained without the participation of specific populations, leading to higher error rates against those groups. In some countries, governmental agencies have been accused of using data from social media without consent to confirm patients’ disability status for pension programmes. 

Immigrants tend to avoid medical examinations and tests for fear of being deported or facing unaffordable medical costs, so the statistics and public datasets simply do not reflect them.

Finally, AI may not properly identify individuals who lack limbs, those with facial differences, asymmetry, speech impairments, different communication styles or gestures, or those who use assistive devices. 

In another example, facial recognition systems may use ear shape or the presence of an ear canal to determine whether or not an image includes a human face. Yet this may not work for people with craniofacial syndromes or those lacking these features.

A doctor looks at a screen helped by AI for medical imaging which indicates possible bone fractures and dislocations at the university hospital in Rennes, September 2023 - DAMIEN MEYER/AFP

Since the initial proposal of the EU AI Act in 2021, the European Commission has received appeals and comments addressing AI and disability rights, the use of biometric, facial and emotion recognition systems, and cases affecting refugees and immigrants, including automated risk assessment and profiling systems.

However, research and development of disability-centred AI systems is still a complex task both from a technology and policy perspective. 

This complexity stems from its intersectional nature, condition-, age-, gender- and spectrum-specific parameters, and the involvement of multiple legal frameworks needed to properly address and protect these groups.

It increases the role of non-AI-specific frameworks such as the Accessibility Act, the EU Digital Services and Digital Markets Acts, the Convention on the Rights of Persons with Disabilities, and equality and child protection laws, as well as the involvement of specialised institutions and multilateral agencies, thus going beyond just forming generalised “AI Safety Institutes”.

EU, AI, designated groups and legal ecosystems

It’s important to remember that AI regulation addressing designated groups or persons with disabilities is not limited to one legal document but rather spread across a spectrum of legal frameworks, laws, conventions, and policies. 

In particular, such cases can be regulated or affected by “AI”-specific acts, related data, consumer and human rights frameworks, memorandums and conventions.

A child works on recognizing vowel sounds during a session in Omaha, January 2017 - Kristin Streff/LINCOLN JOURNAL STAR via AP

For instance, assistive technology used to support dyslexia or autism can be affected by articles of the AI Act, data regulation, specific laws protecting children and designated groups such as the Convention on the Rights of Persons with Disabilities, and country-specific equality and non-discrimination laws.


The European Commission started to explore ways to address AI systems and their specific risks even before the AI Act had been fully introduced. 

In particular, the pre-existing GDPR (General Data Protection Regulation) already contained articles related to the profiling of designated groups, the use of “race”- and “gender”-based data, the protection of minors, and biometric and facial recognition systems and data.

Creating a safer digital space

From the designated groups' perspective, the Digital Services Act (alongside the Digital Markets Act) also plays a significant role. 

Even though not AI-specific, it aims to “create a safer digital space where the fundamental rights of users are protected”. 

In particular, it introduces requirements concerning online platforms’ transparency, accountability and the explainability of the algorithms they use, the use of “dark patterns”, the protection of minors, targeting and profiling, privacy and consent, and manipulation, as well as the introduction of “trusted flaggers” and moderators, a feedback loop between platforms and stakeholders, and designated “digital services coordinators” in member states. 


These mechanisms help to better address user protection, cross-member-state cooperation, investigations and legal frameworks, including the involvement of relevant jurisdiction courts and authorities.

A woman with a cat inside her jacket checks her mobile phone as she takes rest in a reception center for displaced persons fleeing Ukraine, in Beregsurany, March 2022 - AP Photo/Darko Vojinovic

It’s accompanied by the 2025 Accessibility Act, which aims to improve the functioning of the internal market for accessible products and services by removing barriers created by divergent rules in member states.

Finally, these efforts are supported by multilateral agencies such as UNESCO, the WHO and the OECD, which are working on area-specific guidelines addressing generative AI in education and healthcare, as well as literacy- and capacity-oriented recommendations. 

These include UNESCO's AI competency frameworks for students and teachers, and a recommendation to set the minimum age for using generative AI at 13 years old. 

Moreover, its recent action plan to address disinformation and social media’s harms collected responses from 134 countries, including those in Africa and Latin America.


Specialisation, cooperation and way forward

Algorithms mirror the society that created them, which suggests that in order to address specific challenges, legislation should reflect society itself, not just the algorithms. 

Mitigating algorithmic risks to designated groups is a complex process, one that increases the role of non-AI-specific legislation and goes beyond just forming “AI Safety Institutes”.

It includes building risk categories and impact assessments specific to these groups, considering the social and historical factors behind the algorithms, ensuring access to data, and enabling multi-stakeholder oversight and participation.

Yonah Welker is an AI expert, technology explorer and public voice for algorithmic diversity working on European Commission-funded research, technology and innovation projects.

At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.
