Euroviews. In 2024 elections, we have to protect minorities from AI-aggravated bias

US Secretary of State Antony Blinken speaks to the media prior to departure from Al Maktoum International Airport in Dubai, December 2023. Copyright AP Photo/Euronews
By Meera Selva
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Journalists must give a voice to the underrepresented and underprivileged communities at the receiving end of much of the misinformation that drives polarising narratives and undermines trust in democracy itself, Meera Selva writes.

2024 is going to be the year of elections driven by AI-boosted campaigning, global conflict, and ever more pervasive AI tools.

Some 2 billion people will go to the polls in 65 elections to select leaders who will have campaigned, communicated, and fundraised online, and who know their terms in office will be defined by the digital space.

Voting will happen in some of the most densely populated countries in the world, including Indonesia, India, and Mexico, where media has been upended by digital communications. 

And these elections will be among the first to take place after the sudden popularisation of generative AI technologies — casting further uncertainty on how they will play out. 

There is an argument that fears of AI are overblown, and most people will not have their behaviour altered by exposure to AI-generated misinformation. 2024 will offer some evidence as to whether or not that’s true.

Small groups will play big roles. Elections are now often so closely contested that the final result can be swung by a proportionally small number of voters. 

Mistrust or hostility towards one small group can end up defining the whole national debate. Communities of colour and immigrant communities can be disproportionately affected by misinformation at election time, both through conspiracy theories that undermine their trust in the process and through incorrect information on how to vote.

That is why the needs and voices of minority communities must be foregrounded in these elections. Whether AI tools will help or hinder that is still an open question.

A lack of editorial checks will make things worse

Some of the biggest dangers widely accessible AI technologies will pose in global elections stem from a lack of diversity in design and leadership.

There is already a trend for misinformation to spread via mistranslations — words that have different, often more negative connotations when translated from one language, usually English, to another. 

This will only worsen with AI-powered translations done at speed without editorial checks or oversight from native language speakers.

A silent protest in memory of murdered journalist Jan Kuciak and his girlfriend Martina Kusnirova, seen in photo, in Bratislava. AP Photo/Bundas Engler

AI tools can also be used to play on existing prejudices against minorities: in Slovakia’s elections this autumn, a purported audio recording of one candidate telling a journalist about a plan to buy votes from the Roma minority, a community that is structurally discriminated against and often viewed with hostility, spread fast on Facebook. 

Confirmation that the recording had been manipulated came too late: the candidate in question, Michal Simecka, lost to former Prime Minister Robert Fico, who returned to power after having resigned in 2018 following outrage over the murder of an investigative journalist.

Using tech to keep discriminating against others

In India, there are fears that popular AI tools are entrenching existing discrimination on lines of caste, religion and ethnicity. 

During communal riots in Delhi in 2020, police used AI-powered facial recognition technology to arrest rioters. Critics point out the technology is more likely to be used against Muslims, indigenous communities, and those from the Dalit caste as the country’s elections draw near.

These fears are backed up by research from Queen’s University Belfast, which showed other ways that the use of AI in election processes can harm minorities. 

If the technology is used for administering mailing lists or deciding where polling stations should be located, there is a real risk that this will result in minority groups being ignored or badly served.

Policemen direct voters to their respective booths at a polling station set up at a government-run school in Bengaluru, May 2023. Aijaz Rahi/AP

Many of the problems of diversity in AI-generated content come from the data sets the technology is trained on, but the demographics of AI teams are also a factor. 

A McKinsey report on the state of AI in 2022 shows that women are significantly underrepresented, and a shocking 29% of respondents said they have no minority employees working on their AI solutions. 

As AI researcher Dr Sasha Luccioni recently pointed out, women are even excluded from the way AI is reported on.

There are benefits to AI, too

It’s clear AI will play a significant role in next year’s elections. Much of that role will be beneficial: AI can be used to power chatbots that engage citizens in political processes and can help candidates understand messages from the campaign trail more easily.

I see this first-hand in my daily work: Internews partners with local, independent media outlets around the world that are creatively using AI tools to improve the public’s access to good information. 

In Zimbabwe, the Center for Innovation and Technology is using an AI-generated avatar as a real-time newsreader, which can have its speech tailored to local accents and dialects, reaching communities that are rarely represented in newsrooms. 

Filipino journalist Maria Ressa, 2021 Nobel Peace Prize winner and Rappler CEO, talks to reporters after being acquitted by the Pasig Regional Trial Court, September 2023. AP Photo/Aaron Favila

And elsewhere in Africa, newsrooms are using AI tools to detect bias and discrimination in their stories.

The same AI tools will almost certainly be used by malicious actors to generate deep fakes, fuel misinformation, and distort public debate at warp speed. 

The Philippines, for example, has had its political discourse upended by social media to the extent that its most famous editor, the Nobel Peace Prize-winning Maria Ressa, has warned that the country is the canary in the coal mine at the interface of technology, communications, and democracy: anything that happens there will happen in the rest of the world within a few years. 

There is pushback, however, and Filipino society is taking action: ahead of next year’s elections, media organisations and civil society have come together to create ethical AI frameworks as a starting point for how journalists can use this new technology responsibly.

Giving voice to those on the receiving end remains vital

But these kinds of initiatives are only part of the solution. Journalism alone cannot solve the problems posed by generative and other forms of AI in elections, just as it cannot solve the problems of mis- and disinformation. 

This is an issue regulators, technology companies, and electoral commissions must work on alongside civil society groups — but that alone also won’t suffice. 

It is vital that journalists give a voice to the underrepresented and underprivileged communities at the receiving end of much of the misinformation that drives polarising narratives and undermines trust in elections, and ultimately in democracy itself.

We didn’t pay enough attention to underserved communities and minority groups when social media first upended electoral processes worldwide, contributing to the democratic backsliding and division we see today. Let us not make the same mistake twice.

Meera Selva is the Europe CEO of Internews, a global nonprofit supporting independent media in 100+ countries.

At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.
