'We need to demand more scrutiny': How can tech companies better address the problem of AI bias?

"Internet-trained models have internet-scale biases," said researchers in 2020.
"Internet-trained models have internet-scale biases," said researchers in 2020. Copyright Canva
Copyright Canva
By Oceane Duboust

Artificial intelligence (AI) tools - from algorithms to generative software - have in-built biases. So, why hasn’t the problem been fixed?


For years, it has been acknowledged that artificial intelligence (AI) tools, including decision-making algorithms and generative software, can have human-like biases. So, why do the most recent releases still exhibit them?

Part of the answer is that large language models such as OpenAI's GPT are trained on vast amounts of data, which makes tracing the sources of bias complex.

For example, GPT has been trained on data from the Internet - the details of which were not disclosed. OpenAI's own researchers said in 2020: “Internet-trained models have Internet-scale biases; models tend to reflect stereotypes present in their training data".

When GPT-3 was released, it was found to reproduce sexist, racist, and religious stereotypes.

"There should be measures in place at (the development) stage to identify potential biases and to mitigate and address those," Dr Mhairi Aitken, an Ethics Fellow in the Public Policy Programme at The Alan Turing Institute, told Euronews Next.

Globally, there is a growing push for more transparency, including support for open-source AI development, along with demands for greater scrutiny of Big Tech companies and accountability for how they minimise and address bias.

"I think we need to demand a lot more scrutiny in the development phase and a lot more kind of accountability of big tech companies to know what they're doing to minimise and address bias," she added.

And if these tools are so flawed, why were they released to the public in the first place?

"Their release was driven by commercial competitiveness rather than value to society," said Aitken.

AI tools still a work in progress

There are efforts underway to create more responsible AI, and that includes learning from previous releases of the technology.

Companies have put safeguards in place to prevent misuse, such as OpenAI's moderation API, which is meant to flag harmful content.

“Do we have to reproduce society as it is? Or do we have to represent society as we would like it to be? But then, according to whose imagination?”
Giada Pistilli
Principal Ethicist at Hugging Face

The US government has also helped to coordinate a hacking convention this year to publicly evaluate AI systems.

Think of it as a massive "red-team" exercise to explore how things can go wrong - a technique widely used in cybersecurity.

Another technique used to address bias is reinforcement learning from human feedback, Giada Pistilli, principal ethicist at Hugging Face, told Euronews Next.

This involves human workers rating the model's outputs so that it learns to produce more natural and less harmful responses.

But the process has its limits, as each person brings their own biases.

For Pistilli, we need to ask ourselves what we want from these models.

"Do we have to reproduce society as it is? Or do we have to represent society as we would like it to be? But then, if the answer is the second case, according to whose imagination?"


Anthropic, the company founded by former OpenAI researchers, created its own chatbot, Claude.

To make sure that Claude would behave as appropriately as possible, they endowed it with a constitution that "draws from a range of sources including the UN Declaration of Human Rights, trust and safety best practices and principles proposed by other AI research labs," according to the company’s blog.

"When these models are released, they're not finished products. So they're put out into the public domain for people to use them to further refine them, to test them to develop the next iteration of the model," said Aitken.

"And so when people are using them, especially when they're made freely available, you know, people are then part of the development process without necessarily an awareness of that".

Are AI companies really dedicated to ethics?

Companies producing AI publicly advertise having ethics teams, but some say it has become apparent that they are quick to sidestep ethics in the pursuit of innovation.

All of these decisions are human decisions, and they're never going to be bias-free.
Dr Mhairi Aitken
Ethics Fellow at The Alan Turing Institute

Aitken says we need "approaches that really ensure accountability".

"And it can't just be something which is done behind closed doors where nobody knows if they actually did anything to address the problems or the harms that were identified," she added.

Google's high-profile dismissal of Timnit Gebru, co-lead of the company's AI ethics team, in late 2020, followed by the firing of Margaret Mitchell, another top ethics researcher, in early 2021, led to scrutiny of tech companies' commitment to ethics.

The Washington Post also recently reported that layoffs across the tech industry have hit ethics teams particularly hard.

According to Pistilli, there is a growing awareness of what is at stake when it comes to bias in AI.


"I think there's going to be a growing need for people from the social and human sciences who are interested in these issues," she says.

But AI is fundamentally a product of people.

"Values are shaping the decisions about what datasets to use, how the models are being designed, about the ways they're developed or the functions that they're used for. All of these decisions are human decisions, and they're never going to be bias-free," said Aitken.
