Sam Altman says 'potentially scary' AI is on the horizon. This is what keeps AI experts up at night

Is AI going to be beneficial for society or will there be "potentially scary" tools that could be disruptive or even risky to our way of life?
By Luke Hurst

OpenAI CEO Sam Altman has said "potentially scary" uses for AI are on the horizon - but some experts say we are already in a "dystopic present".


The world may not be “that far from potentially scary” artificial intelligence (AI) tools, the CEO of OpenAI said over the weekend.

Sam Altman, whose company created the wildly popular ChatGPT, was giving his thoughts on the current and future state of AI in a Twitter thread, following the explosion in public interest in generative AI tools.

Some experts, however, have told Euronews Next that rather than “potentially scary” AI applications being around the corner, we are currently living in a “dystopic present” thanks to the use of AI in sensitive settings that have a real impact on people’s opportunities.

Altman was speaking out following the integration of ChatGPT in Microsoft’s Bing search engine, which a number of tech experts and journalists put to the test - with some terrifying results.

During a two-hour chat with a New York Times tech columnist, Bing professed its love for him, tried to break up his marriage, and told him "I want to be alive".

Others have reported threats of violence and blackmail emanating from the chatbot, which is still in its testing phase.

Altman said in his Twitter thread “the adaptation to a world deeply integrated with AI tools is probably going to happen pretty quickly,” while admitting the tools were “still somewhat broken”.

"Regulation will be critical and will take time to figure out," he said, adding that “although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones".

So, what do AI ethics experts - the people who are thinking ahead and trying to shape the future integration of AI into our everyday lives - think about this?

'The dystopic present'

While Altman claims "current-generation AI tools aren’t very scary," some experts disagree.

Sarah Myers West, Managing Director of the AI Now Institute, told Euronews Next that "in many senses, that’s already where we are," with AI systems already being used to exacerbate "longstanding patterns of inequality".

AI Now is an American research institute studying the social implications of artificial intelligence, putting them at the forefront of thinking around the challenges that AI poses to society.

"They're used in very sensitive decision-making processes, often with very little oversight or accountability. So I think that we're already seeing that unfold around us. And that's exactly what's animating the drive to look at policy approaches to shape the direction that it takes," Myers West said.

These sensitive decision-making processes include hiring processes and education.

"One area, just as one example of many, is the use of emotion or affect recognition. Which is essentially the claim that you can infer people's inner emotional states or mental states from their facial features, and that there are particular AI systems that can read people's emotional states and even their personality traits," Amba Kak, AI Now’s Executive Director, said.

I think we're also then ceding ground to the fact that a handful of tech companies would essentially have tremendous and unjustifiable control and power over societies and over people's lives.
Amba Kak
Executive Director, AI Now

These AI systems are based on scientific foundations that are “shaky at best,” and they are “actually shaping people’s access to opportunity in real-time," she added.

"So, there's an urgent need to restrict these systems".

Kak and Myers West both push back on the idea of a dystopian future, as for them, in some ways, we are living in a “dystopic present”.


"Yesterday is the right time to introduce friction into that process to redistribute that power," argues Myers West.

"Let's say we accept that these technologies are a kind of inevitable future," said Kak.

"I think we're also then ceding ground to the fact that a handful of tech companies would essentially have tremendous and unjustifiable control and power over societies and over people's lives, and over how eventually - and I don't think this is hyperbolic to say - even the autonomy we have to think, given just how much algorithms are shaping our information flows and so many aspects of our lives".

To say that AI is not currently regulated would, however, be a misconception, Kak explained.

While the EU and the US are drawing up their AI regulatory frameworks, there are at least indirect regulations already in place.


The data and computational infrastructures that make up the components of current AI technologies are "already regulated at many different levels," for example with data protection laws in the EU.

Other kinds of AI systems are already regulated in many countries, especially regarding facial recognition and biometrics, she added.

Regulation means the ability to shape the direction these technologies can take us, Kak says, with it being "less as a kind of constraining force and more as a shaping force in terms of how technologies develop".

What’s coming in terms of regulation, and why?

According to the Organisation for Economic Co-operation and Development’s (OECD) AI Policy Observatory, there are already 69 countries and territories with active AI policy initiatives, but most significantly the EU is currently drafting its own AI Act, which will be the first law on AI put in place by a major regulator.

Currently, the act divides AI into four risk-based categories, with those posing minimal or no risk to citizens - such as spam filters - being exempt from new rules.


Limited-risk applications include things like chatbots, and will require transparency to ensure users know they are interacting with an AI.

A few years ago we could not imagine many of the capabilities that AI is now supporting in our personal lives and in the operations of many companies.
Francesca Rossi
Fellow and AI Ethics Global Leader, IBM

High risk could include using AI for facial recognition, legal matters, or sorting CVs during employment processes. These could cause harm or limit opportunities, so they will face higher regulatory standards.

AI deemed an unacceptable risk - in other words, systems that are a clear threat to people - "will be banned," according to the European Commission.

Thierry Breton, the European Commissioner for the Internal Market, recently said the sudden rise in popularity of applications like ChatGPT and the associated risks underscore the urgent need for rules to be established.

According to Francesca Rossi, an IBM fellow and the IBM AI Ethics Global Leader, “companies, standard bodies, civil society organisations, media, policymakers, all AI stakeholders need to play their complementary role” in achieving the goal of making sure AI is trustworthy and used responsibly.


"We are supportive of regulation when it uses a 'precision' risk-based approach to AI applications, rather than AI technology: applications which are riskier should be subject to more obligations," she told Euronews Next.

“We are also supportive of transparency obligations that convey the capabilities and limitations of the technology used,” she added, noting this is the approach the EU is taking with its AI Act.

She champions a company-wide AI ethics framework at IBM, a company that is supporting the rapid transformation towards the use of AI in society, "which always comes with questions, concerns, and risks to be considered and suitably addressed".

"A few years ago we could not imagine many of the capabilities that AI is now supporting in our personal lives and in the operations of many companies," Rossi said.

Like the representatives of the AI Now Institute, she believes "we as society and individuals" must steer the trajectory of AI development "so it can serve humans' and the planet's progress and values".


AI could 'disrupt the social order'

One of the particular fears expressed by Kak and Myers West about the rollout of AI systems in society was that the negative or positive impacts will not be distributed evenly.

"I feel like sometimes it might appear as if everybody will be equally impacted by the negatives, by the harms of technology, when actually, that's not true," said Kak.

"The people building the technology and people who inhabit similar forms of privilege, whether that's race privilege, class privilege, all of these things, it feels like those are people that are unlikely to be as harmed by seeing a racist hiring algorithm. And so the question to ask is not just will AI benefit humanity, but who will it work for and who will it work against?"

Are we using technology in a way that is safe, just, and equitable? Are we helping citizens, residents, and employees flourish?
Joanna Bryson
Professor of Ethics and Technology, Hertie School of Governance

This is also an area of interest for Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin.

For her, the boom in AI could turn out to be a period of technological progress that "disrupts the current social order" while leaving some people behind.


"I think society's only stable when we produce those kinds of contexts, that people have an idea of where they belong and how they fit in," she told Euronews Next.

"And they're proud of their job and they're willing to go and compete for this, and they make enough money to be reasonably stable. And so what I'm really worried about is that I just think we're probably going through these periods when technology disrupts the social order we had".

“In the long term, if you aren't keeping people happy and interested and engaged and healthy, and they're adequately well paid and everything else, you're not going to have a secure society".

Writing on her blog at the end of 2022, regarding the question of her biggest concerns around AI ethics, Bryson said the biggest challenges involving AI are around digital governance.

"Are we using technology in a way that is safe, just, and equitable? Are we helping citizens, residents, and employees flourish?" she asked.


With the EU still fleshing out its AI Act ahead of presenting it to the EU Parliament at the end of March, these questions may remain unanswered for some time.

In the meantime, Myers West wants to emphasise that "we have tremendous scope to shape the direction of where our technological future takes us".

"I think that it's really important that these policy conversations proceed in exactly that vein, ensuring they're working in the interests of the broader public and not just in the imaginations of those who are building them and profit from them," she said.

