Euroviews. Artificial intelligence is not the new Tower of Babel. We must beware of technophobia instead

An illustration of a futuristic Tower of Babel. Copyright: Midjourney/Euronews
By Prof Ioannis Pitas, Chair of the International AI Doctoral Academy (AIDA)
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

The positive impact of AI systems can greatly outweigh their negative aspects if proper regulatory measures are taken. Technophobia is neither justified nor a solution, Prof Ioannis Pitas writes.

Amid growing fears that increasingly ubiquitous artificial intelligence could spin out of control, perhaps it is worth beginning with a parable in the style of the ancients to help illustrate the situation.

Once upon a time, the hugely prosperous AIcity — let's call it that — grew at an astonishing pace. 

Its AImasons began by building fine, sophisticated houses, then low-rises. As this proved a highly profitable undertaking, they moved on to more complicated high-rises, using more or less the same technologies. 

A few cracks started appearing here and there, but nobody paid close attention. The AImasons were so fascinated by their success that they began building very tall skyscrapers, aptly named "AI towers of Babel", simply by scaling up the same construction techniques at a frantic pace. 

Their AI towers could house many thousands of inhabitants. However, no AImason could really understand why such complex buildings functioned so well. 

At the same time, cracks and mishaps continued happening at an alarming rate.

Nobody knows what to do, everybody expects the worst

Now, the AImasons grew truly worried: What is the source of the technical problems? Is there any chance that these AI towers will collapse? Have we already crossed the safe height limit?

The AI tower owners had more materialistic concerns: What happens if the towers collapse? Who will reimburse the victims?

What regulations and legislation apply in such cases? What is the competition doing? How can we outsmart it?

Originally, the city's population was fascinated by living in these wonderful AI towers and awed by their sheer size. 

Visitors watch flowing data sculptures at the exhibition "Refik Anadol. Machine Hallucinations" at the Kunstpalast art museum in Duesseldorf, May 2023. AP Photo/Martin Meissner

However, quite a few of them grew concerned when they saw inexplicable problems here and there and projected them into the future. 

They kept asking: are we really capable of creating such huge, complex constructions, and are we safe in such a city?

The AIcity Government was too busy with other pressing problems and did not care to address all these issues.

In short: nobody knew what to do, but very many started fearing the worst.

The parable ends here — and I promise it wasn't an AI chat-generated one.

AI enthusiasm is laced with technophobia

Yet this is the current state of affairs when it comes to generative AI and Large Language Models (LLMs) like ChatGPT. AI enthusiasm is, in fact, laced with technophobia. 

This is natural for the general public: they like new exciting things, but they are afraid of the unknown.

What is new is that several prominent scientists have become techno-sceptics, if not technophobes themselves. 

The scientists and industrialists asking for a six-month ban on AI research, and the scepticism of the top AI scientist Prof Geoffrey Hinton, are two such examples. 

Computer scientist Geoffrey Hinton, who studies neural networks used in artificial intelligence applications, poses at Google's Mountain View HQ in March 2015. Noah Berger/AP

The only historical parallel I can recall is the criticism of atomic and nuclear bombs by part of the scientific community during the Cold War. Luckily, humanity managed to address those concerns in a rather satisfactory way.

Of course, everyone has the right to question the current state of AI affairs. For one, nobody knows why Large Language Models work so well, nor whether they have a limit. 

There is also a real danger that bad actors might create "AI bombs", particularly if governments remain passive bystanders when it comes to regulation.

These are legitimate concerns that fuel the fear of the unknown, even among prominent scientists. After all, they are humans themselves.

We need to maximise AI's positive impact

However, can AI research stop, even temporarily? In my view, no, as AI is humanity's response to a global society and a physical world of ever-increasing complexity. 

The processes driving this increase in physical and social complexity run deep and seem relentless. AI and citizen morphosis are our only hope for a smooth transition from the current Information Society to a Knowledge Society. 

Otherwise, we may face a catastrophic social implosion.

The solution is to deepen our understanding of AI advances, speed up AI development, and regulate AI's use so as to maximise its positive impact while minimising its negative effects, both those already evident and those still hidden. 

The empty driver's seat is shown in a driverless Chevy Bolt car named Peaches during a ride in San Francisco, September 2022. AP Photo/Michael Liedtke

AI research can and should become different: more open, democratic, scientific and ethical. And to that effect, there are ways in which we could approach the issue in a constructive manner.

For one, the first word on important AI research issues with far-reaching social impact should belong to elected parliaments and governments rather than to corporations or individual scientists.

Every effort should be made to facilitate the exploration of the positive aspects of AI in social and financial progress and to minimise its negative aspects.

The positive impact of AI systems can greatly outweigh their negative aspects if proper regulatory measures are taken. Technophobia is neither justified nor a solution.

There are dangers to democracy and progress, but they can be dealt with

In my view, the biggest current threat comes from the fact that such AI systems can remotely deceive large numbers of citizens who have little or average education and/or limited capacity for critical scrutiny. 

This can be extremely dangerous to democracy and any form of socio-economic progress.

In the near future, we should counter the big threat coming from the use of LLMs and/or CANs in illegal activities (cheating in university exams is a rather benign use within the space of related criminal possibilities).

Bella Whitice talks with classmate Katherine McCormick as they try and outwit the "robot" that was creating writing assignments in a Kentucky elementary school, February 2023. AP Photo/Timothy D. Easley

Furthermore, their impact on labour and markets will be very positive in the medium to long run.

To help with that, AI systems should, in my opinion: a) be required by international law to be registered in an "AI system register", and b) notify their users that they are talking with, or using the results of, an AI system.

As AI systems have a huge societal impact, key advanced AI technologies should become open, so as to maximise the benefits for socio-economic progress.

AI-related data should be (at least partially) democratised, again to maximise benefits and socio-economic progress.

We can allow progress while maintaining regulatory mechanisms, too

Proper, strong financial compensation schemes, for instance technology patenting and obligatory licensing schemes, must be foreseen for AI technology champions, both to compensate for any profit loss due to the aforementioned openness and to ensure strong future investment in AI R&D.

The balance of AI research between academia and industry should be reworked to maximise research output while maintaining competitiveness and rewarding the R&D risks undertaken.

Attendees watch a demonstration of Unity's enemy artificial intelligence (A.I.) system at their booth at the Game Developers Conference 2023 in San Francisco, March 2023. AP Photo/Jeff Chiu

Education practices should be revisited at all levels to maximise the benefit of AI technologies while creating a new breed of creative and adaptable citizens and (AI) scientists.

And finally, proper AI regulatory, supervisory, and funding mechanisms should be created and beefed up to ensure the above.

Perhaps then, the allegory above will prove to be nothing more than a (mildly) entertaining fable.

Dr Ioannis Pitas is a professor at the Aristotle University of Thessaloniki – AUTH and the Chair of the International AI Doctoral Academy (AIDA), a leading pan-European AI studies instrument.

At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.
