Euroviews. AI has been dubbed a 'nuclear' threat to cybersecurity. But it can also be used for defence

An illustration of a person working on their laptop (Midjourney/Euronews)
By Rodrigo Loureiro, Cybersecurity expert
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Things will become more dystopian if organisations continue to ignore rampant problems instead of dealing with the uncomfortable world we’ve created, Rodrigo Loureiro writes.


Many businesses are hesitant to allow cybersecurity employees to use AI tools in their work, fearing the field is unregulated and still underdeveloped. At the same time, key thinkers from various industries have recently signed an open letter demanding a halt to AI experiments more powerful than GPT-4.

Some even say the letter doesn't go far enough and that society isn't ready to handle the ramifications of AI.

Unfortunately, Pandora's box has already been opened, and those pretending we can reverse any of these innovations are delusional.

It's not a new invention, either: we’ve been interacting with limited models for years. 

Can you count the times you’ve used a website’s chatbot, your smartphone assistant, or an at-home device like Alexa?

AI has infiltrated our lives just as the internet, smartphones, and the cloud did before it. Fear is justifiable, but what should really concern companies is cybercriminals and the growing sophistication of their attacks.

Outgunned and outsmarted

Hackers using ChatGPT are faster and more sophisticated than before, and cybersecurity analysts who don’t have access to similar tools can very quickly find themselves outgunned and outsmarted by these AI-assisted attackers. 

They're using ChatGPT to generate code for phishing emails, malware, and encryption tools, and even to create dark web marketplaces.

The possibilities for hackers to use AI are endless, and, as a result, many analysts are also resorting to the unauthorised use of AI systems just to get their job done.

The OpenAI logo seen on a mobile phone in front of a computer screen displaying the ChatGPT home screen, March 2023 (AP Photo/Michael Dwyer)

According to Help Net Security, 96% of security professionals know someone using unauthorised tools within their organisation, and 80% admitted to using prohibited tools themselves.

This shows that AI is already a widely used asset in the cybersecurity industry, mostly out of necessity.

Survey participants even said they would opt for unauthorised tools because of a better user interface (47%), more specialised capabilities (46%), and more efficient work (44%).

Fatal flaws that can be exploited

Corporations are scrambling to figure out governance around AI, but while they do so, their employees are clearly defying rules and possibly jeopardising company operations.

According to a Cyberhaven study of 1.6 million workers, 3.1% have input confidential company information into ChatGPT. Although that number seems small, 11% of users' queries include private information.

This can include names, social security numbers, internal company files, and other confidential information.

Microsoft CEO Satya Nadella speaks during a keynote address announcing ChatGPT integration for Bing at Microsoft in Redmond, February 2023 (Jason Redmond/AFP)

ChatGPT learns from every conversation it has with its users, and it can regurgitate that information if probed correctly.

This is a fatal flaw for corporate use, given that hackers can manipulate the system into handing over previously hidden information.

More importantly, once incorporated into a corporate server, the AI will also know the company's security mechanisms.


Armed with that information, an attacker could successfully obtain and distribute confidential information.

We can't halt innovation

Whether it be the cloud or the internet, the integration of new technologies has always caused controversy and hesitation. 

But halting innovation is impossible when criminals have gained access to advanced tools that practically do the job for them. 

To properly address this issue around society's security, companies must apply established governance rules to AI.

Reusing historically proven procedures would allow companies to catch up with their attackers and eliminate the power imbalance.
An agent at the operational centre of the French National Cybersecurity Agency (ANSSI) checks data on a computer in Paris, November 2022 (Thomas Samson/AFP)



Streamlined regulation among cybersecurity professionals would allow companies to oversee what tools employees are using, when they are utilising them, and what information is being input. 

Contracts between technology providers and organisations are also common for corporate cloud usage and can be applied to the nebulous sphere of AI.

We can only create safe, controlled environments

We've passed the point of no return, and critical adoption is the only way to live in an AI-driven world.

Heightened innovation, increased public accessibility, and ease of use have given cybercriminals an upper hand that is proving hard to reverse.

To turn things around, companies must embrace AI in a safe, controlled environment. 


The advanced tech is almost uncontrollable, and cybersecurity analysts need to learn how it can be utilised responsibly. 

A computer screen showing the home page of the OpenAI website, displaying its ChatGPT bot, in Turin, March 2023 (Marco Bertorello/AFP)

Employee training and the development of enterprise tools would strengthen cybersecurity procedures until an industry giant like Microsoft transforms the industry with something like its recently announced security analysis tool, Security Copilot.

In the meantime, companies must stop sticking their heads in the sand, hoping for reality to change.

Things will become more dystopian if organisations continue to ignore rampant problems instead of dealing with the uncomfortable world we’ve created.

Rodrigo Loureiro is a cybersecurity expert. He is the CEO of NewPush and the founder and managing partner of CyberVerse Advisors, based in Washington, DC.


At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.

