Euroviews. Is generative AI truly making disinformation worse?

Deepfake clones of European Commission President Ursula von der Leyen, illustration - Copyright Euronews
By Kalim Ahmed
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Without tackling existing flaws within our current ecosystem, we are creating a fertile ground where disinformation and other harmful phenomena will continue to flourish, Kalim Ahmed writes.


In recent years, AI has dominated discussions across various sectors. 

Its swift integration and growth have outpaced the general public's ability to keep up with the developments. 

This has forced journalists, researchers, and civil society organisations to continuously highlight the drawbacks of these emerging technologies, and they are right in doing so.

The latest round of discussion was sparked by the release of Sora, OpenAI's text-to-video AI model, with many debating whether tools like these have any beneficial applications or an inherently negative impact on the world. 

Some opted for a more measured approach and sought to investigate the datasets used to train models like Sora. They ran the prompts demoed by OpenAI through Midjourney, a contemporary of Sora, and discovered striking similarities between the outputs of the two models. 

A tech columnist found that a hyperrealistic clip generated by Sora, which gained significant traction on social media, closely resembled stock footage from Shutterstock, a company partnered with OpenAI, suggesting the model had been trained on that material. This highlights the limited availability of high-quality datasets for training such models.

One of the significant concerns repeatedly raised in public discourse is the harmful impact of AI-assisted disinformation campaigns, particularly given the widespread availability of tools for creating deceptive content such as images, text, deepfakes, and voice cloning. 

The overarching concern is that these tools enable the production of misleading content at scale, potentially skewing public perceptions and even influencing behaviours. 

However, excessively highlighting the potential of AI-assisted disinformation shifts attention away from two important realities. 

Firstly, traditional disinformation methods continue to be effective. Threat actors may exploit today's AI models to propagate disinformation not because of their sophistication, but because our current infrastructures are flawed, enabling disinformation to flourish regardless of AI involvement. 

Secondly, generative AI may not be especially useful for disinformation itself; its application extends to other malicious activities that are being overlooked because of our singular focus on disinformation.

The flaw in our information ecosystem

The excessive attention given to the dangers of AI-enabled disinformation has somewhat reached the levels of science fiction. 

This is largely due to the way public discourse, particularly in popular news media, tends to sensationalize the topic rather than approaching it from a grounded, realistic perspective.

A prominent case worth examining is the incident from last year alleging a blast near the Pentagon, supported by an AI-generated image purportedly depicting the event. 

The initial assertions originated from unreliable sources such as RT and were swiftly amplified by television outlets in India.

One could argue that this was a successful demonstration of the harms of AI-assisted disinformation. Upon closer examination, however, it serves as a case in point highlighting a flaw within our current information infrastructure. 

The AI-generated image employed to substantiate the claim was of low quality; a more convincing counterfeit image could be produced using tools like Adobe Photoshop, potentially causing more significant harm. 

The Pentagon in Washington, March 2018 - AP Photo/Charles Dharapak

Has AI decreased the time needed for malicious actors to generate false information? Undoubtedly. However, the counterfeit image would have still spread rapidly as it was disseminated by premium users on X (formerly Twitter). 


For the public and many in traditional media, grasping the platform's complete transformation is challenging: a verified check mark no longer carries any meaning. There are now simply premium and non-premium users, and relying on the blue check mark to swiftly gauge the authenticity of a claim is obsolete. 

Furthermore, when traditional television news outlets disseminated the claim, they disregarded fundamental principles of media literacy. 

Since the assertion was being propagated by RT, a mouthpiece of the Kremlin, which is at war with Ukraine (a nation backed by the US and its allies), that context alone should have prompted additional verification. 

However, the visuals were promptly showcased on television screens across India without undergoing any form of cross-verification.

Zelenskyy's 'cocaine habit' lies did not need AI's help

There have been numerous instances of disinformation campaigns orchestrated by pro-Kremlin actors that don't rely on AI. 


The Microsoft Threat Analysis Center (MTAC) uncovered one such campaign where celebrity Cameo videos were manipulated to falsely depict Ukrainian President Volodymyr Zelenskyy as a drug addict. 

Celebrities were paid to deliver messages to an individual named "Vladimir," urging him to seek assistance for substance abuse. 

Moldova's President Maia Sandu greets Ukraine's President Volodymyr Zelenskyy in Bulboaca, June 2023 - AP Photo/Vadim Ghirda

Subsequently, these videos were doctored to include links, emojis, and logos, creating an illusion of authenticity, as if they were originally posted on social media. 

These videos were then covered by Russian state-affiliated news agencies, including RIA Novosti, Sputnik, and Russia-24. 

Repeatedly, disinformation campaigns orchestrated by unidentified pro-Russia actors have sought to mimic mainstream media outlets to disseminate anti-Ukraine narratives. 


They employ fabricated news article screenshots as well as recaps of news videos to achieve this. Neither of these methods necessitates AI; rather, they rely on traditional techniques like skilful manipulation using software such as Adobe Photoshop and Adobe After Effects.

The deficiencies are already in place

The uproar surrounding AI-driven disinformation also serves to protect Big Tech companies from being held accountable. 

To put it plainly, an AI-generated image, text, audio, or video serves little purpose without a mechanism to disseminate it to a wide audience. 

Investigative reporting has continuously shed light on the deficiencies within the advertising policies regulating Meta platforms (Facebook, Instagram, Audience Network, and Messenger). 

Last year, a report revealed a network of crypto scam ads operating on Meta platforms leveraging the images of celebrities. 

Taylor Swift performs at Amazon Music's Prime Day concert at the Hammerstein Ballroom in New York, July 2019 - AP Photo/Evan Agostini

These advertisements did not employ sophisticated AI; rather, they utilized manipulated celebrity images and exploited a flaw in Meta's ad policies. 

These policies allowed URLs of reputable news outlets to be displayed, but upon clicking, users were redirected to crypto scam websites. This deceptive tactic has been termed a "bait and switch". 

Similarly, a Russian disinformation campaign linked to its military intelligence has been using images of celebrities, including Taylor Swift, Beyoncé, Oprah, Justin Bieber, and Cristiano Ronaldo, to promote anti-Ukraine messages on social media by exploiting vulnerabilities in Meta and X's advertising policies. 

These loopholes represent just one of the numerous flaws within our current system. Given the demonstrated efficacy of political advertising through Big Tech, we must prioritize addressing such flaws.

We still don't know how to handle disinformation

As we enter the biggest election year to date (involving a whopping 45% of the world's population), it is becoming apparent that Big Tech companies are beginning to roll back some of their policies regarding misinformation and political advertising. 


In a major policy move last year, YouTube announced that it would no longer moderate misleading claims such as those that the 2020 presidential election was stolen from Trump, highlighting the continuing challenge of how to address misinformation and disinformation at scale, particularly on video-based platforms. 

Even Meta, which has typically touted its third-party fact-checking initiative, recently introduced new controls. Users now have the option to choose whether fact-checked posts are displayed prominently on their feed (accompanied by a fact-checked label) or pushed further down.

Facebook, Messenger and Instagram apps are displayed on an iPhone, March 2019 - AP Photo/Jenny Kane

However, this policy is not without its flaws. For instance, if an influential figure encourages all their followers to adjust their settings so that fact-checked posts remain prominent in their feeds rather than being demoted, it could defeat the entire purpose of fact-checking. 

After all, it is not the first time public personalities have attempted to game algorithms using their following. 

These major policy moves are happening in the US, where Big Tech companies are much more rigorous with their election interference and manipulation policies. 


It should be noted that, globally, Meta does not extend its fact-checking policies to political advertisements or posts made by politicians, a major policy gap that has been criticized for years. 

It's generally agreed by experts in the field of disinformation that these companies tend not to be as meticulous with third-world countries, with some using the term "step-child treatment" to describe this phenomenon. 

So it begs the question: if their policies have gaps or insufficient measures in place even there, what impact does this have on these countries?

It's not just the content, it's the narrative it generates

A linear approach to disinformation, i.e. assuming that fact-checking alone will solve the problem, is intellectually dishonest. 

Even with potential flaws, these fact-checking initiatives supported by Meta and Google have gained global momentum and should continue to receive support. 


However, this does not mean we should become content with what we have achieved and stop identifying other societal variables that drive disinformation.

Additionally, for individuals vulnerable to misinformation and disinformation, the determining factor is not necessarily the quality of the content, such as sophisticated AI-generated images and videos, but rather the overarching narrative that the content endorses. 

French President Emmanuel Macron waits on the steps of the Elysee Palace, March 2024 - AP Photo/Michel Euler

For example, a deepfake video recently went viral showing an alleged France 24 broadcast in which a presenter announces President Emmanuel Macron has cancelled a scheduled visit to Ukraine over fears of an assassination attempt. 

The broadcast was never aired by France 24, but it was already so viral in the Russian information sphere that former Russian president and deputy chair of the Security Council of Russia Dmitry Medvedev was quick to share his opinion about the alleged news report on X. 

The so-called information about an assassination attempt on Macron could easily have been cross-verified against reports from other reputable news outlets, yet its authenticity did not matter to those who were already credulous.


For disinformation producers, it's a binary classification challenge, much like how scammers, masquerading as Nigerian Princes, differentiate between susceptible and unsusceptible users, allowing them to target sufficient victims for profit. 

Furthermore, when examining the impact of AI-enabled disinformation, it becomes evident that beyond disseminating false information, the predominant harm has been observed in the proliferation of non-consensual intimate images (NCII) and scams. 

This is where bad-faith actors have identified the most impactful application of generative AI thus far.

Efforts by malicious actors are already in motion

Moreover, when considering foreign influence in local elections, the hacking of the Clinton campaign prior to the 2016 US Elections proved to be considerably advantageous for bad-faith actors. 

They exploited an already vulnerable information environment in which the trust in traditional media appeared to diminish. 

FBI Director Christopher Wray testifies during a hearing of the Senate Judiciary Committee to examine Horowitz's report of the FBI's Clinton email probe, June 2018 - AP Photo/Alex Brandon

Russian hacking groups accessed emails from the Clinton campaign, subsequently sharing them with WikiLeaks, which then released the stolen emails in the lead-up to the November election. This triggered a series of detrimental news cycles for Clinton.

Threat actors were able to achieve their goals simply by exploiting our existing vulnerabilities. 

The question now arises, can malicious actors leverage AI models for cyber activities akin to the 2016 Clinton campaign hacking? 

Research conducted jointly by OpenAI and Microsoft Threat Intelligence in February of this year revealed attempts by state-affiliated threat actors to exploit these models, resulting in the disruption of five such malicious actors.

Let's keep it real

The bottom line is that while we can't underplay the challenges posed by AI, we must confront them with a clear grasp of reality. 


This means tackling existing flaws, including shortcomings in Big Tech policies, declining trust in the media, and other risks associated with emerging technologies. 

Without addressing these issues within our current ecosystem, we are creating a fertile ground where disinformation and other harmful phenomena will continue to flourish.

Kalim Ahmed is a digital investigator focusing on disinformation and influence operations.

At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.

