As technology advances, experts warn that manipulated satellite images could be used as a form of disinformation aiming to downplay or cover up real-world situations.
Deepfakes are videos, audio recordings, or images that have been manipulated using artificial intelligence (AI) to alter their content.
Most recently, a deepfake video of Tom Cruise went viral on TikTok, while another manipulated clip showed former US President Donald Trump proclaiming: "AIDS is over".
But researchers warn that AI could now be used to create realistic but false satellite images and maps.
A recent study by the University of Washington examined the concept of "deepfake geography", in which AI manipulation could make towns - or even something as small as a bridge - appear in places where they do not exist.
The study noted that these deepfake images could be used as a form of disinformation aiming to downplay or cover up real events - for example, the destruction of a village, or natural disasters such as wildfires or floods.
"A satellite image is just like any other image," said Sam Gregory, programme director at Witness, a human rights video and technology network.
"You can create a satellite image that never existed ... you could also manipulate an element within a satellite image."
Maps have been manipulated throughout history, such as during wars when governments or authorities sought to deceive their enemies.
But Gregory argues that manipulated maps hold the same dangers as other deepfake content.
"Authoritarian governments will manipulate images and claim something happened when it didn't," he told The Cube, Euronews' social media newsdesk.
"But people can also tell us not to believe any satellite image in what's called a liar's dividend."
Last month, the European Union unveiled draft regulations that state that deepfakes and other AI technology should be labelled so people know they are interacting with a machine.
But experts say that citizens' awareness of deepfake technology is key to combating the potential spread of this type of disinformation.
"We all need some scepticism around images that we see, but I think we need to be careful we don't verge into believing nothing is true," Gregory told Euronews.
"That actually plays into the ability of governments and people with power to use this liar's dividend to claim that nothing is true and you can't believe anything you see.
"We will soon have tools to help us detect deepfakes, but a healthy degree of scepticism, not an excessive one, is important," he added.