Images uploaded to AI chatbots could be retained for an unknown amount of time and, if in the wrong hands, could lead to impersonation, scams, and fake social media accounts.
The artificial intelligence (AI)-generated caricature trend, which showcases everything a chatbot knows about someone in a colourful picture, can pose serious security risks, according to cybersecurity experts.
Users upload a photo of themselves with a company logo or details about their role and ask OpenAI’s ChatGPT to create a caricature of them and their job using what the chatbot knows about them.
Cybersecurity experts told Euronews Next that social media challenges, such as AI caricatures, can provide fraudsters with a treasure trove of valuable information. A single image, paired with personal details, can be more revealing than users realise.
“You are doing fraudsters’ work for them - giving them a visual representation of who you are,” according to Bob Long, vice-president at age authentication company Daon.
The premise of the trend itself should raise red flags, he argued, because it “sounds like it was intentionally started by a fraudster looking to make the job easy.”
What happens to images once they’re uploaded?
When a user uploads an image to an AI chatbot, the system processes the image to extract data, such as the person’s emotion, environment, or information that could disclose their location, according to cybersecurity consultant Jake Moore. That information may then be stored for an unknown period of time.
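One concrete example of how a photo alone can disclose location: smartphone images often carry embedded EXIF metadata, which can include GPS coordinates and camera details. A minimal sketch of what that hidden data looks like, assuming Python with the Pillow library (the filename is illustrative):

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

GPS_IFD = 0x8825  # standard EXIF pointer to the GPS sub-directory

img = Image.open("selfie.jpg")  # hypothetical filename
exif = img.getexif()

# General tags: camera model, capture time, editing software, etc.
for tag_id, value in exif.items():
    print(TAGS.get(tag_id, tag_id), value)

# GPS tags: latitude and longitude, if the phone recorded them
for tag_id, value in exif.get_ifd(GPS_IFD).items():
    print(GPSTAGS.get(tag_id, tag_id), value)
```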
Long said the images collected from users can be used and retained to train AI image generators as part of their datasets.
A data breach at a company like OpenAI could put sensitive data, such as uploaded images and personal information gathered by the chatbot, in the hands of bad actors who could exploit it.
In the wrong hands, a single high-resolution image could be used to create fake social media accounts or realistic AI deepfakes to run scams, according to Charlotte Wilson, head of enterprise at Check Point, an Israeli cybersecurity company.
“Selfies help criminals move from generic scams to personalised, high-conviction impersonation,” she said.
OpenAI’s privacy settings state that uploaded images may be used to improve the model, which can include training it. When asked about the model’s privacy settings, ChatGPT clarified that this does not mean every photo is placed in a public database.
Instead, the chatbot said it uses patterns from user content to refine how the system generates images.
What to do if you want to participate in AI trends
For those who still want to follow the trend, experts recommend limiting what you share.
Wilson said users should avoid uploading images that reveal any identifying information.
“Crop tightly, keep the background plain, and do not include badges, uniforms, work lanyards, location clues or anything that ties you to an employer or a routine,” she said.
Wilson also cautioned users against oversharing personal information in their prompts, such as a job title, city or employer.
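For those uneasy about the hidden metadata described above, re-encoding a photo before uploading it strips that data out. A minimal sketch, again assuming Pillow, which by default only writes EXIF data back when it is passed explicitly (filenames are illustrative):

```python
from PIL import Image

# Re-encode the photo without carrying over its metadata.
# Pillow drops the EXIF block on save unless it is passed back in,
# so GPS coordinates and camera details do not survive the re-save.
img = Image.open("selfie.jpg")
img.save("selfie_clean.jpg", quality=90)

# Confirm the cleaned copy carries no EXIF data
assert len(Image.open("selfie_clean.jpg").getexif()) == 0
```

Note that this removes hidden metadata only; visible clues such as badges, lanyards or backgrounds still need the cropping Wilson describes.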
Meanwhile, Moore recommended reviewing privacy settings before participating, including the option to remove data from AI training.
OpenAI has a privacy portal which lets users opt out of AI data training by clicking on “do not train on my content.”
Users can also opt out of training from their text conversations with ChatGPT by turning off an “improve the model for everyone” setting.
Under EU law, users can request the deletion of personal data collected by the company. However, OpenAI notes it may retain some information even after deletion to address fraud, abuse and security concerns.