OpenAI has confirmed a security breach involving a third-party analytics provider, Mixpanel.
ChatGPT maker OpenAI has confirmed a security incident, which it says is not its fault.
The breach occurred at a third-party analytics provider, Mixpanel, and resulted in the exposure of limited user data associated with OpenAI's API platform.
“This was not a breach of OpenAI’s systems. No chat, API requests, API usage data, passwords, credentials, API keys, payment details, or government IDs were compromised or exposed,” the company said in an email notifying users on Thursday.
Mixpanel became aware of an attacker in its systems on November 9, OpenAI said.
The threat actor gained unauthorised access to part of Mixpanel's systems and exported a dataset containing limited customer-identifiable information and analytics data.
OpenAI said the information that may have been affected was limited to names, email addresses, and user identifiers.
OpenAI said that it had terminated its use of Mixpanel and reaffirmed that the breach was not caused by any vulnerability in its own systems.
What does it mean for your data?
The company said it would investigate the breach and urged users to be extra vigilant about phishing attacks and social engineering scams that might attempt to exploit the stolen data.
Users have been encouraged to enable multi-factor authentication as an additional protective measure for their accounts.
While OpenAI said no conversations with ChatGPT were exposed, the incident is a reminder of how much personal data OpenAI has access to as people bare their souls to chatbots.
OpenAI said that it plans to enforce stricter security requirements for all external partners.
While OpenAI's use of Mixpanel analytics is standard practice, the company tracked data such as email addresses and location that was not necessary for product improvement, potentially violating GDPR's data minimisation principle, said Moshe Siman Tov Bustan, a security research team lead at OX Security, an AI security company.
"Companies – from tech giants like OpenAI to one-person startups – should always aim to over-protect and anonymise customer data sent to third parties in order to avoid that type of information being stolen or breached," he told Euronews Next.
"Even when using legitimate, vetted vendors, every piece of identifiable data sent externally creates another potential exposure point".