OpenAI has revealed a security incident linked to Mixpanel, a third-party analytics provider previously used on the frontend of its API platform. The company said the breach occurred in Mixpanel’s systems, not OpenAI’s, and no sensitive API data or user content was compromised.
Mixpanel informed OpenAI on November 9 that an unauthorized actor had accessed part of Mixpanel's own systems and exported an analytics dataset. The full scope of the exposure was confirmed on November 25.
OpenAI clarified that the incident did not affect ChatGPT users or any other consumer products. No chat history, API requests, passwords, credentials, API keys, payment information, or government IDs were compromised.
The exposed dataset included limited account-related information for some API users on platform.openai.com. This data may have contained the following fields (a hypothetical example record follows the list):
Name provided on the API account
Email address associated with the account
Approximate location based on browser (city, state, country)
Operating system and browser used
Referring websites
Organisation or user IDs associated with the account
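To make the scope concrete, the sketch below shows roughly what a single exposed record could look like. It is a hypothetical illustration only: the field names, values, and structure are assumptions for readability, not Mixpanel's actual export schema or any real account.

```python
# Hypothetical illustration only: field names and values are invented for
# clarity and do not reflect Mixpanel's real export format or any real user.
example_record = {
    "name": "Jane Doe",                     # name provided on the API account
    "email": "jane.doe@example.com",        # email associated with the account
    "approx_location": {                    # coarse location inferred from the browser
        "city": "San Francisco",
        "state": "CA",
        "country": "US",
    },
    "os": "macOS",                          # operating system reported by the browser
    "browser": "Chrome",                    # browser used to visit platform.openai.com
    "referrer": "https://www.google.com/",  # referring website
    "org_id": "org-EXAMPLE123",             # organisation ID tied to the account
    "user_id": "user-EXAMPLE456",           # user ID tied to the account
}

if __name__ == "__main__":
    # Print each field to show the kind of metadata involved; no credentials,
    # API keys, chat history, or payment details appear anywhere in the record.
    for field, value in example_record.items():
        print(f"{field}: {value}")
```

As the example suggests, the exposure is limited to profile and browser metadata rather than secrets, which is why the main residual risk OpenAI cites is phishing and social engineering rather than direct account compromise.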
OpenAI has removed Mixpanel from its production systems and is conducting a full security review of the incident, both internally and together with Mixpanel. Affected organisations, admins, and users are being contacted directly.
While no evidence shows that other systems or data were accessed, OpenAI is strengthening security across its vendor ecosystem and reviewing all third-party partners.
The company also warned that the leaked data could be used for phishing or social engineering. Users are advised to be cautious of suspicious emails or messages appearing to come from OpenAI. OpenAI reminded users that it never asks for passwords, API keys, or verification codes via email or chat, and recommended enabling multi-factor authentication for added security.