OpenAI updates ChatGPT macOS app to encrypt conversations

This blunder raised privacy concerns as it made it possible for malicious actors or unauthorized applications to easily access users’ chat history.
OpenAI has issued a patch for a security flaw in its ChatGPT macOS app, following the discovery that conversations were being stored in plain text.

Developer Pedro José Pereira Vieito brought the issue to light, demonstrating on Threads how easily another app could access and display these conversations. Vieito’s findings revealed that the ChatGPT app did not utilize macOS’s sandbox protections designed to isolate app data from the rest of the system. Consequently, the app’s data, including chat histories, was stored in a way that was readily accessible without encryption.
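The core problem can be illustrated with a short sketch: without sandboxing or encryption, any process running as the same user could simply walk the app's support directory and read the stored conversation files. The directory layout and path below are assumptions for illustration, not the app's documented storage format.

```python
from pathlib import Path


def read_plaintext_conversations(app_support: Path) -> dict[str, str]:
    """Read every file under the given directory as UTF-8 text.

    Illustrates why plaintext storage is risky: any process running
    as the same user can read another app's unprotected data files.
    """
    conversations: dict[str, str] = {}
    for f in sorted(app_support.rglob("*")):
        if f.is_file():
            try:
                conversations[f.name] = f.read_text(encoding="utf-8")
            except UnicodeDecodeError:
                # An encrypted or binary file fails to decode here --
                # which is exactly what the patched app now produces.
                pass
    return conversations


# Hypothetical call site -- the real storage path is an assumption:
# chats = read_plaintext_conversations(
#     Path.home() / "Library" / "Application Support" / "com.openai.chat")
```

Note that nothing here requires elevated privileges; same-user file permissions are the only barrier, which is why encryption at rest matters.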

Vieito shared an app he developed with Jay Peters that could read ChatGPT conversations with a single click, allowing Peters to record a video showing how straightforward it was to access the stored data. This oversight from OpenAI meant that any app or process running on the computer could potentially read these plaintext conversations.

OpenAI spokesperson Taya Christianson confirmed that the company has released an update for the macOS app that now encrypts chat conversations. “We are aware of this issue and have shipped a new version of the application which encrypts these conversations,” Christianson said in a statement to The Verge. “We’re committed to providing a helpful user experience while maintaining our high security standards as our technology evolves.”

Following this update, Peters confirmed that his app could no longer access the conversations, indicating that the encryption effectively mitigated the issue.

The discovery raised questions about why OpenAI’s app did not initially employ Apple’s app sandboxing. Sandboxing is a security mechanism that runs an app in an isolated environment, preventing it from accessing other parts of the system without explicit permission. It is mandatory for iOS apps but optional for macOS apps distributed outside the Mac App Store. Adopting it would have added an extra layer of security by isolating the app’s data from other processes and applications.
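For context, a macOS app opts into the App Sandbox through a code-signing entitlement. A minimal sketch of the relevant entry in an app's `.entitlements` file (the surrounding project setup is assumed):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Opting in confines the app to its own container directory;
         other apps' data is off-limits without further entitlements. -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
</dict>
</plist>
```

With this entitlement, the app's files live in a per-app container, and other apps cannot read them even when running as the same user.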

While OpenAI’s privacy policies already state that user conversations with ChatGPT may be reviewed to improve the language model, the prospect of this data being easily accessible to third parties was alarming for many users. The plaintext storage meant that sensitive information could be vulnerable to unauthorized access.

Posted by Alex Ivanovs

Alex is the lead editor at Stack Diary and covers stories on tech, artificial intelligence, security, privacy, and web development. He previously worked as a lead contributor at the Huffington Post, writing for its Code column.