Until Friday, OpenAI’s recently launched ChatGPT macOS app had a significant security flaw: it stored users’ chats on their computers in plain text, where they were easy to locate and read. That meant that if a malicious actor or malicious app gained access to your device, they could read your ChatGPT conversations and any sensitive information they contained. Pedro José Pereira Vieito demonstrated the issue on Threads, showing that another app could quickly access and display these files.
After a tech media site alerted OpenAI to the problem, the company released an update that implements encryption for user conversations. “We are aware of this issue and have shipped a new version of the application which encrypts these conversations,” stated OpenAI spokesperson Taya Christianson. “We’re committed to providing a helpful user experience while maintaining our high security standards as our technology evolves.” Following the update, Pereira Vieito’s app ceased to function, and my own conversations could no longer be accessed in plain text.
Pereira Vieito stumbled onto the vulnerability after wondering why OpenAI had opted out of app sandbox protections, which led him to investigate where the app stored its data. Because OpenAI distributes the ChatGPT macOS app only through its own website, it isn’t subject to the sandboxing requirements Apple imposes on software distributed through the Mac App Store.
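To illustrate why this matters (this is not Pereira Vieito’s actual code, and the directory name below is hypothetical), here is a minimal sketch of how any process running as the same macOS user can scan another app’s Application Support folder for files stored as readable plain text — unless the app encrypts them itself:

```python
from pathlib import Path

def find_plaintext_files(app_support_dir: Path) -> list[Path]:
    """Return files under app_support_dir that look like readable plain text.

    On macOS, an unsandboxed process running as the same user can read
    another app's files under ~/Library/Application Support unless that
    app encrypts its data before writing it to disk.
    """
    readable = []
    for path in app_support_dir.rglob("*"):
        if not path.is_file():
            continue
        try:
            # Encrypted or binary data will typically fail UTF-8 decoding;
            # plain-text conversation files will decode cleanly.
            text = path.read_bytes()[:4096].decode("utf-8")
        except UnicodeDecodeError:
            continue
        # Treat the file as plain text if nearly all characters are printable.
        printable = sum(ch.isprintable() or ch.isspace() for ch in text)
        if printable / max(len(text), 1) > 0.95:
            readable.append(path)
    return readable

# Hypothetical location -- the app's real storage path is an assumption here.
# chats = find_plaintext_files(Path.home() / "Library" / "Application Support" / "com.example.chatapp")
```

After the update OpenAI shipped, a scan like this would no longer turn up readable conversation data, because the files are encrypted at rest.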
Although OpenAI may review ChatGPT conversations to enhance safety and improve its models, that privilege shouldn't extend to unauthorized third parties who might exploit access to those conversations. Fortunately, not all of the app's data was stored in plain text, which limited the potential exposure.