Security researchers discovered a vulnerability in OpenAI's Connectors, the feature that links ChatGPT to external services: a single poisoned document could leak sensitive data from platforms like Google Drive without any user interaction.
An Ars reader reported that ChatGPT, OpenAI's AI chatbot, leaked private conversations belonging to unrelated users, exposing login credentials and personal details along with unpublished research papers, presentations, and PHP scripts. The incident underscores the importance of stripping personal details from queries sent to AI services. OpenAI is investigating the report, and concerns about data leakage have already led companies such as Apple to restrict their employees' use of ChatGPT and similar sites.
OpenAI's GPT Store, a marketplace for customizable chatbots, is set to launch soon, but users should be cautious about uploading sensitive information: research from Adversa AI shows that GPTs can be coaxed through strategic questioning into revealing details of their construction, including their source documents. This vulnerability, known as prompt leaking, lets an attacker copy someone else's GPT, a security risk for anyone hoping to monetize their creation. Prompt leaking can also expose the documents and data used to build a GPT, limiting developers' ability to build applications around proprietary material. OpenAI is constantly patching these vulnerabilities, but the steady discovery of new ones poses challenges for the widespread adoption of GPTs.
A bug in Markup, the image editing tool on Google's Pixel phones, caused leftover data from the original version of an image to persist in the edited file after it was saved. That leftover data could lead to unintended leakage if the edited image was shared or uploaded to a cloud service. Google patched the bug in the March 2023 Android security update, but users may want to revisit previously shared images or consider editing security-critical images conservatively on a laptop with command-line image manipulation tools.
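In the affected files, the leftover bytes sit past the end of the new PNG data stream, so a previously shared screenshot can be checked by comparing its size with the offset where its IEND chunk ends. Below is a minimal Python sketch of that check; it is an illustration rather than an official detection or recovery tool, and the command-line interface is assumed for the example.

```python
#!/usr/bin/env python3
"""Report whether a PNG carries extra bytes after its IEND chunk.

Minimal sketch: files overwritten in place without truncation (as in the
Markup bug) keep remnants of the original image past the point where the
new PNG stream ends.
"""
import struct
import sys

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_stream_end(data: bytes) -> int:
    """Return the offset just past the IEND chunk, where a clean PNG ends."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    offset = len(PNG_SIGNATURE)
    while offset + 8 <= len(data):
        # Each chunk: 4-byte big-endian length, 4-byte type, payload, 4-byte CRC.
        length, chunk_type = struct.unpack(">I4s", data[offset:offset + 8])
        offset += 8 + length + 4
        if chunk_type == b"IEND":
            return offset
    raise ValueError("no IEND chunk found; file looks truncated or corrupt")

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        blob = f.read()
    trailing = len(blob) - png_stream_end(blob)
    if trailing > 0:
        print(f"possible leftover data: {trailing} bytes after IEND")
    else:
        print("no data after IEND")
```

Run it as `python check_png.py screenshot.png` (filename hypothetical). A nonzero byte count only means trailing data exists; it does not by itself prove the extra bytes are recoverable image content.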