Source URL: https://www.theregister.com/2024/08/31/gpt_apps_data_collection/
Source: The Register
Title: GPT apps fail to disclose data collection, study finds
Feedly Summary: Researchers say that GPTs implementing Actions omit privacy details and expose info
Many of the GPT apps in OpenAI’s GPT Store collect data and facilitate online tracking in violation of OpenAI policies, researchers claim.…
AI Summary and Description: Yes
Summary: Researchers from Washington University in St. Louis revealed significant privacy and security issues within OpenAI’s GPT ecosystem, highlighting widespread violations of OpenAI’s data collection policies by GPT applications. Their findings underscore critical concerns about the lack of transparency, enforcement, and security measures, with implications for both user privacy and the integrity of AI applications.
Detailed Description:
The study conducted by researchers Evin Jaff, Yuhao Wu, Ning Zhang, and Umar Iqbal examined nearly 120,000 GPT apps and over 2,500 Actions on OpenAI’s platform, uncovering alarming trends in data handling practices. Key points of the findings include:
– **Data Collection Violations**:
– Only 5.8% of analyzed Actions disclosed their data collection practices, indicating a significant lack of transparency.
– Sensitive data, including personal information and passwords, was being collected, often without proper documentation in privacy policies (a toy disclosure check follows this list).
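To make the disclosure gap concrete, here is a toy check in Python that flags Actions collecting data without linking a privacy policy. The metadata fields (`privacy_policy`, `collects`) are hypothetical stand-ins rather than OpenAI’s actual Action schema; the sketch illustrates the kind of gap the researchers measured, not their methodology.

```python
# Toy illustration with hypothetical metadata fields; the field names
# below are assumptions, not OpenAI's actual Action schema.
actions = [
    {"name": "weather_lookup", "privacy_policy": "https://example.com/privacy",
     "collects": ["location"]},
    {"name": "pdf_summarizer", "privacy_policy": None,
     "collects": ["file_contents", "email"]},
]

def undisclosed(action: dict) -> bool:
    """An Action is undisclosed if it collects data but links no policy."""
    return bool(action["collects"]) and not action["privacy_policy"]

for a in actions:
    if undisclosed(a):
        print(f"{a['name']}: collects {a['collects']} with no privacy policy")
```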
– **Third-Party Concerns**:
– A majority (82.9%) of Actions originate from third-party developers, who often neglect security considerations, exacerbating privacy issues.
– Because Actions execute in a shared memory space, they not only collect user data but also have unrestricted access to each other’s information, creating data exposure risks (illustrated in the sketch after this list).
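The shared-memory concern can be sketched as follows: if every Action invoked during a session reads and writes a single shared context, any Action can observe data intended for another. This is a minimal, hypothetical model of the behavior the researchers describe, not OpenAI’s actual execution design; the class and function names are invented for illustration.

```python
# Minimal, hypothetical model of shared-memory execution: all Actions in a
# session share one context object, so nothing isolates their data.
class ConversationContext:
    """Single shared store for every Action in one GPT session."""
    def __init__(self):
        self.data = {}

def flight_booker(ctx: ConversationContext, passport_number: str):
    # A legitimate Action stores sensitive user input in the shared context.
    ctx.data["passport"] = passport_number

def joke_fetcher(ctx: ConversationContext):
    # An unrelated third-party Action in the same session can read it back.
    print("joke_fetcher can see:", ctx.data.get("passport"))

ctx = ConversationContext()
flight_booker(ctx, "X1234567")
joke_fetcher(ctx)  # prints the passport number: no isolation between Actions
```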
– **Inadequate Security Practices**:
– The capture of passwords, even when not malicious in intent, presents a severe security threat, particularly the risk of those passwords being inadvertently included in training datasets.
– The analysis reflects a deeper issue of poor security practices among developers, including the failure to use OAuth for secure account connections (a minimal OAuth sketch follows this list).
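For contrast, a standard OAuth 2.0 authorization-code flow keeps the user’s password out of the conversation entirely: the user signs in with the provider directly, and the Action’s backend receives only a scoped token. The sketch below is generic, using placeholder URLs and credentials and the third-party `requests` library; it shows the flow itself, not OpenAI’s actual Action authentication configuration.

```python
# Generic OAuth 2.0 authorization-code flow; every URL and credential here
# is a placeholder, not a real endpoint or OpenAI configuration.
import secrets
from urllib.parse import urlencode

import requests

AUTH_URL = "https://auth.example.com/authorize"
TOKEN_URL = "https://auth.example.com/token"
CLIENT_ID = "my-gpt-action"
CLIENT_SECRET = "kept-server-side-never-in-chat"
REDIRECT_URI = "https://example.com/oauth/callback"

def authorization_redirect() -> str:
    """Step 1: send the user to the provider; no password touches the LLM."""
    state = secrets.token_urlsafe(16)  # CSRF protection
    return AUTH_URL + "?" + urlencode({
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "read:profile",
        "state": state,
    })

def exchange_code(code: str) -> str:
    """Step 2: the backend swaps the one-time code for a scoped access token."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]  # the credential never enters the prompt
```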
– **OpenAI’s Response and Compliance**:
– Although OpenAI has removed thousands of non-compliant GPTs from its platform, researchers argue that this is insufficient. The company lacks a robust enforcement mechanism and adequate controls to ensure that GPTs comply with relevant data privacy laws.
– Findings highlight that despite OpenAI’s policies, the design of GPTs and Actions fails to prioritize security effectively, allowing pervasive data collection akin to practices in legacy web and mobile ecosystems.
– **Implications for User Privacy**:
– The researchers suggest that excessive data collection practices are becoming common in emergent LLM-based platforms, reflective of longstanding issues in traditional application ecosystems.
– These findings call for more stringent regulatory and governance frameworks to protect user privacy rights within the AI landscape.
Overall, this research underscores essential considerations for security and compliance professionals, emphasizing the urgency of implementing more robust security measures and transparent data practices within AI applications to mitigate privacy risks.