Hacker News: LinkedIn does not use European users’ data for training its AI

Source URL: https://www.techradar.com/pro/security/the-linkedin-ai-saga-shows-us-the-need-for-eu-like-privacy-regulations
Source: Hacker News
Title: LinkedIn does not use European users’ data for training its AI

Feedly Summary: Comments

AI Summary and Description: Yes

Summary: The text discusses the controversy surrounding LinkedIn’s use of user data for training its generative AI models without explicit consent, particularly focusing on the implications of privacy regulations like GDPR in the EU. It highlights a growing trend among tech giants to leverage personal data for AI development while facing backlash from privacy advocates and regulatory bodies.

Detailed Description:
The content examines how social media platforms use personal data for AI training, with a specific focus on LinkedIn’s recent actions and their implications under privacy law. Here are the key points:

– **LinkedIn’s Data Use Without Consent**: Users raised concerns over LinkedIn’s practice of training generative AI tools on their data without prior consent. The change in data usage policies was noted on September 18, alongside an update to LinkedIn’s terms of service.

– **Comparison with Other Social Media Companies**: The text compares LinkedIn’s actions with those of Meta (Facebook) and X (formerly Twitter), which also started incorporating user data for their AI models but faced significant pushback from EU regulators.

– **Regulatory Backlash in Europe**: The European Union (EU) has taken a strong stance against the unauthorized use of personal data for AI training, with notable instances including:
  – **Meta** halted its AI launch in Europe following privacy complaints.
  – **X (formerly Twitter)** received a formal complaint over alleged GDPR violations, leading to an agreement to cease collecting EU users’ data for its AI models.

– **Implications of Strong Privacy Frameworks**: The ongoing conflict between tech companies and privacy advocates underscores the importance of regulatory frameworks like GDPR in protecting user data. Europe’s robust regulations are seen as a model for privacy protection.

– **User Empowerment and Opt-Out Mechanism**: LinkedIn’s revised terms require users to actively opt out if they do not want their data used for AI training, a shift to default enrollment (auto opt-in) that raises ethical concerns.

– **Industry Criticism**: Experts and privacy advocates argue that auto opt-in practices undermine user choice and transparency. Professionals such as ethical hacker Rachel Tobac have been prominent voices calling for user empowerment in privacy decisions.

– **Steps to Opt-Out**: The text provides practical guidance for LinkedIn users on how to disable the setting that allows their data to be used for AI training, noting that opting out does not affect data that has already been collected.

Overall, this discussion highlights the ongoing tension between AI innovation and the need for user consent and privacy protections, particularly under European regulations designed to safeguard individual rights. The scenario illustrates the critical role that compliance and transparency play in ensuring ethical practices in the fast-evolving AI landscape.