The Register: LinkedIn: If our AI gets something wrong, that’s your problem

Source URL: https://www.theregister.com/2024/10/09/linkedin_ai_misinformation_agreement/
Source: The Register
Title: LinkedIn: If our AI gets something wrong, that’s your problem

Feedly Summary: Artificial intelligence still no substitute for the real thing
Microsoft’s LinkedIn will update its User Agreement next month with a warning that it may show users generative AI content that’s inaccurate or misleading.…

AI Summary and Description: Yes

**Summary:** LinkedIn is set to update its User Agreement, emphasizing that users must take responsibility for verifying the accuracy of generative AI content produced by the platform. This shift aims to mitigate liability for LinkedIn, which has faced scrutiny for using user data for AI training without explicit consent. Key privacy implications arise, especially in light of differing regulations across regions.

**Detailed Description:**
LinkedIn’s forthcoming changes to its User Agreement encompass several significant points regarding the use of generative AI and user data:

– **Generative AI Liability:** Users are warned that the content generated by LinkedIn’s AI tools may be inaccurate or misleading. Consequently, users will be responsible for any misinformation they share.

– **Policy Update Timeline:** The new terms will take effect on November 20, 2024, marking a proactive approach by LinkedIn to clarify user responsibility in relation to AI-generated content.

– **Community Policy Standards:** LinkedIn’s Professional Community Policies require users to share “real and authentic” information, a standard that AI-generated content may not consistently meet.

– **User Control Over Data:** The company states a commitment to giving users control over their data, including an opt-out setting for AI model training, though questions about transparency and compliance remain, particularly within the European Economic Area (EEA).

– **Regulatory Scrutiny:** Following intervention by the UK’s Information Commissioner’s Office, LinkedIn has paused AI training on member data from the EEA, Switzerland, and the UK until further notice, indicating significant regulatory pressure on the platform over user consent and data usage.

– **Consequences for Misinformation:** LinkedIn outlines consequences for violating its policies, including content visibility restrictions, labeling misinformation, and account suspensions for repeat offenders.

– **Applications of AI Tools on LinkedIn:** Specific features potentially susceptible to generating misleading AI content include personalized InMail messages for recruiters, AI-enhanced job descriptions, and AI writing assistants for user profiles.

– **Expert Commentary:** Legal experts note that while users may be held accountable for the content they share, there is a notable tension between the marketing of AI capabilities and the inherent unreliability of such tools, which needs to be communicated clearly to users.

This development is particularly relevant for security and compliance professionals weighing the implications of AI content generation, data privacy, user responsibility, and regulatory compliance across jurisdictions. Understanding these aspects clarifies the risks of relying on AI-generated content on platforms like LinkedIn.