Wired: We Need a New Right to Repair for Artificial Intelligence

Source URL: https://www.wired.com/story/we-need-new-right-to-repair-for-artificial-intelligence/
Source: Wired
Title: We Need a New Right to Repair for Artificial Intelligence

Feedly Summary: A growing movement to allow access to algorithmic workings won’t stop the ubiquitous spread of artificial intelligence, but it could restore public confidence in it.

AI Summary and Description: Yes

Summary: The text highlights increasing public resistance to AI technologies, particularly concerns surrounding data ownership and copyright infringements. It discusses emerging practices like red teaming as a way to enhance accountability in AI development and outlines the potential for a “right to repair” that could empower users to have greater control over AI systems, which is essential in fostering trust and compliance.

Detailed Description:
The provided content focuses on the societal and legal shifts regarding artificial intelligence (AI), notably emphasizing the following key points:

– **Public Rejection of Unsolicited AI Use**: There is a rising trend among individuals and organizations to push back against AI technologies that utilize their data without consent.
– Examples include:
– A lawsuit filed by The New York Times against OpenAI and Microsoft for copyright infringement.
– A class action suit by authors against Nvidia for allegedly training its AI on their copyrighted materials.
– Legal threats from Scarlett Johansson over the use of her AI-impersonated voice.

– **Declining Confidence in AI**: Research indicates a growing concern regarding AI among the public:
– Over half of Americans express more fear than excitement about AI developments.
– Similar sentiments arise in various global communities, highlighting widespread unease.

– **Emergence of Red Teaming**: Red teaming, a practice primarily used in cybersecurity, is gaining traction in AI communities as a method to evaluate systems for vulnerabilities and ensure compliance with legal standards.
– Key initiatives:
– DLA Piper’s use of red teaming within legal frameworks to assess AI systems.
– Humane Intelligence’s work involving non-technical experts to address issues of bias and discrimination in AI models.

– **Demand for a “Right to Repair”**: Moving towards a future where users demand accountability and control over AI technologies:
– Users could run diagnostics on AI systems, report issues, and receive updates on fixes.
– Ethical hackers could develop patches accessible to all, or independent audits could validate and customize AI systems for individual needs.

– **Societal Shift and Future Outlook**: The text posits that 2025 will mark a turning point where users expect greater control and ethical standards in AI use.
– Advocates are increasingly calling for a shift away from unchecked AI deployment and towards a structure where individuals have rights to oversight and adaptations of AI technologies impacting their lives.

In conclusion, the dynamics surrounding AI usage and its implications for data rights and personal autonomy are evolving. Security and compliance professionals must keep abreast of these trends to navigate the complex landscape of AI governance effectively.