Tag: accountability

  • CSA: Why Create an AI Whistleblower Policy for Compliance?

    Source URL: https://cloudsecurityalliance.org/articles/why-you-should-have-a-whistleblower-policy-for-ai
    Summary: The text outlines the importance of establishing a whistleblower policy in organizations to navigate emerging regulations around AI, such as the EU AI Act. It emphasizes the need for internal compliance frameworks to…

  • Hacker News: Update on Reflection-70B

    Source URL: https://glaive.ai/blog/post/reflection-postmortem
    Summary: The text provides a detailed post-mortem analysis of the Reflection 70B model, highlighting the confusion around benchmark reproducibility, the rushed launch process, and subsequent community criticism. It emphasizes the importance of transparency and community involvement in…

  • The Register: Singapore tires of Big Tech’s slow and half-hearted help for abused users

    Source URL: https://www.theregister.com/2024/10/02/singapore_cyberbully_agency_smart_nation/
    Summary: The PM promises an agency to handle complaints as he outlines a new digital nation plan. Singapore is working on legislation and a dedicated agency that would hold online service providers more accountable for cyberbullying, according…

  • AlgorithmWatch: Why we need to audit algorithms and AI from end to end

    Source URL: https://algorithmwatch.org/en/auditing-algorithms-and-ai-from-end-to-end/
    Summary: The full picture of algorithmic risks and harms is a complicated one. So how do we approach the task of auditing algorithmic systems? There are various attempts to simplify the picture into overarching, standardized frameworks;…

  • New York Times – Artificial Intelligence : Artificial Intelligence Requires Specific Safety Rules

    Source URL: https://www.nytimes.com/2024/09/29/opinion/ai-risks-safety-whistleblower.html
    Summary: Artificial intelligence poses unique risks, so the people warning us of safety threats deserve unique protections. The text discusses OpenAI’s past use of nondisclosure agreements to prevent criticism from employees…

  • Slashdot: Can AI Developers Be Held Liable for Negligence?

    Source URL: https://yro.slashdot.org/story/24/09/29/0122212/can-ai-developers-be-held-liable-for-negligence?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Summary: The text discusses a proposal from Bryan Choi advocating for a shift of AI liability from the technology itself to the individuals and organizations behind AI systems. This approach emphasizes a negligence-based framework,…

  • Schneier on Security: An Analysis of the EU’s Cyber Resilience Act

    Source URL: https://www.schneier.com/blog/archives/2024/09/an-analysis-of-the-eus-cyber-resilience-act.html
    Summary: A good (long, complex) analysis of the EU’s new Cyber Resilience Act, a significant regulatory framework aimed at enhancing the cybersecurity posture of software and hardware…

  • Slashdot: Human Reviewers Can’t Keep Up With Police Bodycam Videos. AI Now Gets the Job

    Source URL: https://slashdot.org/story/24/09/24/2049204/human-reviewers-cant-keep-up-with-police-bodycam-videos-ai-now-gets-the-job?utm_source=rss1.0mainlinkanon&utm_medium=feed
    Summary: The text discusses the use of large language model AI technologies to analyze body camera footage from police officers, revealing insights that could enhance accountability and performance…

  • Hacker News: OpenAI Threatening to Ban Users for Asking Strawberry About Its Reasoning

    Source URL: https://futurism.com/the-byte/openai-ban-strawberry-reasoning
    Summary: The text discusses OpenAI’s new AI model, “Strawberry,” and its controversial policy prohibiting users from exploring the model’s reasoning process. This move has called into question the model’s…