Source URL: https://www.theregister.com/2024/09/13/ai_deepfake_pledge/
Source: The Register
Title: AI giants pinky swear (again) not to help make deepfake smut
Feedly Summary: Oh look, another voluntary, non-binding agreement to do better
Some of the largest AI firms in America have given the White House a solemn pledge to prevent their AI products from being used to generate non-consensual deepfake pornography and child sexual abuse material.…
AI Summary and Description: Yes
Summary: Major AI firms have pledged to prevent the misuse of their technologies, specifically for generating non-consensual deepfake pornography and child sexual abuse material. The commitment reflects growing concern over the ethical implications of AI applications and the need for better governance in AI development and deployment.
Detailed Description: The text outlines an initiative in which prominent AI companies pledge to prevent their products from being used for malicious purposes, chiefly non-consensual deepfake pornography and child sexual abuse material. Here are the key points:
– **Commitment Overview**: Firms such as Adobe, Anthropic, Cohere, Microsoft, OpenAI, and Common Crawl have made non-binding commitments to protect their technologies from misuse.
– **Rising Threats**: The initiative’s urgency is underscored by a statement from the Biden administration, indicating that image-based sexual abuse, including AI-generated imagery, is one of the fastest-growing harmful uses of AI.
– **Data Management**: The companies commit to responsibly sourcing their datasets and establishing mechanisms to keep harmful content out of their training data.
– **Common Crawl’s Role**: Notably, Common Crawl was excluded from the pledges covering model training and dataset cleaning because it does not build AI models itself; it supports the initiative but was asked to endorse only one provision.
– **Previous Initiatives**: This is not the first voluntary commitment from major AI players. A similar promise made in July 2023 included commitments to test models and implement watermarking to deter misuse.
– **International Context**: The pledges reflect a global trend toward voluntary commitments around AI safety, paralleled by agreements in the UK and South Korea aimed at promoting ethical AI usage.
– **Deepfake Proliferation**: Deepfakes increasingly target both ordinary individuals and public figures, raising expert concerns about the potential for manipulation as a crucial election year approaches.
– **Regulatory Landscape**: The text contrasts the EU’s robust AI policies with those in the US, where companies may resist formal regulations in favor of voluntary commitments.
This marks a critical moment for AI security and governance; professionals in security and compliance should advocate for stronger regulatory frameworks to address these issues effectively, rather than relying on voluntary, non-binding pledges.