Wired: Google, Apple, and Discord Let Harmful AI ‘Undress’ Websites Use Their Sign-On Systems

Source URL: https://www.wired.com/story/undress-app-ai-harm-google-apple-login/
Source: Wired
Title: Google, Apple, and Discord Let Harmful AI ‘Undress’ Websites Use Their Sign-On Systems

Feedly Summary: Single sign-on systems from several Big Tech companies are being incorporated into deepfake generators, WIRED found. Discord and Apple have started to terminate some developers’ accounts.

AI Summary and Description: Yes

**Summary:** The text addresses the troubling proliferation of deepfake websites that use generative AI technology to create nonconsensual intimate images, particularly targeting women and girls. It highlights the complicity of major tech companies like Google, Apple, and Discord, which have made it easier for users to access these harmful sites through their authentication systems. This situation raises significant concerns regarding security, privacy, and the ethical implications of AI technologies.

**Detailed Description:** The issue presented involves major technology companies and their role in facilitating access to harmful deepfake websites that exploit generative AI technology. Here are the key points and implications for security and compliance professionals:

– **Rise of Deepfake Technology:** The spread of generative deepfake tools has driven a marked increase in nonconsensual intimate imagery, much of it produced through “undress” or “nudify” websites.

– **Role of Major Tech Companies:** Google, Apple, and Discord have made these malicious sites easier to access by letting visitors sign in with their existing accounts, lowering friction and lending the services a veneer of legitimacy.

– **Statistics and Impact:** Reports indicate that some of these websites received around 200 million visits in just the first half of the year, illustrating the vast reach and impact of this technology on individuals, especially women and girls.

– **Sign-In APIs, Consent, and Privacy:** Allowing these sites to integrate sign-in APIs and existing authentication methods raises serious questions about privacy policies, user consent, and the responsibilities of tech companies in preventing abuse (a minimal sketch of the underlying sign-in flow follows this list).

– **Regulatory and Legal Actions:** Government officials, such as San Francisco’s city attorney, are now taking legal action against these websites, underscoring the need for clearer regulation of AI-enabled online abuse.

– **Industry Response:** Discord and Apple have begun terminating some developer accounts tied to these sites, demonstrating the ongoing vigilance and compliance work needed to ensure platforms are not facilitating harmful practices.

– **Call for Ethical Standards:** Experts are advocating for stricter controls and ethical standards surrounding AI applications and the responsibilities of tech companies to protect users from technologies that can be weaponized for abuse.
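
To illustrate the mechanism being abused, the sketch below shows the standard OAuth 2.0 authorization-code flow behind “Sign in with Google”-style buttons. The endpoint URLs follow Google’s published OAuth 2.0 documentation; the client ID, client secret, and redirect URI are placeholder assumptions, not values from the article.

```python
"""Minimal sketch of the OAuth 2.0 authorization-code flow that powers
"Sign in with Google"-style buttons. Credentials and the redirect URI are
placeholders; endpoints follow Google's published OAuth 2.0 documentation."""
from urllib.parse import urlencode

import requests

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"
TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"
CLIENT_ID = "YOUR_CLIENT_ID.apps.googleusercontent.com"    # placeholder
CLIENT_SECRET = "YOUR_CLIENT_SECRET"                       # placeholder
REDIRECT_URI = "https://example-site.test/oauth/callback"  # placeholder


def build_authorization_url(state: str) -> str:
    """Step 1: the site redirects the visitor to the identity provider."""
    params = {
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",
        "scope": "openid email profile",
        "state": state,  # anti-CSRF value the site checks on the way back
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"


def exchange_code_for_tokens(code: str) -> dict:
    """Step 2: after the visitor consents, the provider redirects back with a
    one-time code that the site exchanges for ID and access tokens."""
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "code": code,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "redirect_uri": REDIRECT_URI,
            "grant_type": "authorization_code",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # contains id_token, access_token, etc.
```

A legitimate integration and an abusive one look identical at this layer, which is why enforcement falls back on the providers’ developer-account reviews and terms-of-service actions such as the account terminations described above.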

For security and compliance professionals, this situation underscores the dual challenge of addressing technological abuse and maintaining adherence to ethical standards and regulatory requirements in the rapidly evolving landscape of AI and online safety. It is a reminder that comprehensive strategies are needed to mitigate the risks of generative technologies and to protect users’ privacy against exploitation.