The Register: Google Cloud Document AI flaw (still) allows data theft despite bounty payout

Source URL: https://www.theregister.com/2024/09/17/google_cloud_document_ai_flaw/
Source: The Register
Title: Google Cloud Document AI flaw (still) allows data theft despite bounty payout

Feedly Summary: Chocolate Factory downgrades risk, citing the need for attacker access
Overly permissive settings in Google Cloud’s Document AI service could be abused by data thieves to break into Cloud Storage buckets and steal sensitive information.…

AI Summary and Description: Yes

Summary: The text discusses a vulnerability in Google Cloud’s Document AI service that allows potential data exfiltration due to overly permissive access settings. Despite being reported and acknowledged by Google, the issue remains unresolved, raising concerns about security in cloud environments.

Detailed Description:

The report highlights a significant security vulnerability in Google Cloud’s Document AI service, revealing potential risks for sensitive data stored in Google Cloud Storage:

– **Vulnerability Identification**:
  – Kat Traxler, a principal security researcher at Vectra AI, reported the flaw in early April: the service's batch document-processing capability can be abused because its service agent is granted excessive permissions.
  – The flaw enables attackers to access Google Cloud Storage buckets they would not normally be able to reach.

– **Access Control Issues**:
  – The service uses a Google-managed service account (a "service agent") to perform batch processing. That agent's permissions are set too broadly, granting it access to every Cloud Storage bucket in the project regardless of the caller's own permissions.
  – This enables "transitive access abuse": an attacker routes a request through the over-privileged agent to copy data from a restricted bucket into one they control, bypassing the expected access checks and compromising data confidentiality.
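The transitive-access pattern described above can be sketched with a toy permission model (all identities, bucket names, and the permission-check logic here are illustrative, not the actual Google Cloud IAM implementation):

```python
# Toy model of "transitive access abuse": the caller cannot read a
# restricted bucket directly, but a batch job executed by an
# over-privileged, project-wide service agent can.

# Per-identity bucket read permissions (illustrative).
PERMISSIONS = {
    "attacker@example.com": {"public-docs"},
    # The Google-managed service agent effectively holds access
    # to every bucket in the project.
    "service-agent@project.iam": {"public-docs", "restricted-data"},
}

def can_read(identity: str, bucket: str) -> bool:
    return bucket in PERMISSIONS.get(identity, set())

def direct_read(caller: str, bucket: str) -> bool:
    """Access is checked against the caller's own permissions."""
    return can_read(caller, bucket)

def batch_process(caller: str, src_bucket: str, dst_bucket: str) -> bool:
    """The batch job runs as the service agent, not the caller, so the
    caller's lack of access to src_bucket is never checked."""
    agent = "service-agent@project.iam"
    return can_read(agent, src_bucket) and can_read(agent, dst_bucket)

# Direct access to the restricted bucket is denied...
assert direct_read("attacker@example.com", "restricted-data") is False
# ...but routing the copy through the service agent succeeds,
# landing restricted data in a bucket the attacker can read.
assert batch_process("attacker@example.com", "restricted-data", "public-docs") is True
```

The key point the sketch captures is that the access decision is made against the agent's identity rather than the caller's, which is exactly what makes the agent's project-wide grant dangerous.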

– **Reporting Journey**:
  – Google initially deemed the documentation insufficient for a bug bounty reward, but after further correspondence and a proof-of-concept (PoC) submission demonstrating the vulnerability, it awarded Traxler $3,133.70.
  – Despite the payout, Traxler contends the issue remains unaddressed: Google marked the report as "fixed" without implementing the necessary changes.

– **Potential Risk Factors**:
  – Traxler noted that while finding and exploiting such vulnerabilities requires sophistication, environments with weaker security controls are particularly exposed to data-exfiltration attacks.
  – Her insights underscore the need for rigorous access controls and adherence to the principle of least privilege.
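One concrete least-privilege check is auditing a project's IAM policy for service agents that hold project-wide storage roles. The sketch below works on a policy dict shaped like the JSON output of `gcloud projects get-iam-policy PROJECT --format=json`; the role list, agent address, and member names are illustrative assumptions:

```python
# Flag project-level bindings that grant broad storage access to
# Google-managed service agents -- a least-privilege audit sketch.

BROAD_STORAGE_ROLES = {
    "roles/storage.admin",
    "roles/storage.objectAdmin",
    "roles/storage.objectViewer",
}

def find_overbroad_agents(policy: dict) -> list[tuple[str, str]]:
    """Return (member, role) pairs where a service account holds a
    project-wide storage role."""
    hits = []
    for binding in policy.get("bindings", []):
        if binding["role"] not in BROAD_STORAGE_ROLES:
            continue
        for member in binding["members"]:
            if ".gserviceaccount.com" in member:
                hits.append((member, binding["role"]))
    return hits

# Illustrative policy in the shape returned by
# `gcloud projects get-iam-policy PROJECT --format=json`.
policy = {
    "bindings": [
        {"role": "roles/storage.admin",
         "members": ["serviceAccount:service-123@gcp-sa-example.iam.gserviceaccount.com"]},
        {"role": "roles/viewer",
         "members": ["user:alice@example.com"]},
    ]
}

print(find_overbroad_agents(policy))
```

A non-empty result means a service account can touch every bucket in the project; the least-privilege alternative is to grant access on specific buckets only.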

– **Continued Advocacy**:
  – Traxler continues to press for a proper fix, reiterating her concerns at industry events and in ongoing communications with Google.

This incident spotlights the critical importance of robust access controls in cloud services, especially in machine learning services that handle sensitive data. It is a reminder for organizations to regularly review their configurations and follow security best practices to guard against data theft.