Cloud Blog: Cloud CISO Perspectives: AI vendors should share vulnerability research. Here’s why

Source URL: https://cloud.google.com/blog/products/identity-security/cloud-ciso-perspectives-ai-vendors-should-share-vulnerability-research-heres-why/
Source: Cloud Blog
Title: Cloud CISO Perspectives: AI vendors should share vulnerability research. Here’s why

Feedly Summary: Welcome to the first Cloud CISO Perspectives for October 2024. Today I’m discussing new AI vulnerabilities that Google’s security teams discovered and helped fix, and why it’s important for AI vendors to share vulnerability research impacting their technology. As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here. –Phil Venables, VP, TI Security & CISO, Google Cloud


Why AI developers need to share vulnerability research

By Phil Venables, VP, TI Security & CISO, Google Cloud

At Google, we take cybersecurity research very seriously. Exploring new ways to protect systems and code, and to fix vulnerabilities when they are found, plays an important role in technology — especially rapidly-developing technology like AI.

We have made important progress in exploring the risks organizations can face from AI, including developing our AI Red Team, sharing our Secure AI Framework (SAIF) efforts, and investing in better cyber defenses. By necessity, securing AI reflects the complexity of AI itself, and includes defending infrastructure, application security, detection and response, trust and safety controls, and vulnerability research.


Importantly, the ongoing rapid growth of AI technology means that its attack surface can also quickly shift. This is a crucial time to invest in AI security research, and to ensure that AI is as secure as possible as it matures.

At Google, we test our products, platform, and infrastructure, and we open ourselves to and partner with vulnerability researchers through our bug bounty program. We also have our in-house Google Cloud Vulnerability Research (CVR) team, which was tasked with focusing on our AI platform, Vertex AI, prior to the launch of Gemini in 2023.

During their research, the CVR team discovered previously unknown vulnerabilities in Vertex AI — and remediated them. You can read the CVR team’s detailed research, where we also cover some of the architectural adjustments made to structurally harden the platform, on the Bug Hunters blog.

“We detail our findings, how we found and fixed the issues internally, and how we reported our findings to similar cloud providers,” the CVR team said. Importantly, they didn’t limit their research to Vertex AI. “We continued this research on another large cloud provider, discovered similar vulnerabilities in their tuning architecture, and reported these vulnerabilities using their standard vulnerability disclosure process.”

While the AI industry is learning together how best to analyze and secure AI, it’s essential that we normalize the discussion of AI vulnerability research.

At Google, we place great importance on delivering our AI technology to our customers, and part of that process means security testing and researching our AI products. It’s simply part of our culture. Fixing and mitigating vulnerabilities is crucially important for building trust in new technology, and so is disclosing those findings and discussing them.

As an emerging technology, AI will face intense scrutiny from attackers and researchers. While the AI industry is learning together how best to analyze and secure AI, it’s essential that we normalize the discussion of AI vulnerability research.

That’s why it’s vital for AI developers to normalize sharing AI security research now. Google Cloud intends to lead efforts to enhance security standards globally by promoting transparency, sharing insights, and driving open discussions about AI vulnerabilities, so we can collectively work towards a future where gen AI is secure by default.

Conversely, not sharing vulnerabilities once they’ve been remediated raises the risk that similar or identical vulnerabilities will continue to exist on other platforms. As an industry, we should be making it easier to find and fix vulnerabilities, not harder.

Reaching that future will require communication and collaboration. The Coalition for Secure AI and the open-source Secure AI Framework (SAIF) that it’s based on have important roles to play. By investing in and developing an AI security framework that stretches across the public and private sectors, we can make sure that developers safeguard the technology that supports AI advancements. This will help ensure that AI models are secure by default when they’re implemented.

We want to expand the strong security foundations that have been developed over the past two decades to protect AI systems, applications, and users. Similarly, we advocate for consistent control frameworks that can support AI risk mitigation and scale protections across platforms and tools. Doing so can help ensure that the best protections are available to all AI applications in a scalable and cost-efficient manner.

Stigmatizing the discovery of vulnerabilities will only help attackers. We hope that encouraging vulnerability transparency and driving open discussions will empower developers and other cloud providers to follow suit, addressing security issues without fear of reprisal. It is this mentality that will ultimately help push the AI industry forward.

Let’s raise the bar of AI security industry-wide, as we collectively work towards a future where foundation models are secure by default.

For more leadership guidance from Google Cloud experts, please see our CISO Insights hub.


In case you missed it

Here are the latest updates, products, services, and resources from our security teams so far this month:

- Confetti cannons or fire extinguishers? Read our guide to securing cloud security surprises: Too often, security teams get late invites to product launches. Here’s our guide for how to secure surprise projects so you can quickly swap emergency fire hoses for confetti cannons. Read more.
- How virtual red teams can find high-risk cloud issues before attackers: One of Security Command Center’s advanced capabilities is detecting attack paths with a virtual red team. Here’s how it works and why you need it. Read more.
- New Confidential Computing updates for more hardware security options: Several new Confidential Computing options and updates in the Google Cloud attestation service are now generally available. Here’s what’s new. Read more.
- Project Shield expands free DDoS protection: Marginalized groups and non-profit arts and sciences organizations can tap into the power of Project Shield for protection against DDoS attacks, free of charge. Read more.
- How to protect your site from DDoS attacks with Cloud Networking: Tap the power of Google Cloud Networking and Network Security to protect workloads anywhere on the web, just like Project Shield does (see the illustrative sketch after this list). Read more.
- You can now sign Microsoft Windows artifacts with keys protected by Cloud HSM: You now can perform code signing in your Microsoft ecosystem using SignTool, while protecting your keys with Cloud HSM. Read more.
- How Google Cloud supports telecom regulatory compliance: Operating a telecom network is more than just connecting phone calls. Here’s how Google Cloud is helping operators maintain regulatory compliance. Read more.

Please visit the Google Cloud blog for more security stories published this month.
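To make the Cloud Networking DDoS item above concrete, here is a minimal, illustrative sketch of creating and attaching a Cloud Armor security policy with the google-cloud-compute Python client. The project ID, policy name, backend service name, and blocked IP range are placeholder assumptions for illustration, not values from the article, and the exact client surface may differ between library versions.

```python
# Illustrative sketch only: create a Cloud Armor security policy, add a deny
# rule for a suspicious source range, and attach it to a backend service.
# The project, resource names, and IP range are hypothetical placeholders.
from google.cloud import compute_v1

PROJECT = "my-project"  # placeholder project ID

policies = compute_v1.SecurityPoliciesClient()

# Create an empty security policy (the default rule allows all traffic).
policies.insert(
    project=PROJECT,
    security_policy_resource=compute_v1.SecurityPolicy(
        name="demo-edge-policy",
        description="Illustrative edge policy for filtering abusive traffic",
    ),
).result()

# Add a rule that denies traffic from a hypothetical abusive source range.
policies.add_rule(
    project=PROJECT,
    security_policy="demo-edge-policy",
    security_policy_rule_resource=compute_v1.SecurityPolicyRule(
        priority=1000,
        action="deny(403)",
        match=compute_v1.SecurityPolicyRuleMatcher(
            versioned_expr="SRC_IPS_V1",
            config=compute_v1.SecurityPolicyRuleMatcherConfig(
                src_ip_ranges=["203.0.113.0/24"],  # documentation range, placeholder
            ),
        ),
    ),
).result()

# Attach the policy to an existing global backend service so it is enforced
# at Google's edge, in front of the workload.
backends = compute_v1.BackendServicesClient()
backends.set_security_policy(
    project=PROJECT,
    backend_service="demo-backend-service",
    security_policy_reference_resource=compute_v1.SecurityPolicyReference(
        security_policy=f"projects/{PROJECT}/global/securityPolicies/demo-edge-policy",
    ),
).result()
```

The same flow can be driven from gcloud or Terraform; the point of the sketch is simply that the policy is created once and then enforced in front of any backend service it is attached to.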


Threat Intelligence news

Get reverse-engineering analysis in your browser with capa Explorer Web: We’re introducing capa Explorer Web, a web-based interface to display the results found by the capa reverse-engineering tool (a minimal usage sketch follows below). Read more.

Please visit the Google Cloud blog for more threat intelligence stories published this month.
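As a rough illustration of how results reach a viewer like capa Explorer Web, the sketch below runs the capa command-line tool with JSON output and lists the matched rule names. The sample filename is a placeholder, and the exact layout of the result document may vary between capa versions.

```python
# Illustrative sketch: run capa with JSON output on a sample and list the
# capability rules it matched. "suspicious.exe" is a placeholder filename,
# and the result-document layout shown here may differ across capa versions.
import json
import subprocess

completed = subprocess.run(
    ["capa", "-j", "suspicious.exe"],  # -j asks capa for a JSON result document
    capture_output=True,
    text=True,
    check=True,
)

doc = json.loads(completed.stdout)

# In recent capa result documents, matched rules are keyed by rule name.
for rule_name in sorted(doc.get("rules", {})):
    print(rule_name)
```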
Now hear this: Google Cloud Security and Mandiant podcasts

- How to secure inherited cloud projects: Following our blog on organizing cloud security, Google Cloud’s Taylor Lehmann, director, Office of the CISO, and Luis Urena, cloud security architect, discuss with podcast host Anton Chuvakin how to respond when a security team is invited to secure a cloud project late in the process. Listen here.
- Can AI keep a secret? Google Cloud’s Nelly Porter, director of product management, Cloud Security, explains to Anton exactly what customer problems Confidential Computing can solve when combined with AI. Listen here.
- Defender’s Advantage: Using LLMs to analyze Windows binaries: Vicente Diaz, threat intelligence strategist, VirusTotal, joins host Luke McNamara to discuss his research into using Gemini to analyze Windows binaries and how this can help security operations. Listen here.
- Defender’s Advantage: How threat actors bypass multi-factor authentication: Josh Fleischer, principal security analyst, Mandiant, chats with Luke about the latest trends in MFA bypass, and how threat actors are conducting adversary-in-the-middle (AiTM) attacks to gain access to targeted organizations. Listen here.

To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in two weeks with more security-related updates from Google Cloud.

AI Summary and Description: Yes

Summary: The text discusses the imperative for AI developers to share vulnerability research to enhance AI security. It underscores the findings of Google’s Cloud Vulnerability Research team regarding previously-unknown vulnerabilities in AI platforms, particularly Vertex AI, and highlights the importance of collaboration within the AI industry to promote transparency and improve security.

Detailed Description:

The article emphasizes the ongoing developments in AI technology and the necessity of robust security measures as these technologies evolve rapidly. Key points include:

– **Discovery of Vulnerabilities**: Google’s Cloud Vulnerability Research (CVR) team identified and mitigated vulnerabilities within Vertex AI, demonstrating proactive approaches to AI platform security.
– **Importance of Collaboration**: There is a call for AI developers to share their vulnerability research openly. This transparency is fundamental to reducing the industry’s attack surface and fostering a secure environment.
– **Vulnerability Disclosure**: The CVR team did not limit their research to Google’s own platform; they extended it to another large cloud provider, found similar vulnerabilities in that provider’s tuning architecture, and reported them through the provider’s standard vulnerability disclosure process.
– **Industry Best Practices**: The article advocates for normalizing discussions around AI vulnerabilities to build trust and ensure that emerging AI technologies are secure by default.
– **Frameworks and Standards Development**: Mention of the Coalition for Secure AI and the Secure AI Framework (SAIF) highlights the strategic partnerships needed to create a unified approach to AI security.
– **Future Focus**: Google aims to raise the baseline of AI security through improved standards and collaboration efforts between public and private sectors, ensuring that security practices scale appropriately across applications and platforms.

Overall, the text serves as both an insight into current AI vulnerabilities and a call to action for the industry to enhance AI security standards collectively. This is particularly relevant for professionals in cloud security, AI security, and compliance, as it addresses comprehensive approaches to safeguarding rapidly expanding technologies.