IDC Names Securiti a Worldwide Leader in Data Privacy

View

Generative AI Security Risks & How to Mitigate Them

Published December 17, 2023 / Updated December 21, 2023

Listen to the content

Introduction

The rise of Generative AI is ushering in a profound transformation across many industries. McKinsey expects the economic potential of generative AI to add between $2.6 trillion to $4.4 trillion to global corporate profits annually.

While Generative AI brings several benefits, there are new and serious security and privacy risks. Concerns about its misuse in cyber attacks, misinformation from and data poisoning of models essential to company processes, and the very real possibility of data exfiltration using models trained on data are raising the question of how to properly manage such models while retaining their benefits.

A recent Malwarebytes survey revealed that 81% of respondents are concerned about the security concerns of Generative AI. This growing concern demonstrates the need for Generative AI governance frameworks and tooling that enterprises can adopt to confidently use Generative AI.

Security Concerns in the Age of Generative AI Advancements

The large-language models behind the Generative AI revolution stand in the billions of parameters, and they almost act as data systems with a natural-language querying interface. Further, there is a growing proliferation of Generative AI models with uneven built-in protections against malicious use, hallucination, and prompt-based attacks. The landscape of concerns can be broken into four distinct areas:

accelerate-genAI-by-enabling-ai-safety

1. AI Model Safety

This reflects the procedures and policies put in place to ensure that AI models function reliably, ethically, and without causing intentional harm. This comprises addressing concerns such as bias, robustness, transparency, and accountability to mitigate the risks of implementing AI systems in various applications.

AI model safety encompasses a Generative AI model’s abilities to act ethically and responsibly, comply with instructions (that may even reflect legal requirements), and resist bias. A model that does not meet these standards may be unsuitable for use regardless of other protections placed around it.

As AI models proliferate across geographies, understanding and cataloging the data sources and inputs used in AI models is crucial to identify potential vulnerabilities, biases, and security risks. Additionally, any new app with poor security algorithms exposes the organization to various vulnerabilities. Since Generative AI features complex algorithms, it gets challenging for in-house security teams to identify security risks and verify that the tool is safe to use.

The complex interplay of model capabilities, geographies, data, and entitlements make AI Safety challenging, but that challenge is reducible to the following focus areas:

  • AI Model Discovery – maintain an inventory of all AI models.
  • AI Model Risk Assessment – assess the risks associated with using AI models, including adherence to instructions, hallucination, bias, and fairness.
  • AI Model Security – establish security protections for AI systems to prevent model tampering (e.g., model poisoning) and model exfiltration.
  • AI Model Entitlements – establish a comprehensive assessment of all model access privileges.

2. Enterprise Data Usage with Generative AI

A core value proposition for Generative AI in the enterprise is their ability to work with enterprise data. Models may either be trained directly on enterprise data or augmented with external or third-party data (see Retrieval Augmented Generation) in order to answer queries about it. In either case, it's crucial to understand and gain holistic insights into the data available to the Generative AI model so that appropriate data controls can be applied. This protects any sensitive data underpinning the Generative AI system and ensures regulatory compliance.

The increased need for data controls in the Generative AI era is not simply tied to enterprise models, though. Generative AI's ability to imitate human communication in almost any style prompts serious concerns about automated social engineering attacks. Preying on users’ existing susceptibilities to such attacks can result in the disclosure of sensitive data or engaging them in security-compromising behavior.

Responsible use of enterprise data for Generative AI includes:

  • Data Inventory – inventory audit of all currently stored, used, and managed data.
  • Data Classification – assess and catalog all data types, including sensitive data and third-party data.
  • Data Access & Entitlements – gain insights into the personnel, applications, and models with access to data.
  • Data Consent, Retention, & Residency – obtain insights into all metadata related to consent, retention, and residency obligations.
  • Data Usage Audit – maintain an audit trail of the data currently being fed to AI models.

3. Prompt Safety

The input into a Generative AI model is called a prompt. Prompts themselves can be broken into (a) system prompts or instructions and (b) user prompts or queries. Often, a Generative AI system will include the system prompt with the user prompt to shape the model’s behavior. Both are also vectors for Generative AI model attacks.

System prompts should be structured with accurate, informative, and unbiased commands to steer models toward acceptable behavior. For example, system prompts may establish ethical boundaries and provide positive and negative examples of responses. System prompts may also help the model reject potentially dangerous user prompts.

But even the best system prompt cannot defend against all malicious user prompts. Consequently, Generative AI systems must scan user prompts independently from the Generative AI model to quickly identify security concerns in real-time, such as looking for prompt injection attacks, requests for sensitive information, and anomalous requests.

Ensuring prompt safety necessitates scanning for:

  • Prompt Injection & Jailbreak: analyze prompts for attempts to discover or override system instructions in order to have the model behave maliciously.
  • Sensitive Data Phishing: analyze prompts for attempts to gain access to sensitive information.
  • Model Hijacking / Knowledge Phishing: analyze prompts for attempts to use the model for unintended purposes, such as extracting information, which can be costly.
  • Denial of Service: prevent behavior that may lead to stalling the model for legitimate uses.
  • Anomalous Behavior: scan for general anomalous access or prompt content that warrants additional inspection.

4. AI Regulations

The AI regulation landscape is rapidly evolving, with dozens of new AI regulations in flight in addition to pre-existing data protection regulations, such as the EU’s General Data Protection Regulation (GDPR) & EU Artificial Intelligence Act (EU AI Act).

Leveraging Generative AI effectively necessitates compliance with existing data protection laws and expected AI governance laws designed to secure sensitive data. A few upcoming developments include:

  • European Commission guidelines on Ethical Use of Artificial Intelligence in educational settings
  • UK DPA Guidance on AI and data protection and data protection risk toolkit
  • French DPA Self-Assessment Guide for AI systems
  • Spanish DPA Guide on machine learning
  • NIST draft AI Risk Management Framework
  • Australian NSW AI Assurance Framework
  • Singapore Infocomm Media Development Authority AI testing toolkit
  • China Cyberspace Administration draft policy on Measures on the Management of Generative Artificial Intelligence
  • India Council of Medical Research Guidelines on the use of AI in biomedical research and healthcare
  • Vietnam draft National Standard on Artificial Intelligence and Big Data

In a dynamic data-driven landscape, data-hungry organizations will need to implement policies and processes that Enable the Safe Use of Generative AI and empower organizations to honor various obligations imposed by AI regulations.

Generative AI Security Requires a Data Command Center

Generative AI security hinges on ensuring the utmost privacy and security of sensitive data that’s been fed into the AI model.

Securiti Data Command Center comes packed with a data controls strategy enabling contextual and automated controls around data and ensuring swift compliance with evolving laws. It helps with:

  • A comprehensive inventory of data that exists;
  • Contextual data classification to identify sensitive data/confidential data;
  • Compliance with regulations that apply to the data fed to the training model, including meeting data consent, residency, and retention requirements;
  • Inventory of all AI models to which data is being fed via various data pipelines;
  • Governance of entitlements to data through granular access controls, dynamic masking, or differential privacy techniques; and
  • Enabling data security posture management to ensure data stays secure at all times.

Download the CPOs Guide to learn about the responsible use of Generative AI and watch the webinar - Managing Privacy in the Era of Generative AI to explore how privacy professionals are navigating governance around these emerging technologies and how a framework of unified data controls across silos can help organizations across industries.

Request a demo today to witness Securiti in action.

Join Our Newsletter

Get all the latest information, law updates and more delivered to your inbox

Share


More Stories that May Interest You

Follow