Securiti launches Gencore AI, a holistic solution to build Safe Enterprise AI with proprietary data - easily

View

5 Steps to AI Governance: From Shadow AI to Strategic Oversight

Published March 26, 2024 / Updated May 21, 2024

Listen to the content

The groundswell adoption of GenAI has introduced a number of privacy and security concerns to the global data landscape — and has already initiated regulatory action around the globe, most recently in the passing of the EU’s AI Act on March 13, 2024.

Central to these risk factors is shadow AI — one of the biggest challenges facing enterprises that have AI deployment and integration on their 2024 roadmap. These orgs may have every intention of integrating AI models into their data systems in a secure, compliant, transparent, and trustworthy way, but shadow AI can divert the best of intentions, posing a threat to security, ethics, and compliance.

What is Shadow AI?

Shadow AI refers to AI systems that are being developed, deployed, stored, and even shared without security controls or policies in place. The proliferation of shadow AI presents a growing threat, as having no visibility into your AI systems makes them more vulnerable to unauthorized access and can lead to unpleasant surprises.

How Shadow AI Challenges Safe AI Adoption

Blind spots: Orgs often don’t know which AI models are currently active across their enterprise, which are sanctioned versus unsanctioned, or which are open source versus developed by commercial providers. Shadow systems may include AI models deployed directly by developers in various production and non-production environments — or systems shared by vendors as part of your SaaS environment.

Unknown risk level: Even if orgs have visibility into their AI systems, they may not have an accurate picture — or any sense at all — of their active systems’ risk ratings. Orgs need to understand various risk parameters around each of their AI models to know which they should sanction and which they should block. Lack of awareness around model risks may lead to issues like malicious use, toxicity, hallucinatory responses, bias, and discrimination.

Insufficient security controls: Large language models (LLM), like those that power GenAI models, are essentially vast data systems with massive amounts of compressed information yielding outputs trained on valuable data. Without security controls, this data is subject to manipulation, data leakage, and malicious attacks. Organizations must apply the right security controls within or around these models to protect them from unauthorized usage.

Lack of transparency into enterprise data that goes into AI systems: Orgs need to understand which enterprise data is going into their AI models. Lack of clarity around what — and how — data is being used for training, tuning, or inference may raise concerns about entitlements and the potential leakage of sensitive data.

Protecting data generated by AI systems: True to its name, GenAI exists to generate data — and the data generated needs to be protected against internal and external threats. Security teams need full visibility into which data is being generated by which AI system. AI assistants, prompts, and agents, while serving as channels for legitimate queries, are the biggest conduits for attacks and malicious usage. Unguarded prompts, agents, and assistants open the door to harmful interactions, threatening user safety and ethical principles.

Ever-evolving global AI regulations: In addition to the EU AI Act and the NIST AI Risk Management Framework (RMF), countries like China, the UK, Japan, Italy, Israel, Canada, and Brazil are either proposing or enacting AI legislation. Last October, even the US Biden administration issued a first-of-its-kind executive order addressing the proliferation of AI development — and calling for its “safe, secure, and trustworthy” use. The regulatory landscape around AI is only expected to grow in scope and complexity, so starting with safe and compliant practices is key for orgs that want to avoid violations in the future.

5 Steps to Strategic AI Oversight — How to Take Control of Your AI Landscape

1. Discover and catalog AI models that are in use across public clouds, SaaS applications, and private environments — including all of your org’s shadow AI. Uncover AI data that lurks in blind spots, and shine a light in every corner of your AI landscape.

  1. Identify all AI models active across your public clouds, covering both production and non-production environments.
  2. Highlight which data systems are tied to which AI models — and which computing resources are operating on each, thereby tying them to the applications.
  3. Additionally, facilitate the collection of comprehensive details about your AI models, whether they're operating within your SaaS applications or internal projects.

2. Assess risks and classify AI models: This capability is required for aligning AI systems and models with the risk categories outlined by the EU AI Act — and other classifications imposed by global regulatory bodies.

  1. Provide risk ratings for AI models through model cards. These ratings provide comprehensive details, covering aspects such as toxicity, maliciousness, bias, copyright considerations, hallucination risks, and even model efficiency in terms of energy consumption and inference runtime.
  2. Based on these ratings, you can decide which models to sanction and which to block.

3. Map and monitor data + AI flows: It is important not only to know the AI models active within your organization but also to understand how these models are related to enterprise data, sensitive information, data processing, applications, vendors, risks, and so on.

  1. Establish data + AI mapping for all the AI systems in your environment. Comprehensive mapping will enable your privacy, compliance, security, and data teams to identify dependencies, pinpoint potential risks, and ensure that AI governance is proactive rather than reactive.

4. Implement data + AI controls for privacy, security, and compliance — for input and output: If sensitive data finds its way into LLM models, securing it becomes extremely difficult. Similarly, if enterprise data is converted into vector forms, securing it becomes more challenging.

  1. On the input side, ensure that all enterprise data flowing into your AI models — whether unstructured or structured — is inspected, classified, and sanitized. This includes masking, redacting, anonymizing, or tokenizing the data to adhere to enterprise policies. By defining rules for classifying and sanitizing this data in-line, this creates safe AI pipelines. You start by ensuring the secure ingestion of data into AI models, in alignment with your enterprise data policies and user entitlements.
  2. On the data generation and output side, LLM firewalls protect against malicious use and attacks on prompts, assistants, and agreements. These firewalls can defend against various vulnerabilities highlighted in the Open Web Application Security Project (OWASP) Top 10 Web Application Security Risks for LLMs and in NIST AI RMF, including prompt injection attacks and data exfiltration attacks. These data + AI controls can be applied at the retrieval, prompt, or response stages.

5. Comply with regulations: Organizations need comprehensive compliance, preferably with automation, for AI, encompassing an extensive list of global AI regulations and frameworks, including the NIST AI RMF and the EU AI Act. This allows you to define multiple AI projects within the system and check the required controls for each project.

Enabling fast, Safe Adoption of AI in Your Organization

By adhering to these five steps, you can achieve: full transparency into your sanctioned and unsanctioned AI systems, clear visibility of your AI risks, comprehensive data + AI mapping, strong automated AI + data controls, and compliance with global AI regulations.

With these guardrails and strategic AI oversight in place, you can enable both safe and fast adoption of AI throughout your enterprise, driving innovation and tapping into the vast business opportunity that the groundbreaking world of GenAI brings to the technology landscape. Read the whitepaper to learn more about the five steps to secure AI governance, the risks of shadow AI, and the business value that lies ahead for those integrating AI in a secure, transparent, trustworthy, and compliant manner.

Explore AI Governance Center https://securiti.ai/ai-governance/

Join Our Newsletter

Get all the latest information, law updates and more delivered to your inbox


Share


More Stories that May Interest You

What's
New