Securiti leads GigaOm's DSPM Vendor Evaluation with top ratings across technical capabilities & business value.

View

OWASP Top 10 for LLM Applications

Mitigate AI Security Risks with the Broadest Coverage of OWASP Top 10 for LLMs

LLM01 - Prompt Injection

Manipulating prompts to alter the model's intended behavior or bypass its restrictions.

Mitigation Strategies

  • Deploy a context-aware prompt. retrieval, and response firewall to detect and block malicious inputs.
  • Sanitize and restrict sensitive data from being included in model training or response generation.
  • Enforce least-privileged data access to minimize unauthorized manipulation.

Example: The user manipulates an LLM to summarize a webpage with instructions to ignore previous commands.

Learn More
LLM01 - Prompt Injection

LLM02 - Sensitive
Information Disclosure

Exposing confidential or proprietary data through training, fine-tuning, or user interactions.

Mitigation Strategies

  • Implement data masking and sanitization to protect sensitive information in training data.
  • Use multi-layered firewalls at prompt, retrieval, and response stages to block data leaks.
  • Enforce strict entitlement controls to ensure only authorized access to sensitive data.

Example: Samsung employees once inadvertently exposed confidential internal data to ChatGPT.

Learn More
LLM02 - Sensitive Information Disclosure

LLM03 - Supply Chain
Vulnerability

Introducing risks via compromised models, datasets, or third-party integrations in the AI development pipeline.

Mitigation Strategies

  • Conduct model and agent discovery to identify sanctioned and shadow AI.
  • Use model cards to evaluate risks like bias, toxicity, and misconfigurations in third-party components.
  • Conduct third-party risk assessments for all supplier components to proactively identify vulnerabilities.

Example: Pre-trained models and poisoned datasets can introduce biases, backdoors, and vulnerabilities, enabling data theft, system compromise, and harmful content generation.

Learn More
LLM03 - Supply Chain Vulnerability

LLM04 - Data and Model Poisoning

Contaminating datasets or tampering with models to introduce biases or malicious backdoors.

Mitigation Strategies

  • Inspect AI outputs using LLM response firewalls to detect harmful responses.
  • Validate third-party datasets and pre-trained models for biases or backdoors.
  • Monitor LLM outputs to identify unintended or malicious behaviors.

Example: Microsoft’s Tay chatbot was manipulated by malicious users with offensive inputs, causing it to generate harmful content.

Learn More
LLM04 - Data and Model Poisoning

LLM05 - Improper Output Handling

Failing to validate or sanitize model responses before passing them to downstream systems.

Mitigation Strategies

  • Deploy a response firewall to filter and validate AI outputs against company policies.
  • Use built-in firewall policies to block sensitive, offensive, or malicious content.
  • Validate system outputs before execution in downstream applications.

Example: An LLM generates an unrestricted SQL query, enabling attackers to delete sensitive database records.

Learn More
LLM05 - Improper Output Handling

LLM06 - Excessive Agency

Granting LLMs unauthorized autonomy or functionality beyond their intended scope.

Mitigation Strategies

  • Discover and manage AI models and entitlements to enforce least-privileged access.
  • Cross-verify the identity of users sending prompts with their entitlements to prevent unauthorized data access.
  • Combine data source-level access controls with AI system specific inline access governance with Securiti's holistic AI security approach.

Example: A car dealership’s LLM agent with excessive privileges exposes confidential next-quarter sales promotions, impacting current sales and revenue.

Learn More
LLM06 - Excessive Agency

LLM07 - System Prompt
Leakage

Extracting sensitive information or internal instructions embedded in system prompts.

Mitigation Strategies

  • Use prompt firewalls to block attempts to extract sensitive system-level prompts.
  • Exclude API keys and internal configurations from meta prompts.
  • Apply strict controls to protect internal system configurations from exposure.

Example: Threat actors extracted GPT-4o voice mode system prompts, revealing behavior guidelines and sensitive configuration details.

Learn More
LLM07 - System Prompt Leakage

LLM08 - Vector and
Embedding Weaknesses

Exploiting vulnerabilities in vector databases or RAG pipelines to inject malicious content or retrieve sensitive data.

Mitigation Strategies

  • Classify and sanitize sensitive data before storing it as embeddings in vector DBs.
  • Enforce entitlement controls to prevent unauthorized embedding retrieval.
  • Use retrieval firewall to detect and block malicious or tampered embeddings.

Example: In multi-tenant environments, an adversary retrieves another tenant’s embeddings, exposing confidential business data.

Learn More
LLM08 - Vector and Embedding Weaknesses 

LLM09 - Misinformation

Generating false or misleading information that appears credible but lacks factual accuracy.

Mitigation Strategies

  • Ground LLM outputs in trusted, verified internal knowledge bases to prevent hallucinations.
  • Assess and select models based on industry benchmarks like Stanford HELM.
  • Minimize redundant or obsolete data from AI workflows to enhance response accuracy.

Example: An airline customer support chatbot incorrectly stated that bereavement fare refunds were available within 90 days, contradicting the company's policy.

Learn More
LLM09 - Misinformation

LLM10 - Unbounded
Consumption

Overusing LLMs for resource exhaustion, service degradation, or replicating the model's functionality.

Mitigation Strategies

  • Limit excessive requests using response firewalls to prevent resource exhaustion.
  • Monitor and block patterns indicating denial-of-service or resource abuse.
  • Enforce quotas and rate-limiting to safeguard against overuse.

Example: An adversary floods the LLM's API with synthetic data requests to train a duplicate model.

Learn More
LLM10 - Unbounded Consumption

Resources

FAQs

The Open Web Application Security Project (OWASP) is a nonprofit organization focused on improving software security. It provides guidelines and frameworks, such as the OWASP Top 10 for LLM security, to help organizations mitigate vulnerabilities in large language models (LLMs).

Data and model poisoning involve injecting biased or malicious data into training datasets, leading to manipulated outputs. Preventative measures include validating datasets, inspecting AI outputs, and monitoring LLM behavior.

To prevent unintentional data exposure, organizations should implement data masking, use multi-layered firewalls to block leaks, and enforce strict access controls to limit unauthorized data retrieval.

Videos

View More

Mitigating OWASP Top 10 for LLM Applications 2025

Generative AI (GenAI) has transformed how enterprises operate, scale, and grow. There’s an AI application for every purpose, from increasing employee productivity to streamlining...

View More

DSPM vs. CSPM – What’s the Difference?

While the cloud has offered the world immense growth opportunities, it has also introduced unprecedented challenges and risks. Solutions like Cloud Security Posture Management...

View More

Top 6 DSPM Use Cases

With the advent of Generative AI (GenAI), data has become more dynamic. New data is generated faster than ever, transmitted to various systems, applications,...

View More

Colorado Privacy Act (CPA)

What is the Colorado Privacy Act? The CPA is a comprehensive privacy law signed on July 7, 2021. It established new standards for personal...

View More

Securiti for Copilot in SaaS

Accelerate Copilot Adoption Securely & Confidently Organizations are eager to adopt Microsoft 365 Copilot for increased productivity and efficiency. However, security concerns like data...

View More

Top 10 Considerations for Safely Using Unstructured Data with GenAI

A staggering 90% of an organization's data is unstructured. This data is rapidly being used to fuel GenAI applications like chatbots and AI search....

View More

Gencore AI: Building Safe, Enterprise-grade AI Systems in Minutes

As enterprises adopt generative AI, data and AI teams face numerous hurdles: securely connecting unstructured and structured data sources, maintaining proper controls and governance,...

View More

Navigating CPRA: Key Insights for Businesses

What is CPRA? The California Privacy Rights Act (CPRA) is California's state legislation aimed at protecting residents' digital privacy. It became effective on January...

View More

Navigating the Shift: Transitioning to PCI DSS v4.0

What is PCI DSS? PCI DSS (Payment Card Industry Data Security Standard) is a set of security standards to ensure safe processing, storage, and...

View More

Securing Data+AI : Playbook for Trust, Risk, and Security Management (TRiSM)

AI's growing security risks have 48% of global CISOs alarmed. Join this keynote to learn about a practical playbook for enabling AI Trust, Risk,...

Spotlight Talks

Spotlight 14:21

AI Governance Is Much More than Technology Risk Mitigation

AI Governance Is Much More than Technology Risk Mitigation
Watch Now View
Spotlight 12:!3

You Can’t Build Pipelines, Warehouses, or AI Platforms Without Business Knowledge

Watch Now View
Spotlight 47:42

Cybersecurity – Where Leaders are Buying, Building, and Partnering

Rehan Jalil
Watch Now View
Spotlight 27:29

Building Safe AI with Databricks and Gencore

Rehan Jalil
Watch Now View
Spotlight 46:02

Building Safe Enterprise AI: A Practical Roadmap

Watch Now View
Spotlight 13:32

Ensuring Solid Governance Is Like Squeezing Jello

Watch Now View
Spotlight 40:46

Securing Embedded AI: Accelerate SaaS AI Copilot Adoption Safely

Watch Now View
Spotlight 10:05

Unstructured Data: Analytics Goldmine or a Governance Minefield?

Viral Kamdar
Watch Now View
Spotlight 21:30

Companies Cannot Grow If CISOs Don’t Allow Experimentation

Watch Now View
Spotlight 2:48

Unlocking Gen AI For Enterprise With Rehan Jalil

Rehan Jalil
Watch Now View

Latest

The ROI of Safe Enterprise AI View More

The ROI of Safe Enterprise AI: A Business Leader’s Guide

The fundamental truth of today’s competitive landscape is that businesses harnessing data through AI will outperform those that don’t. Especially with 90% of enterprise...

View More

Accelerating Safe Enterprise AI: Securiti’s Gencore AI with Databricks and Anthropic Claude

Securiti AI collaborates with the largest firms in the world who are racing to adopt and deploy safe generative AI systems, leveraging their own...

New Draft Amendments to China Cybersecurity Law View More

New Draft Amendments to China Cybersecurity Law

Gain insights into the new draft amendments to the China Cybersecurity Law (CSL). Learn more about legal responsibilities, noncompliance penalties, the significance of the...

View More

What are Data Security Controls & Its Types

Learn what are data security controls, the types of data security controls, best practices for implementing them, and how Securiti can help.

View More

Top 10 Privacy Milestones That Defined 2024

Discover the top 10 privacy milestones that defined 2024. Learn how privacy evolved in 2024, including key legislations enacted, data breaches, and AI milestones.

View More

2025 Privacy Law Updates: Key Developments You Need to Know

Download the whitepaper to discover privacy law updates in 2025 and the key developments you need to know. Learn how Securiti helps ensure swift...

Comparison of RoPA Field Requirements Across Jurisdictions View More

Comparison of RoPA Field Requirements Across Jurisdictions

Download the infographic to compare Records of Processing Activities (RoPA) field requirements across jurisdictions. Learn its importance, penalties, and how to navigate RoPA.

Navigating Kenya’s Data Protection Act View More

Navigating Kenya’s Data Protection Act: What Organizations Need To Know

Download the infographic to discover key details about navigating Kenya’s Data Protection Act and simplify your compliance journey.

Gencore AI and Amazon Bedrock View More

Building Enterprise-Grade AI with Gencore AI and Amazon Bedrock

Learn how to build secure enterprise AI copilots with Amazon Bedrock models, protect AI interactions with LLM Firewalls, and apply OWASP Top 10 LLM...

DSPM Vendor Due Diligence View More

DSPM Vendor Due Diligence

DSPM’s Buyer Guide ebook is designed to help CISOs and their teams ask the right questions and consider the right capabilities when looking for...

What's
New