Securiti leads GigaOm's DSPM Vendor Evaluation with top ratings across technical capabilities & business value.

View

OWASP Top 10 for LLM Applications

Mitigate AI Security Risks with the Broadest Coverage of OWASP Top 10 for LLMs

LLM01 - Prompt Injection

Manipulating prompts to alter the model's intended behavior or bypass its restrictions.

Mitigation Strategies

  • Deploy a context-aware prompt. retrieval, and response firewall to detect and block malicious inputs.
  • Sanitize and restrict sensitive data from being included in model training or response generation.
  • Enforce least-privileged data access to minimize unauthorized manipulation.

Example: The user manipulates an LLM to summarize a webpage with instructions to ignore previous commands.

Learn More
LLM01 - Prompt Injection

LLM02 - Sensitive
Information Disclosure

Exposing confidential or proprietary data through training, fine-tuning, or user interactions.

Mitigation Strategies

  • Implement data masking and sanitization to protect sensitive information in training data.
  • Use multi-layered firewalls at prompt, retrieval, and response stages to block data leaks.
  • Enforce strict entitlement controls to ensure only authorized access to sensitive data.

Example: Samsung employees once inadvertently exposed confidential internal data to ChatGPT.

Learn More
LLM02 - Sensitive Information Disclosure

LLM03 - Supply Chain
Vulnerability

Introducing risks via compromised models, datasets, or third-party integrations in the AI development pipeline.

Mitigation Strategies

  • Conduct model and agent discovery to identify sanctioned and shadow AI.
  • Use model cards to evaluate risks like bias, toxicity, and misconfigurations in third-party components.
  • Conduct third-party risk assessments for all supplier components to proactively identify vulnerabilities.

Example: Pre-trained models and poisoned datasets can introduce biases, backdoors, and vulnerabilities, enabling data theft, system compromise, and harmful content generation.

Learn More
LLM03 - Supply Chain Vulnerability

LLM04 - Data and Model Poisoning

Contaminating datasets or tampering with models to introduce biases or malicious backdoors.

Mitigation Strategies

  • Inspect AI outputs using LLM response firewalls to detect harmful responses.
  • Validate third-party datasets and pre-trained models for biases or backdoors.
  • Monitor LLM outputs to identify unintended or malicious behaviors.

Example: Microsoft’s Tay chatbot was manipulated by malicious users with offensive inputs, causing it to generate harmful content.

Learn More
LLM04 - Data and Model Poisoning

LLM05 - Improper Output Handling

Failing to validate or sanitize model responses before passing them to downstream systems.

Mitigation Strategies

  • Deploy a response firewall to filter and validate AI outputs against company policies.
  • Use built-in firewall policies to block sensitive, offensive, or malicious content.
  • Validate system outputs before execution in downstream applications.

Example: An LLM generates an unrestricted SQL query, enabling attackers to delete sensitive database records.

Learn More
LLM05 - Improper Output Handling

LLM06 - Excessive Agency

Granting LLMs unauthorized autonomy or functionality beyond their intended scope.

Mitigation Strategies

  • Discover and manage AI models and entitlements to enforce least-privileged access.
  • Cross-verify the identity of users sending prompts with their entitlements to prevent unauthorized data access.
  • Combine data source-level access controls with AI system specific inline access governance with Securiti's holistic AI security approach.

Example: A car dealership’s LLM agent with excessive privileges exposes confidential next-quarter sales promotions, impacting current sales and revenue.

Learn More
LLM06 - Excessive Agency

LLM07 - System Prompt
Leakage

Extracting sensitive information or internal instructions embedded in system prompts.

Mitigation Strategies

  • Use prompt firewalls to block attempts to extract sensitive system-level prompts.
  • Exclude API keys and internal configurations from meta prompts.
  • Apply strict controls to protect internal system configurations from exposure.

Example: Threat actors extracted GPT-4o voice mode system prompts, revealing behavior guidelines and sensitive configuration details.

Learn More
LLM07 - System Prompt Leakage

LLM08 - Vector and
Embedding Weaknesses

Exploiting vulnerabilities in vector databases or RAG pipelines to inject malicious content or retrieve sensitive data.

Mitigation Strategies

  • Classify and sanitize sensitive data before storing it as embeddings in vector DBs.
  • Enforce entitlement controls to prevent unauthorized embedding retrieval.
  • Use retrieval firewall to detect and block malicious or tampered embeddings.

Example: In multi-tenant environments, an adversary retrieves another tenant’s embeddings, exposing confidential business data.

Learn More
LLM08 - Vector and Embedding Weaknesses 

LLM09 - Misinformation

Generating false or misleading information that appears credible but lacks factual accuracy.

Mitigation Strategies

  • Ground LLM outputs in trusted, verified internal knowledge bases to prevent hallucinations.
  • Assess and select models based on industry benchmarks like Stanford HELM.
  • Minimize redundant or obsolete data from AI workflows to enhance response accuracy.

Example: An airline customer support chatbot incorrectly stated that bereavement fare refunds were available within 90 days, contradicting the company's policy.

Learn More
LLM09 - Misinformation

LLM10 - Unbounded
Consumption

Overusing LLMs for resource exhaustion, service degradation, or replicating the model's functionality.

Mitigation Strategies

  • Limit excessive requests using response firewalls to prevent resource exhaustion.
  • Monitor and block patterns indicating denial-of-service or resource abuse.
  • Enforce quotas and rate-limiting to safeguard against overuse.

Example: An adversary floods the LLM's API with synthetic data requests to train a duplicate model.

Learn More
LLM10 - Unbounded Consumption

Resources

FAQs

The Open Web Application Security Project (OWASP) is a nonprofit organization focused on improving software security. It provides guidelines and frameworks, such as the OWASP Top 10 for LLM security, to help organizations mitigate vulnerabilities in large language models (LLMs).

Data and model poisoning involve injecting biased or malicious data into training datasets, leading to manipulated outputs. Preventative measures include validating datasets, inspecting AI outputs, and monitoring LLM behavior.

To prevent unintentional data exposure, organizations should implement data masking, use multi-layered firewalls to block leaks, and enforce strict access controls to limit unauthorized data retrieval.

Videos
View More
Mitigating OWASP Top 10 for LLM Applications 2025
Generative AI (GenAI) has transformed how enterprises operate, scale, and grow. There’s an AI application for every purpose, from increasing employee productivity to streamlining...
View More
Top 6 DSPM Use Cases
With the advent of Generative AI (GenAI), data has become more dynamic. New data is generated faster than ever, transmitted to various systems, applications,...
View More
Colorado Privacy Act (CPA)
What is the Colorado Privacy Act? The CPA is a comprehensive privacy law signed on July 7, 2021. It established new standards for personal...
View More
Securiti for Copilot in SaaS
Accelerate Copilot Adoption Securely & Confidently Organizations are eager to adopt Microsoft 365 Copilot for increased productivity and efficiency. However, security concerns like data...
View More
Top 10 Considerations for Safely Using Unstructured Data with GenAI
A staggering 90% of an organization's data is unstructured. This data is rapidly being used to fuel GenAI applications like chatbots and AI search....
View More
Gencore AI: Building Safe, Enterprise-grade AI Systems in Minutes
As enterprises adopt generative AI, data and AI teams face numerous hurdles: securely connecting unstructured and structured data sources, maintaining proper controls and governance,...
View More
Navigating CPRA: Key Insights for Businesses
What is CPRA? The California Privacy Rights Act (CPRA) is California's state legislation aimed at protecting residents' digital privacy. It became effective on January...
View More
Navigating the Shift: Transitioning to PCI DSS v4.0
What is PCI DSS? PCI DSS (Payment Card Industry Data Security Standard) is a set of security standards to ensure safe processing, storage, and...
View More
Securing Data+AI : Playbook for Trust, Risk, and Security Management (TRiSM)
AI's growing security risks have 48% of global CISOs alarmed. Join this keynote to learn about a practical playbook for enabling AI Trust, Risk,...
AWS Startup Showcase Cybersecurity Governance With Generative AI View More
AWS Startup Showcase Cybersecurity Governance With Generative AI
Balancing Innovation and Governance with Generative AI Generative AI has the potential to disrupt all aspects of business, with powerful new capabilities. However, with...

Spotlight Talks

Spotlight 11:29
Not Hype — Dye & Durham’s Analytics Head Shows What AI at Work Really Looks Like
Not Hype — Dye & Durham’s Analytics Head Shows What AI at Work Really Looks Like
Watch Now View
Spotlight 11:18
Rewiring Real Estate Finance — How Walker & Dunlop Is Giving Its $135B Portfolio a Data-First Refresh
Watch Now View
Spotlight 13:38
Accelerating Miracles — How Sanofi is Embedding AI to Significantly Reduce Drug Development Timelines
Sanofi Thumbnail
Watch Now View
Spotlight 10:35
There’s Been a Material Shift in the Data Center of Gravity
Watch Now View
Spotlight 14:21
AI Governance Is Much More than Technology Risk Mitigation
AI Governance Is Much More than Technology Risk Mitigation
Watch Now View
Spotlight 12:!3
You Can’t Build Pipelines, Warehouses, or AI Platforms Without Business Knowledge
Watch Now View
Spotlight 47:42
Cybersecurity – Where Leaders are Buying, Building, and Partnering
Rehan Jalil
Watch Now View
Spotlight 27:29
Building Safe AI with Databricks and Gencore
Rehan Jalil
Watch Now View
Spotlight 46:02
Building Safe Enterprise AI: A Practical Roadmap
Watch Now View
Spotlight 13:32
Ensuring Solid Governance Is Like Squeezing Jello
Watch Now View
Latest
View More
Databricks AI Summit (DAIS) 2025 Wrap Up
5 New Developments in Databricks and How Securiti Customers Benefit Concerns over the risk of leaking sensitive data are currently the number one blocker...
Inside Echoleak View More
Inside Echoleak
How Indirect Prompt Injections Exploit the AI Layer and How to Secure Your Data What is Echoleak? Echoleak (CVE-2025-32711) is a vulnerability discovered in...
What Is Data Risk Assessment and How to Perform it? View More
What Is Data Risk Assessment and How to Perform it?
Get insights into what is a data risk assessment, its importance and how organizations can conduct data risk assessments.
What is AI Security Posture Management (AI-SPM)? View More
What is AI Security Posture Management (AI-SPM)?
AI SPM stands for AI Security Posture Management. It represents a comprehensive approach to ensure the security and integrity of AI systems throughout the...
Beyond DLP: Guide to Modern Data Protection with DSPM View More
Beyond DLP: Guide to Modern Data Protection with DSPM
Learn why traditional data security tools fall short in the cloud and AI era. Learn how DSPM helps secure sensitive data and ensure compliance.
Mastering Cookie Consent: Global Compliance & Customer Trust View More
Mastering Cookie Consent: Global Compliance & Customer Trust
Discover how to master cookie consent with strategies for global compliance and building customer trust while aligning with key data privacy regulations.
View More
Key Amendments to Saudi Arabia PDPL Implementing Regulations
Download the infographic to gain insights into the key amendments to the Saudi Arabia PDPL Implementing Regulations. Learn about proposed changes and key takeaways...
Understanding Data Regulations in Australia’s Telecom Sector View More
Understanding Data Regulations in Australia’s Telecom Sector
Gain insights into the key data regulations in Australia’s telecommunication sector. Learn how Securiti helps ensure swift compliance.
Gencore AI and Amazon Bedrock View More
Building Enterprise-Grade AI with Gencore AI and Amazon Bedrock
Learn how to build secure enterprise AI copilots with Amazon Bedrock models, protect AI interactions with LLM Firewalls, and apply OWASP Top 10 LLM...
DSPM Vendor Due Diligence View More
DSPM Vendor Due Diligence
DSPM’s Buyer Guide ebook is designed to help CISOs and their teams ask the right questions and consider the right capabilities when looking for...
What's
New