Veeam Completes Acquisition of Securiti AI to Create the Industry’s First Trusted Data Platform for Accelerating Safe AI at Scale

View

OWASP Top 10 for LLM Applications

Mitigate AI Security Risks with the Broadest Coverage of OWASP Top 10 for LLMs

LLM01 - Prompt Injection

Manipulating prompts to alter the model's intended behavior or bypass its restrictions.

Mitigation Strategies

  • Deploy a context-aware prompt. retrieval, and response firewall to detect and block malicious inputs.
  • Sanitize and restrict sensitive data from being included in model training or response generation.
  • Enforce least-privileged data access to minimize unauthorized manipulation.

Example: The user manipulates an LLM to summarize a webpage with instructions to ignore previous commands.

Learn More
LLM01 - Prompt Injection

LLM02 - Sensitive
Information Disclosure

Exposing confidential or proprietary data through training, fine-tuning, or user interactions.

Mitigation Strategies

  • Implement data masking and sanitization to protect sensitive information in training data.
  • Use multi-layered firewalls at prompt, retrieval, and response stages to block data leaks.
  • Enforce strict entitlement controls to ensure only authorized access to sensitive data.

Example: Samsung employees once inadvertently exposed confidential internal data to ChatGPT.

Learn More
LLM02 - Sensitive Information Disclosure

LLM03 - Supply Chain
Vulnerability

Introducing risks via compromised models, datasets, or third-party integrations in the AI development pipeline.

Mitigation Strategies

  • Conduct model and agent discovery to identify sanctioned and shadow AI.
  • Use model cards to evaluate risks like bias, toxicity, and misconfigurations in third-party components.
  • Conduct third-party risk assessments for all supplier components to proactively identify vulnerabilities.

Example: Pre-trained models and poisoned datasets can introduce biases, backdoors, and vulnerabilities, enabling data theft, system compromise, and harmful content generation.

Learn More
LLM03 - Supply Chain Vulnerability

LLM04 - Data and Model Poisoning

Contaminating datasets or tampering with models to introduce biases or malicious backdoors.

Mitigation Strategies

  • Inspect AI outputs using LLM response firewalls to detect harmful responses.
  • Validate third-party datasets and pre-trained models for biases or backdoors.
  • Monitor LLM outputs to identify unintended or malicious behaviors.

Example: Microsoft’s Tay chatbot was manipulated by malicious users with offensive inputs, causing it to generate harmful content.

Learn More
LLM04 - Data and Model Poisoning

LLM05 - Improper Output Handling

Failing to validate or sanitize model responses before passing them to downstream systems.

Mitigation Strategies

  • Deploy a response firewall to filter and validate AI outputs against company policies.
  • Use built-in firewall policies to block sensitive, offensive, or malicious content.
  • Validate system outputs before execution in downstream applications.

Example: An LLM generates an unrestricted SQL query, enabling attackers to delete sensitive database records.

Learn More
LLM05 - Improper Output Handling

LLM06 - Excessive Agency

Granting LLMs unauthorized autonomy or functionality beyond their intended scope.

Mitigation Strategies

  • Discover and manage AI models and entitlements to enforce least-privileged access.
  • Cross-verify the identity of users sending prompts with their entitlements to prevent unauthorized data access.
  • Combine data source-level access controls with AI system specific inline access governance with Securiti's holistic AI security approach.

Example: A car dealership’s LLM agent with excessive privileges exposes confidential next-quarter sales promotions, impacting current sales and revenue.

Learn More
LLM06 - Excessive Agency

LLM07 - System Prompt
Leakage

Extracting sensitive information or internal instructions embedded in system prompts.

Mitigation Strategies

  • Use prompt firewalls to block attempts to extract sensitive system-level prompts.
  • Exclude API keys and internal configurations from meta prompts.
  • Apply strict controls to protect internal system configurations from exposure.

Example: Threat actors extracted GPT-4o voice mode system prompts, revealing behavior guidelines and sensitive configuration details.

Learn More
LLM07 - System Prompt Leakage

LLM08 - Vector and
Embedding Weaknesses

Exploiting vulnerabilities in vector databases or RAG pipelines to inject malicious content or retrieve sensitive data.

Mitigation Strategies

  • Classify and sanitize sensitive data before storing it as embeddings in vector DBs.
  • Enforce entitlement controls to prevent unauthorized embedding retrieval.
  • Use retrieval firewall to detect and block malicious or tampered embeddings.

Example: In multi-tenant environments, an adversary retrieves another tenant’s embeddings, exposing confidential business data.

Learn More
LLM08 - Vector and Embedding Weaknesses 

LLM09 - Misinformation

Generating false or misleading information that appears credible but lacks factual accuracy.

Mitigation Strategies

  • Ground LLM outputs in trusted, verified internal knowledge bases to prevent hallucinations.
  • Assess and select models based on industry benchmarks like Stanford HELM.
  • Minimize redundant or obsolete data from AI workflows to enhance response accuracy.

Example: An airline customer support chatbot incorrectly stated that bereavement fare refunds were available within 90 days, contradicting the company's policy.

Learn More
LLM09 - Misinformation

LLM10 - Unbounded
Consumption

Overusing LLMs for resource exhaustion, service degradation, or replicating the model's functionality.

Mitigation Strategies

  • Limit excessive requests using response firewalls to prevent resource exhaustion.
  • Monitor and block patterns indicating denial-of-service or resource abuse.
  • Enforce quotas and rate-limiting to safeguard against overuse.

Example: An adversary floods the LLM's API with synthetic data requests to train a duplicate model.

Learn More
LLM10 - Unbounded Consumption

Resources

FAQs

The Open Web Application Security Project (OWASP) is a nonprofit organization focused on improving software security. It provides guidelines and frameworks, such as the OWASP Top 10 for LLM security, to help organizations mitigate vulnerabilities in large language models (LLMs).

Data and model poisoning involve injecting biased or malicious data into training datasets, leading to manipulated outputs. Preventative measures include validating datasets, inspecting AI outputs, and monitoring LLM behavior.

To prevent unintentional data exposure, organizations should implement data masking, use multi-layered firewalls to block leaks, and enforce strict access controls to limit unauthorized data retrieval.

Videos
View More
Mitigating OWASP Top 10 for LLM Applications 2025
Generative AI (GenAI) has transformed how enterprises operate, scale, and grow. There’s an AI application for every purpose, from increasing employee productivity to streamlining...
View More
Top 6 DSPM Use Cases
With the advent of Generative AI (GenAI), data has become more dynamic. New data is generated faster than ever, transmitted to various systems, applications,...
View More
Colorado Privacy Act (CPA)
What is the Colorado Privacy Act? The CPA is a comprehensive privacy law signed on July 7, 2021. It established new standards for personal...
View More
Securiti for Copilot in SaaS
Accelerate Copilot Adoption Securely & Confidently Organizations are eager to adopt Microsoft 365 Copilot for increased productivity and efficiency. However, security concerns like data...
View More
Top 10 Considerations for Safely Using Unstructured Data with GenAI
A staggering 90% of an organization's data is unstructured. This data is rapidly being used to fuel GenAI applications like chatbots and AI search....
View More
Gencore AI: Building Safe, Enterprise-grade AI Systems in Minutes
As enterprises adopt generative AI, data and AI teams face numerous hurdles: securely connecting unstructured and structured data sources, maintaining proper controls and governance,...
View More
Navigating CPRA: Key Insights for Businesses
What is CPRA? The California Privacy Rights Act (CPRA) is California's state legislation aimed at protecting residents' digital privacy. It became effective on January...
View More
Navigating the Shift: Transitioning to PCI DSS v4.0
What is PCI DSS? PCI DSS (Payment Card Industry Data Security Standard) is a set of security standards to ensure safe processing, storage, and...
View More
Securing Data+AI : Playbook for Trust, Risk, and Security Management (TRiSM)
AI's growing security risks have 48% of global CISOs alarmed. Join this keynote to learn about a practical playbook for enabling AI Trust, Risk,...
AWS Startup Showcase Cybersecurity Governance With Generative AI View More
AWS Startup Showcase Cybersecurity Governance With Generative AI
Balancing Innovation and Governance with Generative AI Generative AI has the potential to disrupt all aspects of business, with powerful new capabilities. However, with...

Spotlight Talks

Spotlight 50:52
From Data to Deployment: Safeguarding Enterprise AI with Security and Governance
Watch Now View
Spotlight 11:29
Not Hype — Dye & Durham’s Analytics Head Shows What AI at Work Really Looks Like
Not Hype — Dye & Durham’s Analytics Head Shows What AI at Work Really Looks Like
Watch Now View
Spotlight 11:18
Rewiring Real Estate Finance — How Walker & Dunlop Is Giving Its $135B Portfolio a Data-First Refresh
Watch Now View
Spotlight 13:38
Accelerating Miracles — How Sanofi is Embedding AI to Significantly Reduce Drug Development Timelines
Sanofi Thumbnail
Watch Now View
Spotlight 10:35
There’s Been a Material Shift in the Data Center of Gravity
Watch Now View
Spotlight 14:21
AI Governance Is Much More than Technology Risk Mitigation
AI Governance Is Much More than Technology Risk Mitigation
Watch Now View
Spotlight 12:!3
You Can’t Build Pipelines, Warehouses, or AI Platforms Without Business Knowledge
Watch Now View
Spotlight 47:42
Cybersecurity – Where Leaders are Buying, Building, and Partnering
Rehan Jalil
Watch Now View
Spotlight 27:29
Building Safe AI with Databricks and Gencore
Rehan Jalil
Watch Now View
Spotlight 46:02
Building Safe Enterprise AI: A Practical Roadmap
Watch Now View
Latest
View More
DataAI Security: Why Healthcare Organizations Choose Securiti
Discover why healthcare organizations trust Securiti for Data & AI Security. Learn key blockers, five proven advantages, and what safe data innovation makes possible.
View More
The Anthropic Exploit: Welcome to the Era of AI Agent Attacks
Explore the first AI agent attack, why it changes everything, and how DataAI Security pillars like Intelligence, CommandGraph, and Firewalls protect sensitive data.
View More
Aligning Your AI Systems With GDPR: What You Need to Know
Securiti’s latest blog walks you through all the important information and guidance you need to ensure your AI systems are compliant with GDPR requirements.
Network Security: Definition, Challenges, & Best Practices View More
Network Security: Definition, Challenges, & Best Practices
Discover what network security is, how it works, types, benefits, and best practices. Learn why network security is core to having a strong data...
View More
Data & AI Security Challenges in the Credit Reporting Industry
Explore key data and AI security challenges facing credit bureaus—PII exposure, model risk, data accuracy, access governance, AI bias, and compliance with FCRA, GDPR,...
EU AI Act: What Changes Now vs What Starts in 2026 View More
EU AI Act: What Changes Now vs What Starts in 2026
Understand the EU AI Act rollout—what obligations apply now, what phases in by 2026, and how providers and deployers should prepare for risk tiers,...
View More
Solution Brief: Microsoft Purview + Securiti
Extend Microsoft Purview with Securiti to discover, classify, and reduce data & AI risk across hybrid environments with continuous monitoring and automated remediation. Learn...
Top 7 Data & AI Security Trends 2026 View More
Top 7 Data & AI Security Trends 2026
Discover the top 7 Data & AI security trends for 2026. Learn how to secure AI agents, govern data, manage risk, and scale AI...
View More
Navigating HITRUST: A Guide to Certification
Securiti's eBook is a practical guide to HITRUST certification, covering everything from choosing i1 vs r2 and scope systems to managing CAPs & planning...
The DSPM Architect’s Handbook View More
The DSPM Architect’s Handbook: Building an Enterprise-Ready Data+AI Security Program
Get certified in DSPM. Learn to architect a DSPM solution, operationalize data and AI security, apply enterprise best practices, and enable secure AI adoption...
What's
New