Securiti launches Gencore AI, a holistic solution to build Safe Enterprise AI with proprietary data - easily

View

OWASP Top 10 for LLM Applications

Published June 27, 2024

Listen to the content

Large Language Models (LLMs) are powering smart chatbots, enhancing business analytics, accelerating strategic decisions, and automating processes. AI has indeed brought a world of innovation, ease, and scalability. However, it also introduces a plethora of critical risks and vulnerabilities, hindering safer innovation.

Take, for instance, the NOYB’s privacy complaint against an AI tech giant. The complaint was made in response to made-up answers generated by a popular GenAI application when prompted about the date of birth of NOYB’s founder, Max Schrems. Hallucination is a known problem in LLMs. In fact, the New York Times reported that “chatbots invent information at least 3 percent of the time – and as high as 27 percent”. Hallucination in LLMs is the result of overreliance, which happens when the AI system is left without proper oversight and controls.

Maximizing the potential of LLMs while ensuring their responsible development, deployment, and use necessitates a complete understanding of common AI risks and the implementation of appropriate mitigation measures.

The OWASP Top 10 for LLM Applications is an ideal guide compiled by hundreds of experts to enable the safe adoption of AI technologies.

OWASP Top 10 of LLM Applications

The Open Worldwide Application Security Project (OWASP) is a non-profit organization that offers free resources and guides. It is a platform where expert developers, engineers, data scientists, and analysts come together to research and propose best practices, insights, and guidelines that help safeguard systems, applications, and LLMs against cyber threats.

OWASP is a known name in cybersecurity, having first introduced the OWASP Top 10 for web application security. However, in late 2023, the organization introduced its latest project, the OWASP Top 10 List for LLM Applications, in response to the growing LLM landscape and the emerging dangerous risks.

The latest project is the result of the collective insights of 500 experts from across the globe. The experts researched, analyzed, and identified 43 unique threats to LLMs and narrowed the list to the top 10 most critical threats.

This blog will provide a quick rundown of the top 10 risks facing LLM systems and applications, with some quick examples and prevention tips. For a detailed overview of the risks and the proposed mitigation strategies, read the complete OWASP Top 10 List for LLM Applications.

LLM01 - Prompt Injection

Injection threats have existed for a long time, enabling threat actors to execute malicious codes or actions and compromise applications. Prompt injection is one such threat. Ranked as the topmost critical vulnerability, prompt injection allows the attacker to compromise the LLM through meticulously crafted prompts. It manipulates the LLM applications to bypass the initial instructions and follow the attacker's intentions.

OWASP lists two different kinds of prompt injections commonly employed by attackers: direct and indirect prompts.

In direct prompt injection, the attacker interacts with the LLM directly, manipulating it through malicious prompts. For instance, the attacker could ask the LLM to pretend like a chatbot and, if given an instruction, how it would respond. Often, the prompt may include exploits, such as a remote execution code, in between the prompt instructions.

In the indirect prompt injection case, the attacker attempts to execute malicious codes through a corrupt webpage or file. If a user prompts the LLM to go through that website, for instance, to summarize the content, it would inadvertently execute the code.

Prompt injection is a serious security risk in LLMs that enterprises should be wary of as it could allow the attackers to jailbreak the LLM and gain control over it.

Preventive Measures

  • Access to the LLMs should be limited. Role-based permissions should be applied to give minimum-level access.
  • Keep humans in the loop for approvals, especially when LLMs access external systems, such as in the indirect prompt injection action above.
  • A meticulous system, controls, or policies are required to segregate unwanted or malicious content from the prompt.
  • All LLMs should be treated as untrusted, and thus, when the system is designed, appropriate security and governance controls must be applied.

LLM02 - Insecure Output Handling

As the name implies, it is an LLM output-related risk. Suppose a use case where the LLM generates SQLs, allowing teams to streamline and accelerate data analysis. Now, what could go wrong if a malicious actor gets to the LLM and manipulates it to generate SQL that drops all the tables from the database? It would be disastrous for the organization. Threats like SQL vulnerabilities could happen if the downstream agent accepts LLM outputs without careful handling, i.e., sanitization, validation, or filtering.

One reason insecure output handling is placed right after prompt injection is that this risk usually arises with a successful prompt injection attack.

Preventive Measures

  • Apply a zero-trust security approach when dealing with LLMs. Consider these systems as untrusted users that require sanitization and validation.
  • Consider the guidelines provided under the OWASP Application Security Verification Standard (ASVS) framework for secure output handling.

LLM03 - Training Data Poisoning

It is common knowledge that LLMs are trained on vast volumes of data. In fact, data is also needed to fine-tune LLMs' performance and efficiency. However, if the ingested data is corrupt, the LLM will behave abnormally, leading to unethical, biased, or harmful responses. Data poisoning is a tough nut to crack.

To understand a data poisoning attack, let’s take the following situation as an example. Imagine a user asking an LLM to review certain files and databases and summarize its analysis for investment purposes. Without the user's knowledge, an attacker plants false data in the files or databases. As a result, the LLM generates an incorrect or flawed analysis, negatively impacting the user's decision.

Preventive Measures

  • Gain complete visibility of all the sources and ensure they are reliable.
  • Verify the legitimacy of the training data across its lifecycle.
  • Mechanisms should be in place to validate the LLM's outputs and ensure they are correct, accurate, and trustworthy.

LLM04 - Model Denial of Service

The threat listed at number 4 is the equivalent of Distributed Denial of Service (DDoS). In the LLM case, the attacker could send a complex query, prompting the LLM to produce unexpected content that might require consuming excessive resources. Another example is an attacker overwhelming the LLM with too many inputs, causing the model or the application that the model powers to slow down.

Prevent Measures

  • LLM’s resource utilization should be monitored for abnormal spikes.
  • Input should be validated and limited, preventing the LLM from being overwhelmed with excessive prompts.
  • API requests should also be limited, and the resources should be capped.

LLM05 - Supply Chain Vulnerabilities

It is an umbrella term for different types of vulnerabilities, from prompt injections and data poisoning to compromised ML models and biased results. For example, a malicious agent may poison the data sources of a weather forecasting model, causing the model to make false alerts due to inaccurate predictions.

Preventive Measures

  • Perform supplier assessment, evaluating not only the supplier but also their policies or terms and conditions.
  • As a security measure, apply model monitoring for abnormal behavior.
  • Use plugins that you have tested and trust.
  • Apply controls recommended in the OWASP Top Ten's A06:2021 – Vulnerable and Outdated Components to scan for vulnerabilities and patch components.

LLM06 - Sensitive Information Disclosure

Another common risk in LLMs is the leakage of personally identifiable information (PII), especially sensitive data. Since vast volumes of data are used to fine-tune or train LLMs, these datasets often contain sensitive data. If appropriate controls, such as dynamic masking or sensitive data redaction, aren’t used around LLMs, they may inadvertently disclose sensitive information.

Take, for instance, an LLM-powered healthcare application. If the LLM is inadvertently trained on real patient records, the application might reveal sensitive information about actual patients. Simultaneously, most LLMs, such as the ChatGPT, use users’ prompts for training. In such scenarios, sensitive data may be exposed if users unintentionally expose sensitive information in prompts.

Prevent Measures

  • Create and implement data sanitization controls to remove sensitive data from training datasets.
  • With validation policies and controls, security teams can prevent threats like data poisoning by filtering out malicious prompts.
  • It is important to create awareness through employee training on the risks associated with LLMs and data.

LLM07 - Insecure Plugin Design

Insecure plugins are a threat that existed even before LLM, as seen with web services or web-based applications. These separate codes can lead to harmful consequences, such as exfiltration, unauthorized authentication, or indirect prompt injection.

A common example of this vulnerability is a plugin that doesn’t verify inputs and performs actions without authentication. If an attacker discovers this gap in an LLM, they could execute malicious code or gain excessive privileges.

Preventive Measures

  • Least privileged access should be implemented to limit access to all the plugins to only authorized users.
  • Always test and validate the plugins to check for vulnerabilities and risks.
  • Leverage OAuth2 and API keys to enable plugins to authenticate only authorized identities.

LLM08 - Excessive Agency

Excessive agency is a new vulnerability found and listed in the OWASP Top 10 for LLM Applications. The problem occurs when the LLM is provided with excessive permissions to read, write, or execute codes or given total autonomy, enabling the application to execute open-ended functions. For instance, a developer integrates a plugin into the LLM, giving it the ability to read documents for analysis. However, the plugin the developers leverage for the purpose may include the additional functionality of modifying and even deleting the document. If an indirect prompt injection occurs, it could manipulate the agent into deleting valuable documents from the repository, consequently harming the business and its reputation.

Preventive Measures

  • Validate plugins and ensure minimum necessary functionalities are provided to the agents.
  • Leverage only specific plugins to avoid open-ended functions.
  • Human-in-the-loop practice should be considered to validate and approve actions.

LLM09 - Overreliance

Overreliance is another new category of risk associated with LLMs. It occurs when people or applications start relying on LLMs without proper controls, policies, or oversight. Over-dependence on LLMs can result in contextual or factual errors in the outcomes of the LLM, leading to misinformation or misguidance.

Take, for instance, an employee who uses an LLM to summarize the quarterly performance of the business. The company makes strategic decisions based on the data presented in the LLM-generated report. Now, one quarter later, the team realizes that the data is incorrect. However, by that time, the damage had been done.

Preventive Measures

  • It is critical to regularly monitor the outcome generated by the LLM, ensuring it is reliable and consistent.
  • Establish mechanisms to monitor the app’s or users’ interactions with the LLM, such as content filtering, sensitive data redaction, etc.
  • Build automated validations to cross-verify facts.

LLM10 - Model Theft

The last one on the OWASP Top 10 List for LLM Applications is the model theft risk. It is a threat where the malicious actor gets unauthorized access to LLM repositories or infiltrates proprietary LLMs, leading to sensitive data exposure, unauthorized usage, or loss of reputation. Suppose that an LLM has a misconfiguration setting unchecked. If a malicious user discovers such a vulnerability in a proprietary LLM, they will get unauthorized access to it. This could mean they can extract the parameters to replicate functionality or inject malicious prompts, to name a few.

Preventive Measures

  • To mitigate the risk, it is strongly recommended that access control be implemented to limit access to the LLM repositories and training environments.
  • Continuously monitor the access logs to identify abnormal trends in LLM access or usage.
  • Prevent side-channel attacks that occur due to prompt injections by implementing appropriate controls.

Protect Your LLM Landscape & Enable Its Safe Use with Securiti

Securiti Data+AI Command Center is built to empower organizations to enable the safe use of their Data and AI by leveraging contextual intelligence and automated controls. The solution provides unified contextual intelligence and automation for AI and data security, governance, privacy and compliance.

Our Data+AI Command Center can help you safeguard your LLMs and data landscape against the OWASP Top 10 LLM threats:

  • Discover AI models & agents (LLM-08 Excessive Agency, LLM-09 Overreliance, LLM-10 Model Theft)
  • Assess AI model risks (LLM-08 Excessive Agency, LLM-09 Overreliance, LLM-10 Model Theft)
  • Understand data use with AI (LLM-03 Data Model Poisoning, LLM-06 Sensitive Data Disclosure, LLM-08 Excessive Agency, LLM-09 Overreliance)
  • Implement in-line data and AI controls for security & privacy - Monitor and mitigate risks at various LLM interactions: response, prompt, and retrieval via context-aware LLM Firewalls (LLM-01 Prompt Injection, LLM-02  Insecure Output Handling, LLM-04 Model Denial of Service, LLM-06 Sensitive Information Disclosure, LLM-09 Overreliance), and
  • Comply with global regulations and standards - Leverage common tests and controls to automate compliance with global data and AI laws and frameworks.

Request a demo to learn more about the Data+AI Command Center.

Join Our Newsletter

Get all the latest information, law updates and more delivered to your inbox


Share


More Stories that May Interest You

What's
New