The rapid adoption of generative AI has introduced unprecedented security challenges. With the release of the OWASP Top 10 for LLM Applications 2025, organizations now have an updated roadmap to identify and mitigate critical vulnerabilities, from prompt injection to system prompt leakage, ensuring secure and compliant AI deployments.
The OWASP Top 10 for LLM Applications 2025 ranks the most severe security risks facing generative AI systems. As enterprises increasingly embed large language models into their core operations, understanding these vulnerabilities is essential for maintaining robust AI governance and ensuring regulatory compliance.
The Open Worldwide Application Security Project (OWASP) first launched this community-driven initiative in 2023 to highlight security issues specific to artificial intelligence. The newly released 2025 version reflects the evolving cybersecurity landscape in AI, incorporating insights from a diverse global group of contributors. It addresses both persistent threats and emerging vulnerabilities discovered through real-world exploits.
Key Updates in the 2025 LLM Security List
The updated OWASP Top 10 for LLM Applications 2025 introduces significant changes that reflect how attackers are adapting to new AI architectures. While Prompt Injection retains its number-one spot as the most critical vulnerability, several new risks have emerged that demand immediate attention from security teams.
Two entirely new entries highlight the complexities of modern AI deployments. System Prompt Leakage addresses the reality that instructions embedded within applications are rarely securely isolated, leading to the exposure of sensitive operational data. Additionally, Vector and Embedding Weaknesses has been added to address vulnerabilities in Retrieval-Augmented Generation (RAG) systems, which have become foundational for grounding model outputs in enterprise data.
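To see how retrieval-layer weaknesses are typically addressed, consider enforcing document-level permissions before retrieved content ever reaches the model prompt. The sketch below is a minimal illustration, not any specific library's API: `Chunk`, `VectorStore`, and `retrieve_for_user` are hypothetical names, and the `search` method stands in for a real embedding similarity search.

```python
# Minimal sketch: permission-aware retrieval for a RAG pipeline.
# All class and function names here are illustrative assumptions,
# not part of any specific framework.
from dataclasses import dataclass, field


@dataclass
class Chunk:
    text: str
    allowed_roles: set[str] = field(default_factory=set)


@dataclass
class VectorStore:
    chunks: list[Chunk]

    def search(self, query: str, k: int = 5) -> list[Chunk]:
        # Stand-in for a real similarity search over embeddings.
        return self.chunks[:k]


def retrieve_for_user(store: VectorStore, query: str,
                      user_roles: set[str]) -> list[Chunk]:
    """Drop any retrieved chunk the caller is not authorized to see,
    before it can be embedded in the model prompt."""
    return [c for c in store.search(query) if c.allowed_roles & user_roles]


store = VectorStore([
    Chunk("Public pricing sheet", {"customer", "employee"}),
    Chunk("Internal product roadmap", {"employee"}),
])
print([c.text for c in retrieve_for_user(store, "roadmap", {"customer"})])
# ['Public pricing sheet']
```

Filtering at retrieval time, rather than relying on the model to withhold restricted content, keeps authorization decisions in deterministic code where they belong.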
The list also expands on previously identified risks. Misinformation now encompasses the dangers of overreliance, emphasizing the risks of trusting model outputs without verification. Unbounded Consumption, previously categorized as denial of service, now includes the severe financial and operational risks associated with uncontrolled resource usage. Furthermore, as agentic architectures grant models more autonomy, the risks associated with Excessive Agency have been significantly elevated.
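Unbounded Consumption is usually mitigated with hard budgets enforced outside the model. The following minimal sketch, assuming an illustrative 50,000-token hourly cap and a hypothetical `UsageGuard` class, rejects any request that would push a user past a rolling one-hour budget.

```python
# Minimal sketch: per-user budget guard against unbounded consumption.
# The cap and class name are illustrative assumptions, not a standard.
import time
from collections import defaultdict

MAX_TOKENS_PER_HOUR = 50_000  # illustrative cap


class UsageGuard:
    def __init__(self) -> None:
        # user_id -> list of (timestamp, tokens) records
        self._usage: dict[str, list[tuple[float, int]]] = defaultdict(list)

    def check_and_record(self, user_id: str, requested_tokens: int) -> bool:
        """Return True if the request fits within the rolling one-hour budget."""
        now = time.time()
        recent = [(t, n) for t, n in self._usage[user_id] if now - t < 3600]
        if sum(n for _, n in recent) + requested_tokens > MAX_TOKENS_PER_HOUR:
            return False
        recent.append((now, requested_tokens))
        self._usage[user_id] = recent
        return True


guard = UsageGuard()
allowed = guard.check_and_record("user-123", requested_tokens=1_200)
print("proceed" if allowed else "reject: hourly budget exceeded")
```

The same pattern extends naturally to per-request timeouts and spend limits; the point is that the ceiling is enforced before the model runs, not after costs accrue.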
The Complete OWASP Top 10 for LLM Applications 2025

Understanding the full spectrum of these vulnerabilities is the first step toward comprehensive AI security auditing. The complete 2025 list includes:
- LLM01: Prompt Injection – Occurs when user prompts alter the model's behavior in unintended or malicious ways.
- LLM02: Sensitive Information Disclosure – The exposure of confidential data affecting both the model and its host application.
- LLM03: Supply Chain – Vulnerabilities stemming from compromised third-party models, datasets, or plugins.
- LLM04: Data and Model Poisoning – Manipulation of pre-training, fine-tuning, or embedding data to compromise outputs.
- LLM05: Improper Output Handling – Insufficient validation and sanitization of the content generated by the model (see the sketch after this list).
- LLM06: Excessive Agency – Risks arising when AI systems are granted unchecked autonomy and permissions.
- LLM07: System Prompt Leakage – The unauthorized exposure of foundational system instructions and operational logic.
- LLM08: Vector and Embedding Weaknesses – Security flaws within the retrieval mechanisms and data embeddings used by the system.
- LLM09: Misinformation – The generation of false or misleading information, compounded by user overreliance.
- LLM10: Unbounded Consumption – Excessive resource utilization leading to operational strain and unexpected costs.
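As a concrete example of the control LLM05 calls for, the sketch below treats model output as untrusted input and escapes it before it is rendered in a browser. It is a minimal illustration using Python's standard `html.escape`; a real deployment would pair it with context-aware encoding and a content security policy.

```python
# Minimal sketch: escape LLM output before rendering it as HTML,
# so any markup the model emits is displayed as text, not executed.
import html


def render_model_output(raw_output: str) -> str:
    # Escapes <, >, &, and quote characters.
    return html.escape(raw_output)


print(render_model_output("<script>alert('xss')</script>"))
# &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;
```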
Building Digital Resilience Through AI Governance
The release of the OWASP Top 10 for LLM Applications 2025 underscores that securing artificial intelligence requires more than traditional perimeter defenses; it demands comprehensive AI governance standards. As organizations integrate these models into customer-facing applications and internal workflows, they must adopt security-by-design principles that address the unique characteristics of generative AI.
This updated framework aligns closely with international efforts to standardize AI safety, such as the NIST Artificial Intelligence Risk Management Framework. By mapping the OWASP vulnerabilities to established compliance requirements, organizations can transform security obligations into a competitive advantage, demonstrating to stakeholders that their AI systems are both innovative and trustworthy.
Nemko Digital provides expert AI trust and compliance services to help organizations navigate these complex security challenges. By leveraging the OWASP Top 10 project guidelines alongside international standards like ISO/IEC 42001, we enable enterprises to build resilient, compliant, and secure AI architectures.