7 of 10 OWASP LLM categories
LLM01, LLM02, LLM05, LLM06, LLM07, LLM08, and LLM10 covered — 2 strong, 5 partial. In progress: contributions welcome.
Python
Static analysis for Python LLM applications: OpenAI, Anthropic, LangChain, CrewAI, Smolagents, Google ADK, and more.
MIT Licensed
Fully open source under the MIT license. Audit the code, fork it, extend it — no vendor lock-in.
Zero Runtime Overhead
Pure static analysis — no agents, no proxies, no API calls. Runs offline in seconds.
The snippet below shows a common LLM01 (prompt injection) vulnerability — user-controlled input
interpolated into the system role — and the finding that an llmarmor scan reports for it.
```python
from flask import request
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")


def handle_query():
    user_input = request.json["query"]  # taint source
    system_role = request.json.get("role", "assistant")
    messages = [
        {"role": "system", "content": f"You are a {system_role}. Answer concisely."},
        {"role": "user", "content": user_input},
    ]
    response = llm.invoke(messages)
    return response.content
```

```
LLM01 — Prompt Injection [HIGH] app.py:11
  f"You are a {system_role}. Answer concisely."
  Tainted variable `system_role` reaches the system role — attacker can override system instructions.
  Fix: validate or allowlist `system_role` before interpolating it into the system message.
  Ref: https://owasp.org/www-project-top-10-for-large-language-model-applications/
```
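A minimal remediation sketch for that finding, reusing the imports and `llm` object from the snippet above: constrain the attacker-controlled role to a fixed allowlist before it reaches the system message. The specific role names are illustrative assumptions, not part of LLMArmor's rules.

```python
# Allowlist sketch: only known role strings may reach the system prompt.
ALLOWED_ROLES = {"assistant", "support agent", "translator"}  # illustrative values


def handle_query():
    user_input = request.json["query"]
    requested_role = request.json.get("role", "assistant")
    # Fall back to a safe default rather than trusting the request value.
    system_role = requested_role if requested_role in ALLOWED_ROLES else "assistant"
    messages = [
        {"role": "system", "content": f"You are a {system_role}. Answer concisely."},
        {"role": "user", "content": user_input},
    ]
    return llm.invoke(messages).content
```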
Open Source & MIT Licensed
Fully open source under the MIT license. Audit the code, fork it, extend it — no vendor lock-in.
OWASP LLM Top 10 Aligned
Rules map directly to LLM01–LLM10. Every finding includes the OWASP reference and a suggested fix.
CI/CD Friendly
Structured exit codes (0 clean, 1 HIGH/MEDIUM, 2 CRITICAL) and SARIF output let you gate pipelines instantly.
Regex + AST Taint Tracking
Two complementary layers: fast regex for common patterns, plus Python AST taint analysis for aliasing, dict spreading, and multi-line concatenation (see the sketch after this feature grid).
Zero Runtime Overhead
Pure static analysis — no agents, no proxies, no instrumentation. Runs offline with no external calls.
Works With Any LLM Stack
Covers OpenAI, Anthropic, LangChain, CrewAI, Smolagents, Google ADK, Semantic Kernel, MCP, and more.
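To show what the AST taint layer adds over line-by-line regex matching, here is a hypothetical snippet of the indirection described in the Regex + AST Taint Tracking card: the tainted request value is aliased, spread through a dict, and concatenated across lines before it reaches the model, so no single line matches a simple pattern. All names and prompt text are illustrative.

```python
from flask import request
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")


def handle_research_query():
    raw = request.json["topic"]          # taint source
    topic = raw                          # aliasing hides the source variable
    base = {"role": "system"}
    system_msg = {
        **base,                          # dict spread
        "content": "You are a researcher. "
                   "Focus strictly on " + topic + ".",  # multi-line concatenation
    }
    messages = [system_msg, {"role": "user", "content": request.json["question"]}]
    return llm.invoke(messages).content
```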
```
pip install llmarmor
llmarmor scan ./your-app/
```

LLM01 — Prompt Injection 🟢
Supported. 6 injection vectors with role-aware AST taint analysis and str.join() detection.
LLM02 — Sensitive Info Disclosure 🟡
Partial. Detects leaked API keys (OpenAI, Anthropic, Google, HuggingFace) across all file types.
LLM03 — Supply Chain Vulnerabilities 🔴
Out of scope. Requires dependency-tree analysis beyond static source scanning.
LLM04 — Data & Model Poisoning 🔴
Out of scope. Requires runtime monitoring, not static analysis.
LLM05 — Improper Output Handling 🟡
Partial. eval/exec/shell/SQL/HTML sinks with taint tracking.
LLM06 — Insecure Plugin Design 🟡
Partial. @tool functions with dangerous sinks flagged.
LLM07 — System Prompt Leakage 🟡
Partial. Hardcoded prompts in source code and config files.
LLM08 — Excessive Agency 🟢
Supported. 8 pattern categories including dynamic dispatch and disabled approval gates.
LLM09 — Misinformation 🔴
Out of scope. Requires runtime factual verification.
LLM10 — Unbounded Consumption 🟡
Partial. Missing `max_tokens` on LLM API calls, with `**config` dict spread resolution (sketched below).
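As a hypothetical illustration of that LLM10 pattern (the code is illustrative, not taken from LLMArmor's rule set): the completion call below never sets `max_tokens`, and its parameters arrive through a `**config` spread, the kind of indirection the resolver is described as following.

```python
from openai import OpenAI

client = OpenAI()
config = {"model": "gpt-4o", "temperature": 0.2}  # note: no max_tokens anywhere


def summarize(text: str) -> str:
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": f"Summarize:\n{text}"}],
        **config,  # spread of config into the call; still no token cap
    )
    return response.choices[0].message.content
```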
See the full OWASP LLM Top 10 Coverage reference for rule-by-rule details.
LLMArmor is purpose-built for OWASP LLM Top 10 static analysis. See how it differs from dynamic fuzzing tools and commercial runtime guards.
LLMArmor exits with structured codes (0 clean, 1 HIGH/MEDIUM findings, 2 CRITICAL) and emits SARIF output for GitHub Code Scanning. See the CI/CD integration guide for step-by-step GitHub Actions examples.

Run `pip install llmarmor`, then `llmarmor scan ./your-app/`. See the Quick Start guide to scan your first project in under 60 seconds.

Add a `# llmarmor: ignore` comment on the flagged line, list paths in a `.llmarmorignore` file, or configure rule exclusions in `.llmarmor.yaml`. See the suppressing false positives guide for full details. To report a false positive in the rules, open an issue on GitHub.
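For instance, an inline suppression might look like the following. The `# llmarmor: ignore` comment is the mechanism named above; the config file name and the assumption that this line would otherwise be flagged are illustrative.

```python
import json

# Hypothetical false positive: the role comes from an operator-controlled
# config file rather than end-user input, so the finding on the f-string
# line is suppressed with an inline comment.
with open("agent_config.json") as f:
    role = json.load(f)["role"]

system_message = {
    "role": "system",
    "content": f"You are a {role}. Answer concisely.",  # llmarmor: ignore
}
```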