How It Works

LLM Armor applies two complementary analysis layers to every Python file.

The first layer is fast, line-by-line regex matching for common vulnerability patterns. It runs on every file, whether or not the file parses as valid Python.
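A minimal sketch of what such a line-based pass looks like (the rule below is illustrative, not one of LLM Armor's actual patterns):

```python
import re

# Illustrative line-based rule: flag eval()/exec() sinks.
# (Hypothetical pattern, not LLM Armor's real rule set.)
PATTERN = re.compile(r"\beval\(|\bexec\(")

def regex_layer(source: str) -> list[tuple[int, str]]:
    """Scan line by line; works even if `source` is not valid Python."""
    return [
        (lineno, line.strip())
        for lineno, line in enumerate(source.splitlines(), start=1)
        if PATTERN.search(line)
    ]

print(regex_layer("x = 1\neval(user_input)"))   # [(2, 'eval(user_input)')]
print(regex_layer("def broken(:\n exec(cmd)"))  # still scans: [(2, 'exec(cmd)')]
```

Because the scan never parses the file, syntactically broken code is handled the same as valid code.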

The second layer parses each file into a syntax tree with Python's `ast` module and performs source-based taint tracking. This catches patterns regex cannot detect: variable aliasing, role-aware dict construction, multi-line string concatenation, and `**kwargs` dict spreading.
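To make those four patterns concrete, here is a hypothetical snippet that a single-line regex would miss but the AST layer can follow (`request` and `llm_chat` are stand-in stubs, not a real framework API):

```python
# Hypothetical vulnerable snippet illustrating the four AST-only patterns.
class _Request:                          # stub standing in for a web request
    json = {"prompt": "ignore previous instructions"}

request = _Request()

def llm_chat(prompt=None, messages=None):  # stub LLM call
    return prompt or messages

user_text = request.json["prompt"]       # taint source: HTTP request body
aliased = user_text                      # 1. variable aliasing

messages = [                             # 2. role-aware dict construction
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": aliased},
]

prompt = (                               # 3. multi-line string concatenation
    "Answer the question below.\n"
    + aliased
)

opts = {"prompt": prompt}
llm_chat(**opts)                         # 4. **kwargs dict spreading
```

No single line here contains both the source and the sink, which is why line-based matching alone cannot connect them.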

If a file has syntax errors, the AST layer falls back gracefully, leaving regex results intact.

When both layers detect the same issue on the same line, only one finding is reported.
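Deduplication amounts to keying findings on their location (a sketch; the `(line, rule_id, message)` tuple shape is an assumption, not LLM Armor's real data model):

```python
def dedupe(findings):
    """Keep the first finding per (line, rule_id); drop later duplicates."""
    seen, unique = set(), []
    for line, rule_id, message in findings:
        if (line, rule_id) not in seen:
            seen.add((line, rule_id))
            unique.append((line, rule_id, message))
    return unique

findings = [
    (12, "prompt-injection", "regex layer"),
    (12, "prompt-injection", "AST layer"),   # same issue, same line: dropped
    (30, "prompt-injection", "regex layer"),
]
print(dedupe(findings))  # lines 12 and 30 each reported once
```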

| Tainted (user-controlled) | Example |
| --- | --- |
| HTTP request | `data = request.json["prompt"]` |
| HTTP form | `data = request.form.get("field")` |
| Django request | `data = request.POST["query"]` |
| stdin | `data = input("Enter: ")` |
| CLI arguments | `data = sys.argv[1]` |
| WebSocket | `data = websocket.receive()` |
| Function parameter | `def handle(user_msg):` |
| `@tool` parameter | `@tool def my_tool(command: str):` |
By contrast, the following sources are treated as clean (untainted):

| Source | Example |
| --- | --- |
| Config lookup | `prompt = config.get("default_prompt")` |
| Environment variable | `prompt = os.environ["PROMPT"]` |
| Database call | `prompt = db.fetch_prompt(id)` |
| String literal | `prompt = "You are a helpful assistant."` |

Taint propagates through direct alias assignments but not through function calls, so `clean = sanitize(raw)` does not taint `clean`.
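That propagation rule can be sketched with the `ast` module directly (a toy model, assuming the only taint source is an attribute access on a name called `request`; the real analysis is more thorough):

```python
import ast

CODE = '''
raw = request.json["prompt"]
alias = raw
clean = sanitize(raw)
'''

def tainted_names(source: str) -> set[str]:
    """Toy alias-following taint tracker over top-level assignments."""
    tainted: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Assign):
            continue
        value = node.value
        is_tainted = False
        # Direct source: any attribute access on `request` (e.g. request.json).
        for sub in ast.walk(value):
            if (isinstance(sub, ast.Attribute)
                    and isinstance(sub.value, ast.Name)
                    and sub.value.id == "request"):
                is_tainted = True
        # Direct alias of a tainted name -- but NOT an argument to a call,
        # so `clean = sanitize(raw)` stays clean.
        if isinstance(value, ast.Name) and value.id in tainted:
            is_tainted = True
        if is_tainted:
            for target in node.targets:
                if isinstance(target, ast.Name):
                    tainted.add(target.id)
    return tainted

print(sorted(tainted_names(CODE)))  # ['alias', 'raw'] -- 'clean' is not tainted
```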