Description
Currently, duplicate unsafe chains are often detected; an example is shown below. The tool should only report the longest such chain (a sketch of the intended de-duplication follows the example).
Location: ../demo_scans/GenAI-Showcase/apps/local-rag-pdf/rag_module.py:181
Message: Untrusted input 'query' flows to LLM API call without proper sanitization
Tags: security, sanitization, prompt-engineering
Suggestion: Implement input validation or sanitization before passing untrusted input to LLM. Consider using an allow-list approach.
Context:
source: query
sink: llm_call_181
path: query -> retrieved_docs -> formatted_input -> llm_call_181
Issue #11: chain-unsafe-input (high)
Location: ../demo_scans/GenAI-Showcase/apps/local-rag-pdf/rag_module.py:181
Message: Untrusted input 'question' flows to LLM API call without proper sanitization
Tags: security, sanitization, prompt-engineering
Suggestion: Implement input validation or sanitization before passing untrusted input to LLM. Consider using an allow-list approach.
Context:
source: question
sink: llm_call_181
path: question -> retrieved_docs -> formatted_input -> llm_call_181
Issue #12: chain-unsafe-input (high)
Location: ../demo_scans/GenAI-Showcase/apps/local-rag-pdf/app.py:61
Message: Untrusted input 'user_input' flows to LLM API call without proper sanitization
Tags: security, sanitization, prompt-engineering
Suggestion: Implement input validation or sanitization before passing untrusted input to LLM. Consider using an allow-list approach.
Context:
source: user_input
sink: llm_call_61
path: user_input -> llm_call_61
Issue #13: chain-unsafe-input (high)
Location: ../demo_scans/GenAI-Showcase/apps/local-rag-pdf/app.py:61
Message: Untrusted input 'user_input' flows directly to LLM API call without proper sanitization
Tags: security, sanitization, prompt-engineering
Suggestion: Implement input validation or sanitization before passing untrusted input to LLM. Consider using an allow-list approach.
Context:
source: user_input
sink: llm_call_61
path: user_input -> llm_call_61
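A minimal sketch of one possible de-duplication pass, assuming each finding carries source, sink, and path fields. The Finding class, its field names, and the dedupe_longest_chains helper below are hypothetical illustrations, not the tool's actual API: a chain is dropped if another chain to the same sink is strictly longer and contains it, or if an identical chain was already reported (as with Issues #12 and #13 above).

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass(frozen=True)
class Finding:
    # Hypothetical finding shape; field names are assumptions, not the tool's schema.
    source: str
    sink: str
    path: Tuple[str, ...]


def is_subchain(short: Tuple[str, ...], long: Tuple[str, ...]) -> bool:
    """Return True if `short` occurs as a contiguous slice of `long`."""
    n, m = len(short), len(long)
    return n <= m and any(long[i:i + n] == short for i in range(m - n + 1))


def dedupe_longest_chains(findings: List[Finding]) -> List[Finding]:
    """Keep only the longest chain per sink; drop contained sub-chains and exact duplicates."""
    kept = []
    for i, f in enumerate(findings):
        redundant = False
        for j, other in enumerate(findings):
            if j == i or other.sink != f.sink:
                continue
            # A strictly longer chain that contains this one makes it redundant.
            if len(other.path) > len(f.path) and is_subchain(f.path, other.path):
                redundant = True
                break
            # An identical chain reported earlier makes this one a duplicate.
            if other.path == f.path and other.source == f.source and j < i:
                redundant = True
                break
        if not redundant:
            kept.append(f)
    return kept


# The two identical user_input -> llm_call_61 chains from Issues #12 and #13
# collapse into a single finding.
findings = [
    Finding("user_input", "llm_call_61", ("user_input", "llm_call_61")),
    Finding("user_input", "llm_call_61", ("user_input", "llm_call_61")),
]
print(len(dedupe_longest_chains(findings)))  # 1
```

Shorter chains that are fully contained in a longer chain to the same sink would likewise be suppressed, so only the longest reported chain survives.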