[7.x] Add TaintedLlmPrompt issue type for prompt injection detection #11746
Merged
danog merged 8 commits into vimeo:master on Mar 26, 2026
Conversation
Add a first-class taint type for detecting when user input flows into LLM prompts, enabling static detection of prompt injection vulnerabilities (OWASP LLM01:2025). Ref: psalm/psalm-plugin-laravel#484
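The intended detection, sketched as a minimal analysis fixture. The sink annotation and function name below are illustrative only, not part of this PR; the taint name `llm_prompt` is taken from the annotations discussed in this thread:

```php
<?php
// Hypothetical LLM client: its prompt parameter is declared an llm_prompt sink.
/** @psalm-taint-sink llm_prompt $prompt */
function askModel(string $prompt): string { return ''; }

// User input reaching the prompt unsanitized should now be reported
// as TaintedLlmPrompt instead of going undetected.
$question = (string) ($_GET['q'] ?? '');
askModel('Answer the user: ' . $question);
```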
TaintedLlmPrompt issue type for prompt injection detection
Fix DocumentationTest failures by adding the new issue type to config.xsd and creating its documentation page.
Verify @psalm-taint-escape llm_prompt works and taint propagates through intermediate function calls to LLM sinks.
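A minimal sketch of both behaviors being verified, assuming user-defined functions (all names here are hypothetical; only the `llm_prompt` taint name comes from this PR):

```php
<?php
/** @psalm-taint-sink llm_prompt $prompt */
function askModel(string $prompt): string { return ''; }

// Declaring the return value safe for LLM prompts removes the
// llm_prompt taint from anything flowing through this function.
/** @psalm-taint-escape llm_prompt */
function sanitizeForPrompt(string $input): string {
    // e.g. strip control characters, enforce an allowlist, etc.
    return preg_replace('/[^\w\s.,?!-]/u', '', $input) ?? '';
}

// Taint must survive an intermediate call on its way to the sink.
function buildPrompt(string $raw): string {
    return 'User asked: ' . $raw;
}

$q = (string) ($_GET['q'] ?? '');
askModel(buildPrompt($q));                     // reported: TaintedLlmPrompt
askModel(buildPrompt(sanitizeForPrompt($q)));  // not reported: taint escaped
```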
danog reviewed on Mar 19, 2026
Place INPUT_LLM_PROMPT right after INPUT_EXTRACT and shift USER_SECRET/SYSTEM_SECRET forward, so ALL_INPUT can use the simple (1 << 17) - 1 formula.
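A sketch of the flag layout this suggests. The bit positions are illustrative assumptions (the real constants live in Psalm's taint-type registry); only the `(1 << 17) - 1` formula comes from the review comment:

```php
<?php
// Input taints occupy the low bits contiguously. If INPUT_LLM_PROMPT
// sits right after INPUT_EXTRACT as the last input bit, and the two
// secret taints are shifted up out of the way, the mask of all input
// taints is simply an unbroken run of 17 one-bits.
const INPUT_EXTRACT    = 1 << 15; // illustrative position
const INPUT_LLM_PROMPT = 1 << 16; // new taint, last input bit
const USER_SECRET      = 1 << 17; // shifted forward
const SYSTEM_SECRET    = 1 << 18; // shifted forward

const ALL_INPUT = (1 << 17) - 1;  // bits 0..16: every input taint

// (1 << 17) - 1 covers INPUT_LLM_PROMPT (bit 16) while excluding
// USER_SECRET and SYSTEM_SECRET (bits 17 and 18).
```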
alies-dev added a commit to psalm/psalm-plugin-laravel that referenced this pull request on Mar 19, 2026
Detect LLM prompt injection and output injection vulnerabilities in projects using laravel/ai v0.3.x (OWASP LLM01:2025).

Taint sources (LLM output is untrusted):
- TextResponse::$text via LlmOutputTaintHandler (property-level taint is unsupported by Psalm annotations, so it is handled programmatically)
- TextResponse::__toString() via stub annotation
- Tools\Request data access methods (string, integer, all, etc.)

Taint sinks (prompt injection detection):
- Promptable::prompt(), stream(), queue(), broadcast*()
- Laravel\Ai\agent() helper

Uses `html` as a proxy taint type until vimeo/psalm#11746 merges to add native TaintedLlmPrompt support. Stubs in stubs/ai/0.3/ are loaded conditionally: only when laravel/ai is installed and its version matches major.minor.
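One plausible shape for such a stub, using the `html` proxy taint the commit describes. The namespace, trait form, and signature below are assumptions inferred from the commit message; the real stubs live under stubs/ai/0.3/ in the plugin:

```php
<?php
// Stub sketch: until TaintedLlmPrompt lands upstream, the prompt
// parameter carries the existing `html` taint, so user input reaching
// it is still reported (as TaintedHtml rather than TaintedLlmPrompt).
namespace Laravel\Ai\Concerns;

trait Promptable
{
    /** @psalm-taint-sink html $prompt */
    public function prompt(string $prompt): mixed { /* stub body */ }
}
```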
Collaborator

Thank you!
Design decisions
ALL_INPUT: LLM prompt injection is an input taint (user data → LLM), unlike USER_SECRET/SYSTEM_SECRET (sensitive data leaking out), so it belongs in ALL_INPUT.

Usage
PS: once merged, it would be nice to release this ASAP (as a new beta), as I have some ideas for https://github.com/psalm/psalm-plugin-laravel to use it (the plugin is already tagged as major 4.x and supports Psalm 7.x only).