Server-Side Request Forgery (SSRF) in ChatOpenAI Image Token Counting
Summary
The ChatOpenAI.get_num_tokens_from_messages() method fetches arbitrary image_url values without validation when computing token counts for vision-enabled models. This allows attackers to trigger Server-Side Request Forgery (SSRF) attacks by providing malicious image URLs in user input.
Severity
Low - The vulnerability allows SSRF attacks, but its impact is limited because:
- Responses are not returned to the attacker (blind SSRF)
- Default 5-second timeout limits resource exhaustion
- Non-image responses fail at PIL image parsing
Impact
An attacker who can control image URLs passed to get_num_tokens_from_messages() can:
- Trigger HTTP requests from the application server to arbitrary internal or external URLs
- Cause the server to access internal network resources (private IPs, cloud metadata endpoints)
- Cause minor resource consumption through image downloads (bounded by timeout)
Note: This vulnerability occurs during token counting, which may happen outside of model invocation (e.g., in logging, metrics, or token budgeting flows).
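For example, a token-budgeting check that never calls the model still causes the server to fetch an attacker-supplied URL. The sketch below is illustrative: the model name, budget threshold, and message text are not from the advisory, and it assumes OPENAI_API_KEY is set so that ChatOpenAI can be constructed (the counting itself makes no OpenAI API call).

```python
# Illustrative token-budgeting flow: counting alone triggers the image fetch.
# Assumes OPENAI_API_KEY is set; get_num_tokens_from_messages() makes no
# OpenAI API call, but it does issue the server-side HTTP request below.
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")  # vision-capable model (illustrative)

# Attacker-controlled content, e.g. pasted into a chat UI by an end user.
untrusted = HumanMessage(
    content=[
        {"type": "text", "text": "What is in this picture?"},
        {
            "type": "image_url",
            # Without detail: "low", the URL is fetched to size the image.
            "image_url": {"url": "http://169.254.169.254/latest/meta-data/"},
        },
    ]
)

# The budget check alone causes the request to the internal endpoint.
if model.get_num_tokens_from_messages([untrusted]) < 4096:
    ...  # only here would the model actually be invoked
```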
Details
The vulnerable code path:
- get_num_tokens_from_messages() processes messages containing image_url content blocks
- For images without detail: "low", it calls _url_to_size() to fetch the image and compute token counts
- _url_to_size() performs httpx.get(image_source) on any URL without validation (sketched below)
- Prior to the patch, there was no SSRF protection, size limits, or explicit timeout
File: libs/partners/openai/langchain_openai/chat_models/base.py
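To make the path concrete, here is a simplified sketch of the pre-patch behavior described above. It is not the library's exact code; the helper name is invented, and only httpx's default 5-second timeout constrains the request.

```python
# Simplified sketch of the pre-patch fetch path (not the library's exact code).
# Any attacker-supplied URL is fetched with no scheme, host, or size checks;
# non-image responses only fail once PIL tries to parse the bytes.
from io import BytesIO

import httpx
from PIL import Image


def _url_to_size_sketch(image_source: str) -> tuple[int, int]:
    response = httpx.get(image_source)  # unvalidated, attacker-controlled URL
    response.raise_for_status()
    image = Image.open(BytesIO(response.content))  # non-images fail only here
    return image.size  # (width, height) feeds the token estimate
```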
Patches
The vulnerability has been patched in langchain-openai==1.1.9 (requires langchain-core==1.2.11).
The patch adds:
- SSRF validation using langchain_core._security._ssrf_protection.validate_safe_url() to block:
  - Private IP ranges (RFC 1918, loopback, link-local)
  - Cloud metadata endpoints (169.254.169.254, etc.)
  - Invalid URL schemes
- An explicit size limit (50 MB maximum, matching OpenAI's payload limit)
- An explicit timeout (5 seconds, the same as the httpx.get default)
- The ability to disable image fetching via the allow_fetching_images=False parameter (see the sketch below)
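A hedged sketch of opting out of image fetching after upgrading: the allow_fetching_images flag is named in the patch list above, but the advisory does not say where it is accepted, so passing it as a keyword argument to get_num_tokens_from_messages() is an assumption; check the patched release's docstrings for the actual call site.

```python
# Sketch only: the flag name comes from the patch notes, but its exact location
# (shown here as a keyword argument to get_num_tokens_from_messages()) is an
# assumption. With fetching disabled, image_url values are never requested.
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")  # illustrative model name

messages = [
    HumanMessage(
        content=[
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ]
    )
]

token_count = model.get_num_tokens_from_messages(
    messages,
    allow_fetching_images=False,  # skip the network fetch during counting
)
```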
Workarounds
If you cannot upgrade immediately:
- Sanitize input: Validate and filter image_url values before passing messages to token counting or model invocation (see the sketch after this list)
- Use network controls: Implement egress filtering to prevent outbound requests to private IPs
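A minimal sketch of the sanitize-input workaround using only the standard library. It is illustrative rather than a complete SSRF defense (it does not handle redirects or DNS rebinding after the check), and the helper names are invented here.

```python
# Reject image_url values whose host resolves to a non-global address before
# messages reach token counting or model invocation. Illustrative only.
import ipaddress
import socket
from urllib.parse import urlparse


def is_safe_image_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme == "data":  # inline base64 images involve no network fetch
        return True
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        resolved = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for _family, _type, _proto, _canon, sockaddr in resolved:
        addr = ipaddress.ip_address(sockaddr[0])
        if not addr.is_global:  # private, loopback, link-local, reserved
            return False
    return True


def filter_image_blocks(content: list) -> list:
    """Drop image_url blocks that fail validation; keep everything else."""
    kept = []
    for block in content:
        if isinstance(block, dict) and block.get("type") == "image_url":
            image_ref = block.get("image_url")
            url = image_ref.get("url", "") if isinstance(image_ref, dict) else str(image_ref or "")
            if not is_safe_image_url(url):
                continue  # drop the unsafe block
        kept.append(block)
    return kept
```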
References