Agent API Automatically Removes LLM think Tags #13091
I’ve found a flaw in RAGFlow: during the final message reply stage when creating an Agent, it automatically deletes the `<think>` tags generated by the LLM.
Replies: 2 comments
Hi @wangchoufei! I'm Dosu and I’m helping the ragflow team.

This is a known issue in RAGFlow. The Agent API often removes or loses `<think>` tags from LLM output due to a combination of intentional output processing and bugs in the stream parser. Specifically, if the final chunk ends with `</think>`, the closing tag may be dropped, so the reasoning content is lost in API responses but still appears in the UI preview, which handles these tags differently.

There’s no built-in agent-level switch to control this; tag removal is typically done manually in code (for example, using regex in the Generate component) and is not user-configurable in the UI. If you want to preserve `<think>` tags for third-party clients, you’ll need to adjust the backend logic that assembles the API response, such as modifying the streaming output in the agent components to merge or retain the tags as needed. Upgrading to the latest version or using the nightly Docker image may resolve some related bugs, as recent updates have improved output parsing and tag handling.

For more details and code examples, see this issue, this discussion, and this code reference.
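The chunk-boundary bug described above (a closing tag at the very end of a chunk getting dropped) can be avoided with a hold-back buffer that never emits a partial tag. A minimal Python sketch of the idea, not RAGFlow's actual code; the function name and chunk shapes are hypothetical:

```python
def stream_with_tags(chunks):
    """Yield streamed text without ever splitting a <think> or </think>
    tag across an emitted boundary. If a chunk ends in a prefix of a tag,
    hold that prefix back and prepend it to the next chunk."""
    TAGS = ("<think>", "</think>")
    held = ""
    for chunk in chunks:
        text = held + chunk
        held = ""
        for tag in TAGS:
            # Check longest tag prefixes first
            for i in range(len(tag) - 1, 0, -1):
                if text.endswith(tag[:i]):
                    held = tag[:i]
                    text = text[:-i]
                    break
            if held:
                break
        yield text
    if held:
        yield held  # flush any remainder when the stream ends
```

Because the held-back text is always flushed at the end, the concatenation of the yielded pieces equals the concatenation of the input chunks, so nothing is lost even when the final chunk ends exactly with `</think>`.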
Agent API removing `<think>` tags is common default behavior.

Why it happens: most frameworks treat `<think>` content as internal reasoning and strip it before assembling the final reply.

If you WANT to keep them:

```python
# Option 1: Custom output parser
def preserve_think_tags(response):
    # Don't strip <think> tags
    return response.content  # Raw, with tags

# Option 2: Config flag (if available; this key is hypothetical)
config = {
    "strip_think_tags": False
}

# Option 3: Different tag format
# Use <reasoning> instead of <think>
# May not be auto-stripped
```

Why you might want tags: debugging agent behavior, or surfacing the reasoning to end users for transparency.

Recommended approach (the extract/strip helpers are sketched here with a regex; `logger` is a standard `logging` logger):

```python
import logging
import re

logger = logging.getLogger(__name__)
THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def handle(raw_response):
    # Log thinking separately, return the reply without the tags
    if "<think>" in raw_response:
        logger.info("Agent thinking: %s", THINK_RE.search(raw_response).group())
    return THINK_RE.sub("", raw_response).strip()
```

Alternative: We handle agent transparency at RevolutionAI. What's your use case, debugging or user-facing?
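For a third-party client that does receive the raw output with tags intact, splitting the reasoning from the answer is a one-liner with a regex. A hedged sketch, assuming the reasoning is wrapped in a single `<think>…</think>` block; the helper name is my own:

```python
import re

def split_reasoning(text):
    """Split an LLM reply into (reasoning, answer).

    Returns ("", text) unchanged when no <think> block is present.
    """
    m = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    if not m:
        return "", text
    reasoning = m.group(1).strip()
    answer = (text[:m.start()] + text[m.end():]).strip()
    return reasoning, answer
```

This lets the client render the reasoning in a collapsible panel (as the RAGFlow UI preview does) while showing only the answer by default.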