What problem does this feature solve?
At the moment, MIDSCENE_MODEL_REASONING_ENABLED=false only adds the "enable_thinking": false parameter for Qwen models.
This works when calling the Alibaba Cloud Model Studio API, but it has no effect on self-hosted open-source Qwen models. For those, an extra parameter is required:
"chat_template_kwargs": {"enable_thinking": false}
What does the proposed API look like?
No API change is needed; this only requires additional internal logic.
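The internal logic could look like the sketch below: when reasoning is disabled for a Qwen model, emit both parameter forms so the same request works against Model Studio and self-hosted servers. The function name and signature are hypothetical; only the parameter names come from this issue.

```python
def build_reasoning_params(reasoning_enabled: bool, is_qwen: bool) -> dict:
    """Sketch of the proposed internal logic (names are hypothetical).

    When reasoning is disabled for a Qwen model, return both the flat
    parameter honored by Alibaba Cloud Model Studio and the
    chat_template_kwargs form required by self-hosted servers.
    """
    if reasoning_enabled or not is_qwen:
        return {}
    return {
        # honored by the Alibaba Cloud Model Studio API
        "enable_thinking": False,
        # honored by self-hosted open-source Qwen deployments
        "chat_template_kwargs": {"enable_thinking": False},
    }
```

Sending both keys is a design choice: servers that recognize one form should ignore the other, so a single code path covers both deployment targets.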