
Conversation

Contributor

@aquagull commented Nov 27, 2025

Motivation

none

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

  1. Added conditional conversion of tasks to tensors in the ZMQ→scheduler path when FD_ENABLE_E2W_TENSOR_CONVERT is enabled (see the sketch after this list).
  2. Updated the scheduling logic to allow multiple multimodal prefill requests when FD_ENABLE_MAX_PREFILL is enabled.
  3. Modified extract_vision_features_qwen to handle batched image tensors.
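
The diff itself is not reproduced in this description, but the first change amounts to an environment-gated conversion step in the engine client before tasks are sent over ZMQ. The snippet below is only a rough sketch: the helper name maybe_convert_task_to_tensor and the multimodal_inputs/images task fields are assumptions for illustration, not FastDeploy's actual API.

```python
import os

import numpy as np
import paddle

# Hypothetical sketch, not the actual FastDeploy implementation: gate the
# engine->worker (E2W) tensor conversion on an environment flag before the
# task is pushed to the scheduler over ZMQ.
FD_ENABLE_E2W_TENSOR_CONVERT = os.getenv("FD_ENABLE_E2W_TENSOR_CONVERT", "0") == "1"


def maybe_convert_task_to_tensor(task: dict) -> dict:
    """Convert numpy image arrays in a task dict to paddle tensors when the
    flag is set; the "multimodal_inputs"/"images" keys are assumed names."""
    if not FD_ENABLE_E2W_TENSOR_CONVERT:
        return task
    images = task.get("multimodal_inputs", {}).get("images")
    if isinstance(images, np.ndarray):
        task["multimodal_inputs"]["images"] = paddle.to_tensor(images)
    return task
```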

Usage or Command

export FD_ENABLE_E2W_TENSOR_CONVERT=1
export FD_ENABLE_MAX_PREFILL=1
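
With both flags exported, the scheduler change described above effectively relaxes a one-multimodal-prefill-at-a-time restriction. The following is a minimal sketch of such a gate under that assumption; the PendingRequest shape and the helper name are hypothetical and do not mirror resource_manager_v1's real code.

```python
import os
from dataclasses import dataclass

# Flag controlling whether multiple multimodal prefills may share a batch.
FD_ENABLE_MAX_PREFILL = os.getenv("FD_ENABLE_MAX_PREFILL", "0") == "1"


@dataclass
class PendingRequest:
    # Hypothetical request shape used only for this illustration.
    request_id: str
    is_multimodal: bool


def can_add_to_prefill_batch(req: PendingRequest, num_mm_prefill_in_batch: int) -> bool:
    """Without the flag, at most one multimodal request joins a prefill batch;
    with FD_ENABLE_MAX_PREFILL=1, multiple multimodal prefills may be batched."""
    if not req.is_multimodal:
        return True
    if FD_ENABLE_MAX_PREFILL:
        return True
    return num_mm_prefill_in_batch == 0
```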

Accuracy Tests

none

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests, or state the reason in this PR if none are added.
  • Provide accuracy results.
  • If the current PR targets a release branch, make sure it has first been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

Copilot AI review requested due to automatic review settings November 27, 2025 08:15

paddle-bot bot commented Nov 27, 2025

Thanks for your contribution!

Copilot finished reviewing on behalf of aquagull November 27, 2025 08:17
Contributor

Copilot AI left a comment


Pull request overview

This PR adds multi-batch prefill support for Qwen2.5-VL models to improve throughput and efficiency. The implementation introduces conditional logic controlled by environment variables FD_ENABLE_MAX_PREFILL and FD_ENABLE_E2W_TENSOR_CONVERT.

  • Enables multi-batch prefill scheduling for multimodal requests when FD_ENABLE_MAX_PREFILL is set
  • Adds tensor conversion in the zmq→scheduler pipeline when FD_ENABLE_E2W_TENSOR_CONVERT is enabled
  • Updates vision feature extraction to handle batched inputs for Qwen models

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

Changed files:
  • fastdeploy/worker/gpu_model_runner.py: modified _apply_mm_inputs to conditionally batch vision inputs and updated extract_vision_features_qwen to handle batched image tensors (sketched below).
  • fastdeploy/entrypoints/engine_client.py: added conditional tensor conversion in _send_task before sending tasks via ZMQ.
  • fastdeploy/engine/sched/resource_manager_v1.py: updated scheduling logic to allow multiple multimodal prefill requests when max prefill is enabled.
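
For the gpu_model_runner.py change, handling batched image tensors typically means running the vision tower once over concatenated inputs and splitting features back per request. The sketch below is hypothetical: the function name, the vision_model call signature, and the one-feature-row-per-patch assumption are not taken from the actual extract_vision_features_qwen code.

```python
import paddle


def extract_vision_features_batched(vision_model, pixel_values_list, grid_thw_list):
    """Hypothetical sketch: concatenate per-request image tensors, run the
    vision tower once, then split features back per request so multiple
    multimodal prefills share one forward pass."""
    lengths = [int(pv.shape[0]) for pv in pixel_values_list]
    pixel_values = paddle.concat(pixel_values_list, axis=0)
    grid_thw = paddle.concat(grid_thw_list, axis=0)
    features = vision_model(pixel_values, grid_thw)  # assumed call signature
    # Assumption: one output row per input patch; real Qwen2.5-VL merges
    # patches, so per-request output lengths would need to account for that.
    return paddle.split(features, lengths, axis=0)
```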

@codecov-commenter

Codecov Report

❌ Patch coverage is 16.66667% with 10 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@ce9a49f). Learn more about missing BASE report.

Files with missing lines (patch coverage):
  • fastdeploy/worker/gpu_model_runner.py: 22.22% (4 lines missing, 3 partials) ⚠️
  • fastdeploy/entrypoints/engine_client.py: 0.00% (2 lines missing) ⚠️
  • fastdeploy/engine/sched/resource_manager_v1.py: 0.00% (1 partial) ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #5269   +/-   ##
==========================================
  Coverage           ?   60.12%           
==========================================
  Files              ?      320           
  Lines              ?    39061           
  Branches           ?     5875           
==========================================
  Hits               ?    23487           
  Misses             ?    13718           
  Partials           ?     1856           
Flag Coverage Δ
GPU 60.12% <16.66%> (?)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.
