
Conversation

@kevincheng2 (Collaborator)

Motivation

💡 If this PR is a cherry-pick, the PR title needs to follow the format: add the [Cherry-Pick] label at the very beginning and append the original PR ID at the end, e.g. [Cherry-Pick][CI] Add check trigger and logic(#5191)


Fix the cuda error 700 that occurs when multi-modal prefix cache and CUDAGraph are enabled at the same time.

Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests. If no unit tests are added, please explain why in this PR.
  • Provide accuracy results.
  • If the current PR targets a release branch, make sure it has first been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot

paddle-bot bot commented Nov 27, 2025

Thanks for your contribution!

@yuanlehome yuanlehome requested a review from Copilot November 27, 2025 06:26
Copilot finished reviewing on behalf of yuanlehome November 27, 2025 06:28
Copilot AI (Contributor) left a comment


Pull request overview

This PR fixes the cuda error 700 issue that occurred when a multi-modal model enabled prefix cache and CUDAGraph at the same time. By removing the code that previously forced prefix caching to be disabled, the two features can now work together.

Key Changes

  • Removed the logic in fastdeploy/config.py that forcibly disabled prefix caching when multi-modal and CUDAGraph were enabled together (the removed lines are quoted in the review comments below; a sketch of the resulting branch follows)
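
Assuming the surrounding post-init branch in fastdeploy/config.py is otherwise unchanged, the code after this removal would look roughly like this (a sketch based on the diff quoted below, not the verified merged code):

    else:
        # It will hang when real batch_size < tp_size
        self.graph_opt_config.filter_capture_size(tp_size=self.parallel_config.tensor_parallel_size)
        # The block that forced enable_prefix_caching to False for
        # multi-modal models under CUDAGraph is removed, so a config with
        # enable_mm, use_cudagraph, and enable_prefix_caching all enabled
        # now passes through with prefix caching intact.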

Comment on lines 1690 to 1693
    else:
        # It will hang when real batch_size < tp_size
        self.graph_opt_config.filter_capture_size(tp_size=self.parallel_config.tensor_parallel_size)
        if self.model_config.enable_mm and self.graph_opt_config.use_cudagraph:
            self.cache_config.enable_prefix_caching = False
            logger.info("Multi-modal models do not support prefix caching when using CUDAGraph!")

Copilot AI Nov 27, 2025

It is suggested to add more detail to the PR description:

  1. Root cause analysis: why did the cuda error 700 occur before, and what change fixed it?
  2. Explanation of the fix: why does removing this restriction solve the problem? Are there changes elsewhere that work together with this fix?
  3. Test verification: how was this fix validated? Was it tested under specific configurations?

A more complete description helps future maintainers understand the background and rationale of this change.

Comment on lines 1690 to 1693 (same code as quoted above)

Copilot AI Nov 27, 2025

This bugfix removes a restriction that was preventing the combination of multi-modal models, prefix caching, and CUDAGraph. While there are existing tests for multi-modal with CUDAGraph (e.g., test_paddleocr_vl_serving.py, test_EB_VL_Lite_sot_serving.py), there doesn't appear to be a test that specifically validates this three-way combination (enable_mm + use_cudagraph + enable_prefix_caching) to prevent regression of the cuda error 700 issue.

Consider adding a test case that explicitly enables all three features together to ensure this bugfix works as expected and to prevent future regressions.
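
A minimal, self-contained sketch of such a regression check (the Stub* classes and post_init helper below are illustrative stand-ins, not FastDeploy's real FDConfig types; a real test would construct the actual config objects from fastdeploy.config):

    from dataclasses import dataclass

    @dataclass
    class StubModelConfig:
        enable_mm: bool = True

    @dataclass
    class StubGraphOptConfig:
        use_cudagraph: bool = True

    @dataclass
    class StubCacheConfig:
        enable_prefix_caching: bool = True

    def post_init(model, graph_opt, cache):
        # Mirrors the fixed behaviour: prefix caching is no longer
        # forcibly disabled for multi-modal models under CUDAGraph.
        return cache

    def test_mm_cudagraph_prefix_caching_stays_enabled():
        cache = post_init(StubModelConfig(), StubGraphOptConfig(), StubCacheConfig())
        # Before the fix, this flag would have been flipped to False.
        assert cache.enable_prefix_caching is True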

@codecov-commenter

Codecov Report

✅ All modified and coverable lines are covered by tests.
⚠️ Please upload report for BASE (develop@84c7fa4); the BASE report is missing.

Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #5266   +/-   ##
==========================================
  Coverage           ?   61.01%           
==========================================
  Files              ?      317           
  Lines              ?    38799           
  Branches           ?     5846           
==========================================
  Hits               ?    23673           
  Misses             ?    13263           
  Partials           ?     1863           
Flag | Coverage Δ
GPU  | 61.01% <ø> (?)

Flags with carried forward coverage won't be shown.

