Add profiling multimodal model step and fix the OOM bug when profilin… #1408
base: main
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #1408      +/-   ##
==========================================
+ Coverage   27.39%   34.14%   +6.75%
==========================================
  Files          56       63       +7
  Lines        6191     7315    +1124
==========================================
+ Hits         1696     2498     +802
- Misses       4495     4817     +322
Please do a rebase.
Force-pushed from 0d13396 to 4d92137.
Need code review @Yikun @wangxiyuan @yiz-liu @shen-shanshan. By the way, why didn't the accuracy test trigger after I added the label?
You should first add … The previous job link: …
This pull request has conflicts, please resolve those before we can evaluate the pull request.
LGTM.
Force-pushed from 292d018 to 4d89890.
This pull request has conflicts, please resolve those before we can evaluate the pull request.
Add profiling multimodal model step and fix the OOM bug when profiling the multimodal model. Signed-off-by: ChenTaoyu-SJTU <[email protected]>
Force-pushed from afb3a20 to e285e6c.
What this PR does / why we need it?
This PR adds the multimodal model profiling step. It also fixes the out-of-memory (OOM) bug when profiling multimodal models (with a solution that might be temporary).
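For context, here is a minimal sketch of what such a profiling step does; this is an assumption-laden illustration, not the vLLM-Ascend implementation (`TinyMultimodalModel`, `profile_run`, and the input shapes are made up). The idea is to run one worst-case dummy forward pass and record the peak device memory.

```python
import torch
import torch.nn as nn


# Toy stand-in for a multimodal model: a small "vision encoder" plus a projector.
# Purely illustrative; not the vLLM-Ascend or Qwen2.5-VL code.
class TinyMultimodalModel(nn.Module):
    def __init__(self, hidden: int = 256, num_layers: int = 8):
        super().__init__()
        self.vision_encoder = nn.Sequential(
            *[nn.Linear(hidden, hidden) for _ in range(num_layers)]
        )
        self.projector = nn.Linear(hidden, hidden)

    def forward(self, pixel_embeds: torch.Tensor) -> torch.Tensor:
        return self.projector(self.vision_encoder(pixel_embeds))


def profile_run(model: nn.Module, device: torch.device) -> None:
    """One dummy forward pass with worst-case input sizes to estimate peak memory."""
    dummy = torch.randn(1, 1024, 256, device=device)  # fake max-size image patches
    if device.type == "cuda":
        torch.cuda.reset_peak_memory_stats(device)
    with torch.inference_mode():  # see the OOM discussion below
        model(dummy)
    if device.type == "cuda":
        peak_mib = torch.cuda.max_memory_allocated(device) / 1024**2
        print(f"peak memory during profiling: {peak_mib:.1f} MiB")


if __name__ == "__main__":
    dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    profile_run(TinyMultimodalModel().to(dev), dev)
```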
The reason this approach is temporary is that on the CUDA platform in upstream vLLM, Qwen2.5-VL's ViT layers (i.e., the encoder layers) do not run out of memory even without `@torch.inference_mode()`. However, under the current PyTorch and Ascend hardware environment, forward propagation through the ViT's blocks (over 30 layers in total) increases memory by approximately 1.2 GB per layer, which is unacceptable. After adding `@torch.inference_mode()`, the issue disappears and the memory growth becomes similar to the pattern observed on the CUDA platform. The underlying cause is currently unknown and needs more research.
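A rough sketch of the workaround follows. It is again illustrative and hedged: `ToyViTBlockStack` and its sizes are invented rather than Qwen2.5-VL's actual ViT, and since the root cause is still unknown this only shows the mechanism being applied, not an explanation of it. The dummy encoder forward used during profiling is wrapped in `torch.inference_mode()` so that no autograd bookkeeping is kept while iterating over the blocks.

```python
import torch
import torch.nn as nn


# Illustrative toy encoder stack; layer count and sizes are made up, not Qwen2.5-VL's ViT.
class ToyViTBlockStack(nn.Module):
    def __init__(self, num_layers: int = 32, hidden: int = 128):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
             for _ in range(num_layers)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)
        return x


# With the decorator, no gradient/version-counter bookkeeping is recorded during the
# dummy profiling pass; on Ascend this avoided the ~1.2 GB per-layer memory growth
# reported above.
@torch.inference_mode()
def dummy_encoder_forward(encoder: nn.Module, patches: torch.Tensor) -> torch.Tensor:
    return encoder(patches)


if __name__ == "__main__":
    encoder = ToyViTBlockStack(num_layers=4)  # kept small so the example runs quickly
    patches = torch.randn(1, 64, 128)         # fake patch embeddings
    dummy_encoder_forward(encoder, patches)
```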
Does this PR introduce any user-facing change?
no
How was this patch tested?
By passing the CI.