Pull requests: HabanaAI/vllm-hpu-extension
- Add dynamic_quant_for_gaudi2.py script to convert model (#387, opened Oct 29, 2025 by wenbinc-Bin)
- [SW-238300] Disabling dynamic quantization in mlp module (#383, opened Oct 26, 2025 by HolyFalafel)
- pass chunk_size and global_num_experts to the MoE kernel (#369, opened Sep 19, 2025 by yangulei)
- Allow usage of fused_block_softmax_adjustment for Qwen with Lazy (#246, opened Jun 27, 2025 by mswiniarsk) — Draft
- [SW-225565] Enable triangular softmax with merged prefill (#197, opened May 26, 2025 by kamil-kaczor) — Draft