
Conversation

@odjuricicTT
Contributor

Ticket

N/A

Problem description

The vLLM plugin needed support for configuring the tt-mlir optimization level through TTConfig, similar to how enable_const_eval is propagated.

What's changed

  • Added an optimization_level field to the TTConfig dataclass with a default value of 0
  • Propagated optimization_level through get_pjrt_compile_config() to torch_xla.set_custom_compile_options()
  • Set optimization_level = 1 for Qwen models in the batched inference pooling test (see the sketch below)
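A minimal sketch of how these pieces fit together, based only on the names mentioned above. The enable_const_eval default, the shape of the dict returned by get_pjrt_compile_config(), and the exact argument accepted by torch_xla.set_custom_compile_options() are assumptions for illustration, not the PR's actual implementation:

```python
from dataclasses import dataclass

import torch_xla  # requires the torch_xla package


@dataclass
class TTConfig:
    # Existing field, propagated the same way (default value assumed).
    enable_const_eval: bool = False
    # New field added by this PR: tt-mlir optimization level, default 0.
    optimization_level: int = 0

    def get_pjrt_compile_config(self) -> dict:
        # The exact option keys are assumptions; the PR only states that
        # optimization_level is propagated alongside enable_const_eval.
        return {
            "enable_const_eval": self.enable_const_eval,
            "optimization_level": self.optimization_level,
        }


# Usage as implied by the PR description, e.g. raising the optimization
# level to 1 for Qwen models in the batched inference pooling test:
config = TTConfig(optimization_level=1)
torch_xla.set_custom_compile_options(config.get_pjrt_compile_config())
```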

Checklist

  • New/Existing tests provide coverage for changes

@codecov-commenter

codecov-commenter commented Dec 23, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 28.51%. Comparing base (e3d9541) to head (92492fb).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2663   +/-   ##
=======================================
  Coverage   28.51%   28.51%           
=======================================
  Files          31       31           
  Lines        4075     4075           
=======================================
  Hits         1162     1162           
  Misses       2913     2913           


@odjuricicTT odjuricicTT force-pushed the odjuricic/vllm-optimization-level branch 2 times, most recently from c9eac9c to 4043be2 on December 23, 2025 at 15:45
- Add optimization_level field to TTConfig with default value 0
- Propagate optimization_level through get_pjrt_compile_config()
- Set optimization_level = 1 for Qwen models in batched inference test
@odjuricicTT odjuricicTT force-pushed the odjuricic/vllm-optimization-level branch from 4043be2 to af5f45b on December 23, 2025 at 15:52
@odjuricicTT odjuricicTT enabled auto-merge (squash) December 26, 2025 15:45
@odjuricicTT odjuricicTT merged commit efaac38 into main Jan 15, 2026
48 checks passed
@odjuricicTT odjuricicTT deleted the odjuricic/vllm-optimization-level branch January 15, 2026 10:21