[Bugfix][Nixl] Fix full prefix cache hit bug #18632
Merged
Mergify / Summary: succeeded on Jun 5, 2025 in 0s
2 rules match and 11 potential rules
Rule: label-documentation (label)
- any of:
  - files~=^[^/]+\.md$
  - files~=^docs/
  - files~=^examples/
Rule: label-ci-build (label)
- any of:
  - files=CMakeLists.txt
  - files=setup.py
  - files~=\.buildkite/
  - files~=^\.github/
  - files~=^cmake/
  - files~=^docker/Dockerfile
  - files~=^requirements.*\.txt
Rule: label-frontend (label)
- files~=^vllm/entrypoints/
Rule: label-multi-modality (label)
- any of:
  - files=tests/models/test_vision.py
  - files~=^tests/models/*/audio_language/
  - files~=^tests/models/*/vision_language/
  - files~=^tests/models/multimodal/
  - files~=^tests/multimodal/
  - files~=^vllm/multimodal/
Rule: label-structured-output (label)
- any of:
  - files=benchmarks/benchmark_serving_structured_output.py
  - files=benchmarks/run_structured_output_benchmark.sh
  - files=docs/features/structured_outputs.md
  - files=examples/offline_inference/structured_outputs.py
  - files=examples/online_serving/openai_chat_completion_structured_outputs.py
  - files=examples/online_serving/openai_chat_completion_structured_outputs_with_reasoning.py
  - files=tests/entrypoints/llm/test_guided_generate.py
  - files=tests/model_executor/test_guided_processors.py
  - files=tests/v1/entrypoints/llm/test_guided_generate.py
  - files~=^benchmarks/structured_schemas/
  - files~=^tests/v1/structured_output/
  - files~=^vllm/model_executor/guided_decoding/
  - files~=^vllm/v1/structured_output/
Rule: label-speculative-decoding (label)
- any of:
  - files=vllm/model_executor/layers/spec_decode_base_sampler.py
  - files~=^tests/spec_decode/
  - files~=^vllm/spec_decode/
✅ Rule: label-v1 (label)
- any of:
  - files~=^tests/v1/
  - files~=^vllm/v1/
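The label-v1 rule above is the first of the two that matched this PR. As a hedged sketch of how Mergify's `files~=` operator behaves, the snippet below applies the rule's two regexes to a list of changed file paths; an "any of" rule fires when any file matches any pattern. The file paths used here are hypothetical examples, not this PR's actual changed files.

```python
import re

# Patterns copied from the label-v1 rule in the summary above.
V1_PATTERNS = [r"^tests/v1/", r"^vllm/v1/"]

def rule_matches(changed_files, patterns):
    """'any of' semantics: true if any changed file matches any pattern."""
    return any(re.search(p, f) for f in changed_files for p in patterns)

# Hypothetical paths for illustration only:
print(rule_matches(["vllm/v1/core/kv_cache_manager.py"], V1_PATTERNS))  # True
print(rule_matches(["docs/index.md"], V1_PATTERNS))                     # False
```

Because every pattern is anchored with `^`, `re.search` and `re.match` behave identically here; a path under, say, `foo/vllm/v1/` would not match.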
Rule: label-tpu (label)
- any of:
  - files~=/tpu/
  - files~=_tpu
  - files~=pallas
  - files~=tpu.py
  - files~=tpu_
✅ Rule: label-tpu-remove (label)
- all of:
  - -files~=/tpu/
  - -files~=_tpu
  - -files~=pallas
  - -files~=tpu.py
  - -files~=tpu_
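label-tpu-remove, the second matched rule, inverts the label-tpu conditions: each `-files~=` negates a pattern, and "all of" requires every negation to hold, i.e. no changed file may touch anything TPU-related. A hedged sketch of that logic, again with hypothetical file paths:

```python
import re

# Patterns copied from the label-tpu / label-tpu-remove rules above.
TPU_PATTERNS = [r"/tpu/", r"_tpu", r"pallas", r"tpu\.py", r"tpu_"]

def tpu_remove_matches(changed_files):
    """'all of' over negated conditions: no file matches any TPU pattern."""
    return all(
        not re.search(p, f) for f in changed_files for p in TPU_PATTERNS
    )

# Hypothetical paths for illustration only:
print(tpu_remove_matches(["vllm/v1/core/sched.py"]))        # True
print(tpu_remove_matches(["vllm/v1/worker/tpu_model.py"]))  # False
```

Pairing a rule with its negated twin this way lets Mergify both add the label when TPU files appear and remove it again if a later push drops them.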
Rule: label-tool-calling (label)
- any of:
  - files=docs/features/tool_calling.md
  - files=examples/offline_inference/chat_with_tools.py
  - files=examples/online_serving/openai_chat_completion_client_with_tools.py
  - files=examples/online_serving/openai_chat_completion_client_with_tools_required.py
  - files=examples/online_serving/openai_chat_completion_tool_calls_with_reasoning.py
  - files=tests/entrypoints/openai/test_chat_with_tool_reasoning.py
  - files~=^examples/tool_chat_*
  - files~=^tests/entrypoints/openai/tool_parsers/
  - files~=^tests/mistral_tool_use/
  - files~=^tests/tool_use/
  - files~=^vllm/entrypoints/openai/tool_parsers/
Rule: ping author on conflicts and add 'needs-rebase' label (comment, label)
- -closed
- conflict
Rule: assign reviewer for tensorizer changes (assign)
- files~=^tests/entrypoints/openai/test_tensorizer_entrypoint.py
- files~=^tests/tensorizer_loader/
- files~=^vllm/model_executor/model_loader/tensorizer.py
- files~=^vllm/model_executor/model_loader/tensorizer_loader.py
Rule: remove 'needs-rebase' label when conflict is resolved (label)
- -closed
- -conflict
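Rules like the ones summarized above are typically declared in a `.mergify.yml` at the repository root. The fragment below is a hedged sketch of how the matched label-v1 rule could be written, following Mergify's documented `pull_request_rules` schema; the label name `v1` is an assumption, and the actual vLLM configuration may differ in detail.

```yaml
pull_request_rules:
  - name: label-v1
    conditions:
      - or:
          - files~=^tests/v1/
          - files~=^vllm/v1/
    actions:
      label:
        add:
          - v1   # assumed label name, for illustration
```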
💖 Mergify is proud to provide this service for free to open source projects.
🚀 You can help us by becoming a sponsor!
Mergify commands and options
More conditions and actions can be found in the documentation.
You can also trigger Mergify actions by commenting on this pull request:
- @Mergifyio refresh will re-evaluate the rules
- @Mergifyio rebase will rebase this PR on its base branch
- @Mergifyio update will merge the base branch into this PR
- @Mergifyio backport <destination> will backport this PR on the <destination> branch
Additionally, on Mergify dashboard you can:
- look at your merge queues
- generate the Mergify configuration with the config editor.
Finally, you can contact us on https://mergify.com