
Commit 1900335

gshtrass, sunyicode0012, Unprincess17, aws-satyajith, and luccafong authored
Upstream merge 2025 05 27 (#557)
* Add files via upload: Add fused MoE kernel tuning configs (fp8_w8a8) for DeepSeek V3/R1 on a single-node 8x NVIDIA H20 96GB setup (vllm-project#18337)
* [Misc] Fix typo (vllm-project#18330)
* Neuron up mistral (vllm-project#18222) Signed-off-by: Satyajith Chilappagari <[email protected]>
* fix CUDA_check redefinition in vllm-project#17918 (vllm-project#18287) Signed-off-by: Lucia Fang <[email protected]> Co-authored-by: Lucia (Lu) Fang <[email protected]>
* [neuron] fix authorization issue (vllm-project#18364) Signed-off-by: Liangfu Chen <[email protected]>
* [Misc] Allow `AutoWeightsLoader` to skip loading weights with specific substr in name (vllm-project#18358) Signed-off-by: Isotr0py <[email protected]>
* [Core] [Bugfix]: tensor parallel with prompt embeds (vllm-project#18171) Signed-off-by: Nan2018 <[email protected]> Co-authored-by: Andrew Sansom <[email protected]>
* [release] Change dockerhub username for TPU release (vllm-project#18389)
* [Bugfix] fix adding bias twice in ipex GPTQ quantization (vllm-project#18363) Signed-off-by: rand-fly <[email protected]>
* [doc] update env variable export (vllm-project#18391) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [Misc] Add LoRA code owner (vllm-project#18387) Signed-off-by: Jee Jee Li <[email protected]>
* Update cpu.txt (vllm-project#18398) Signed-off-by: 汪志鹏 <[email protected]>
* [CI] Add mteb testing to test the accuracy of the embedding model (vllm-project#17175)
* [Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text (vllm-project#18407) Co-authored-by: 松灵 <[email protected]>
* [Misc] refactor prompt embedding examples (vllm-project#18405) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [Minor] Rename quantization nvfp4 to modelopt_fp4 (vllm-project#18356) Signed-off-by: mgoin <[email protected]>
* [Model] use AutoWeightsLoader for bloom (vllm-project#18300) Signed-off-by: calvin chen <[email protected]>
* [Kernel] update comment for KV shape in unified triton attn (vllm-project#18099) Signed-off-by: haochengxia <[email protected]>
* fix: Build torch wheel inline rather than picking from nightly (vllm-project#18351) Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
* [TPU] Re-enable the Pallas MoE kernel (vllm-project#18025) Signed-off-by: Michael Goin <[email protected]>
* [Bugfix] config.head_dim is now explicitly set to None (vllm-project#18432) Signed-off-by: Gregory Shtrasberg <[email protected]>
* [Bug] Fix moe_sum signature (vllm-project#18440) Signed-off-by: Bill Nell <[email protected]>
* Revert "[Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text (vllm-project#18407)" (vllm-project#18456) Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Failing Test] Fix nixl connector test when prompt size < block size (vllm-project#18429) Signed-off-by: wwl2755 <[email protected]>
* [Misc] MultiConnector._connectors type (vllm-project#18423) Signed-off-by: nicklucche <[email protected]>
* [Frontend] deprecate `--device` arg (vllm-project#18399) Signed-off-by: Kebe <[email protected]>
* [V1] Fix general plugins not loaded in engine for multiproc (vllm-project#18326) Signed-off-by: Yong Hoon Shin <[email protected]>
* [Misc] refactor disaggregated-prefill-v1 example (vllm-project#18474) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [Bugfix][Failing Test] Fix test_events.py (vllm-project#18460) Signed-off-by: rabi <[email protected]>
* [MODEL] FalconH1 (vllm-project#18406) Signed-off-by: dhia.rhaiem <[email protected]> Co-authored-by: younesbelkada <[email protected]> Co-authored-by: Ilyas Chahed <[email protected]> Co-authored-by: Jingwei Zuo <[email protected]>
* [Doc] fix arg docstring in linear layers (vllm-project#18410) Signed-off-by: giantcroc <[email protected]>
* [Bugfix] Reduce moe_sum test size to avoid OOM (vllm-project#18484) Signed-off-by: Bill Nell <[email protected]>
* [Build] fix Dockerfile shell (vllm-project#18402)
* [Misc] Update deprecation message for `--enable-reasoning` (vllm-project#18404)
* [ROCm][Kernel][V1] Enable AMD Radeon GPU Custom Paged Attention on v1 (vllm-project#17004) Signed-off-by: Hosang Yoon <[email protected]>
* Remove incorrect env value
* Revert "[v1] Support multiple KV cache groups in GPU model runner (vllm-project#17945)" (vllm-project#18459) Signed-off-by: Mark McLoughlin <[email protected]>
* [FEAT][ROCm] Upgrade AITER MLA v1 backend (vllm-project#18338) Signed-off-by: vllmellm <[email protected]> Co-authored-by: Luka Govedič <[email protected]>
* [Bugfix] Consistent ascii handling in tool parsers (vllm-project#17704) Signed-off-by: Sebastian Schönnenbeck <[email protected]>
* [FalconH1] Fix output dtype in RMSNorm fallback path for Falcon-H1 (e.g. 0.5B) (vllm-project#18500) Signed-off-by: dhia.rhaiem <[email protected]> Co-authored-by: younesbelkada <[email protected]> Co-authored-by: Ilyas Chahed <[email protected]> Co-authored-by: Jingwei Zuo <[email protected]>
* [MISC] update project urls in pyproject.toml (vllm-project#18519) Signed-off-by: Andy Xie <[email protected]>
* [CI] Fix race condition with StatelessProcessGroup.barrier (vllm-project#18506) Signed-off-by: Russell Bryant <[email protected]>
* Initialize io_thread_pool attribute in the beginning. (vllm-project#18331) Signed-off-by: rabi <[email protected]>
* [Bugfix] Inconsistent token calculation compared to HF in llava family (vllm-project#18479) Signed-off-by: jaycha <[email protected]>
* [BugFix][DP] Send DP wave completion only from `dp_rank==0` (vllm-project#18502) Signed-off-by: Nick Hill <[email protected]> Co-authored-by: kourosh hakhamaneshi <[email protected]>
* [Bugfix][Model] Make Olmo2Model weight loading return loaded weights (vllm-project#18504) Signed-off-by: Shane A <[email protected]>
* [Bugfix] Fix LoRA test (vllm-project#18518) Signed-off-by: Jee Jee Li <[email protected]>
* [Doc] Fix invalid JSON in example args (vllm-project#18527) Signed-off-by: DarkLight1337 <[email protected]>
* [Neuron] Update Dockerfile.neuron to use latest neuron release (2.23) (vllm-project#18512) Signed-off-by: Satyajith Chilappagari <[email protected]>
* Update default neuron config for speculation (vllm-project#18274) Signed-off-by: Elaine Zhao <[email protected]> Co-authored-by: Shashwat Srijan <[email protected]> Co-authored-by: Aakash Shetty <[email protected]>
* Order sequence ids + config update to support specifying custom quantization layers (vllm-project#18279) Signed-off-by: Elaine Zhao <[email protected]> Co-authored-by: Tailin Pan <[email protected]> Co-authored-by: Rishabh Rajesh <[email protected]> Co-authored-by: Yishan McNabb <[email protected]> Co-authored-by: Patrick Lange <[email protected]> Co-authored-by: Maxwell Goldberg <[email protected]> Co-authored-by: Aakash Shetty <[email protected]>
* [Bugfix] Fix MRoPE Errors in the Qwen-VL Model When Processing Pure Text (vllm-project#18526) Co-authored-by: 松灵 <[email protected]> Co-authored-by: Cyrus Leung <[email protected]> Co-authored-by: DarkLight1337 <[email protected]>
* [Bugfix] Add kwargs to RequestOutput __init__ to be forward compatible (vllm-project#18513) Signed-off-by: Linkun <[email protected]>
* [CI/Build] Update bamba test model location (vllm-project#18544) Signed-off-by: Harry Mellor <[email protected]>
* [Doc] Support --stream arg in openai_completion_client.py script (vllm-project#18388) Signed-off-by: googs1025 <[email protected]>
* [Bugfix] Use random hidden states in dummy sampler run (vllm-project#18543) Signed-off-by: Bowen Wang <[email protected]>
* [Doc] Add stream flag for chat completion example (vllm-project#18524) Signed-off-by: calvin chen <[email protected]>
* [BugFix][CPU] Fix x86 SHM distributed module initialization (vllm-project#18536) Signed-off-by: jiang.li <[email protected]>
* [Misc] improve Automatic Prefix Caching example (vllm-project#18554) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [Misc] Call `ndarray.tobytes()` directly instead of `ndarray.data.tobytes()` (vllm-project#18347) Signed-off-by: Lukas Geiger <[email protected]>
* [Bugfix] make `test_openai_schema.py` pass (vllm-project#18224) Signed-off-by: David Xia <[email protected]> Co-authored-by: Harry Mellor <[email protected]>
* [Platform] Move platform check to right place (vllm-project#18470) Signed-off-by: wangxiyuan <[email protected]>
* [Compile][Platform] Make PiecewiseBackend pluggable and extendable (vllm-project#18076) Signed-off-by: Mengqing Cao <[email protected]> Co-authored-by: youkaichao <[email protected]>
* [Build/CI] Fix CUDA 11.8 build (vllm-project#17679) Signed-off-by: Tyler Michael Smith <[email protected]> Signed-off-by: Lucas Wilkinson <[email protected]> Signed-off-by: Tyler Michael Smith <[email protected]> Co-authored-by: Lucas Wilkinson <[email protected]>
* [Tool] Add NIXL installation script (vllm-project#18172) Signed-off-by: Linkun <[email protected]>
* [V1][Spec Decode][Bugfix] Load quantize weights for EAGLE (vllm-project#18290)
* [Frontend][Bug Fix] Update llama4 pythonic jinja template and llama4_pythonic parser (vllm-project#17917) Signed-off-by: Kai Wu <[email protected]>
* [Frontend] [Core] Add Tensorizer support for V1, LoRA adapter serialization and deserialization (vllm-project#17926) Signed-off-by: Sanger Steel <[email protected]>
* [AMD] [P/D] Compute num gpus for ROCm correctly in run_accuracy_test.sh (vllm-project#18568) Signed-off-by: Randall Smith <[email protected]>
* Re-submit: Fix: Proper RGBA -> RGB conversion for PIL images. (vllm-project#18569) Signed-off-by: Chenheli Hua <[email protected]>
* [V1][Spec Decoding] Use model_loader.get_model() to load models (vllm-project#18273) Signed-off-by: Mark McLoughlin <[email protected]>
* Enable hybrid attention models for Transformers backend (vllm-project#18494) Signed-off-by: Harry Mellor <[email protected]>
* [Misc] refactor: simplify input validation and num_requests handling in _convert_v1_inputs (vllm-project#18482) Signed-off-by: googs1025 <[email protected]>
* [BugFix] Increase TP execute_model timeout (vllm-project#18558) Signed-off-by: Nick Hill <[email protected]>
* [Bugfix] Set `KVTransferConfig.engine_id` in post_init (vllm-project#18576) Signed-off-by: Linkun Chen <[email protected]>
* [Spec Decode] Make EAGLE3 draft token ID mapping optional (vllm-project#18488) Signed-off-by: Benjamin Chislett <[email protected]> Co-authored-by: Woosuk Kwon <[email protected]>
* [Neuron] Remove bypass on EAGLEConfig and add a test (vllm-project#18514) Signed-off-by: Elaine Zhao <[email protected]>
* [Bugfix][Benchmarks] Fix a benchmark of deepspeed-mii backend to use api_key (vllm-project#17291) Signed-off-by: Teruaki Ishizaki <[email protected]>
* [Misc] Replace `cuda` hard code with `current_platform` (vllm-project#16983) Signed-off-by: shen-shanshan <[email protected]>
* [Hardware] correct method signatures for HPU, ROCm, XPU (vllm-project#18551) Signed-off-by: Andy Xie <[email protected]>
* [V1] [Bugfix] eagle bugfix and enable correct lm_head for multimodal (vllm-project#18034) Signed-off-by: Ronald Xu <[email protected]>
* [Feature] Add async tensor parallelism using compilation pass (vllm-project#17882) Signed-off-by: cascade812 <[email protected]>
* [Doc] Update quickstart and install for cu128 using `--torch-backend=auto` (vllm-project#18505) Signed-off-by: mgoin <[email protected]>
* [Feature][V1]: supports cached_tokens in response usage (vllm-project#18149) Co-authored-by: simon-mo <[email protected]>
* [Bugfix] Add half type support in reshape_and_cache_cpu_impl on x86 cpu platform (vllm-project#18430) Signed-off-by: Yuqi Zhang <[email protected]> Co-authored-by: Yuqi Zhang <[email protected]>
* Migrate docs from Sphinx to MkDocs (vllm-project#18145) Signed-off-by: Harry Mellor <[email protected]>
* Revert "[V1] [Bugfix] eagle bugfix and enable correct lm_head for multimodal (vllm-project#18034)" (vllm-project#18600) Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix][Model] Fix baichuan model loader for tp (vllm-project#18597) Signed-off-by: Mengqing Cao <[email protected]>
* [V0][Bugfix] Fix parallel sampling performance regression when guided decoding is enabled (vllm-project#17731) Signed-off-by: Madeesh Kannan <[email protected]> Co-authored-by: Russell Bryant <[email protected]>
* Add myself as docs code owner (vllm-project#18605) Signed-off-by: Harry Mellor <[email protected]>
* [Hardware][CPU] Update intel_extension_for_pytorch 2.7.0 and move to `requirements/cpu.txt` (vllm-project#18542) Signed-off-by: Kay Yan <[email protected]>
* [CI] fix kv_cache_type argument (vllm-project#18594) Signed-off-by: Andy Xie <[email protected]>
* [Doc] Fix indent of contributing to vllm (vllm-project#18611) Signed-off-by: Zerohertz <[email protected]>
* Replace `{func}` with mkdocs style links (vllm-project#18610) Signed-off-by: Harry Mellor <[email protected]>
* [CI/Build] Fix V1 flag being set in entrypoints tests (vllm-project#18598) Signed-off-by: DarkLight1337 <[email protected]>
* Fix examples with code blocks in docs (vllm-project#18609) Signed-off-by: Harry Mellor <[email protected]>
* [Bugfix] Fix transformers model impl ignored for mixtral quant (vllm-project#18602) Signed-off-by: Tristan Leclercq <[email protected]>
* Include private attributes in API documentation (vllm-project#18614) Signed-off-by: Harry Mellor <[email protected]>
* [Misc] add Haystack integration (vllm-project#18601) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [Bugfix][Build/CI] Fixup CUDA compiler version check for CUDA_SUPPORTED_ARCHS (vllm-project#18579)
* [Doc] Fix markdown list indentation for MkDocs rendering (vllm-project#18620) Signed-off-by: Zerohertz <[email protected]>
* [Doc] Use a different color for the announcement (vllm-project#18616) Signed-off-by: DarkLight1337 <[email protected]>
* Refactor pplx init logic to make it modular (prepare for deepep) (vllm-project#18200) Signed-off-by: youkaichao <[email protected]>
* Fix figures in design doc (vllm-project#18612) Signed-off-by: Harry Mellor <[email protected]>
* [Docs] Change mkdocs to not use directory urls (vllm-project#18622) Signed-off-by: mgoin <[email protected]>
* [v1] Redo "Support multiple KV cache groups in GPU model runner (vllm-project#17945)" (vllm-project#18593) Signed-off-by: Chen Zhang <[email protected]>
* [Doc] fix list formatting (vllm-project#18624) Signed-off-by: David Xia <[email protected]>
* [Doc] Fix top-level API links/docs (vllm-project#18621) Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Avoid documenting dynamic / internal modules (vllm-project#18626) Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Fix broken links and unlinked docs, add shortcuts to home sidebar (vllm-project#18627) Signed-off-by: DarkLight1337 <[email protected]>
* [V1] Support Deepseek MTP (vllm-project#18435) Signed-off-by: Rui Qiao <[email protected]> Signed-off-by: YaoJiayi <[email protected]> Co-authored-by: Rui Qiao <[email protected]>
* Use prebuilt FlashInfer x86_64 PyTorch 2.7 CUDA 12.8 wheel for CI (vllm-project#18537) Signed-off-by: Huy Do <[email protected]>
* [CI] Enable test_initialization to run on V1 (vllm-project#16736) Signed-off-by: mgoin <[email protected]>
* [Doc] Update references to doc files (vllm-project#18637) Signed-off-by: DarkLight1337 <[email protected]>
* [ModelOpt] Introduce VLLM_MAX_TOKENS_PER_EXPERT_FP4_MOE env var to control blockscale tensor allocation (vllm-project#18160) Signed-off-by: Pavani Majety <[email protected]>
* [Bugfix] Migrate to REGEX Library to prevent catastrophic backtracking (vllm-project#18454) Signed-off-by: Crucifixion-Fxl <[email protected]> Co-authored-by: Crucifixion-Fxl <[email protected]>
* [Bugfix][Nixl] Fix Preemption Bug (vllm-project#18631) Signed-off-by: [email protected] <[email protected]>
* config.py: Clarify that only local GGUF checkpoints are supported. (vllm-project#18623) Signed-off-by: Mathieu Bordere <[email protected]>
* FIX MOE issue in AutoRound format (vllm-project#18586) Signed-off-by: wenhuach21 <[email protected]>
* [V1][Spec Decode] Small refactors to improve eagle bookkeeping performance (vllm-project#18424) Signed-off-by: qizixi <[email protected]>
* [Frontend] improve vllm serve --help display (vllm-project#18643) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [Model] Add support for Qwen2.5-Omni-7B-AWQ (Qwen2_5OmniForConditionalGeneration) (vllm-project#18647)
* [V1][Spec Decode] Support multi-layer eagle draft model (vllm-project#18030) Signed-off-by: qizixi <[email protected]>
* [Doc] Update README links, mark external links (vllm-project#18635) Signed-off-by: DarkLight1337 <[email protected]>
* [MISC][pre-commit] Add pre-commit check for triton import (vllm-project#17716) Signed-off-by: Mengqing Cao <[email protected]>
* [Doc] Fix indentation problems in V0 Paged Attention docs (vllm-project#18659) Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Add community links (vllm-project#18657) Signed-off-by: DarkLight1337 <[email protected]>
* [Model] use AutoWeightsLoader for gpt2 (vllm-project#18625) Signed-off-by: zt2370 <[email protected]>
* [Doc] Reorganize user guide (vllm-project#18661) Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] `chmod +x` to `cleanup_pr_body.sh` (vllm-project#18650) Signed-off-by: DarkLight1337 <[email protected]>
* [MISC] typo fix and clean import (vllm-project#18664) Signed-off-by: Andy Xie <[email protected]>
* [BugFix] Fix import error for fused_moe (vllm-project#18642) Signed-off-by: wangxiyuan <[email protected]>
* [CI] enforce import regex instead of re (vllm-project#18665) Signed-off-by: Aaron Pham <[email protected]>
* fix(regression): clone from reference items (vllm-project#18662) Signed-off-by: Aaron Pham <[email protected]>
* [CI/Build] fix permission denied issue (vllm-project#18645) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [BugFix][Spec Decode] Improve Prefix Caching Logic in Speculative Decoding (vllm-project#18668) Signed-off-by: Woosuk Kwon <[email protected]>
* [V1] Fix _pickle.PicklingError: Can't pickle <class 'transformers_modules.deepseek-ai.DeepSeek-V2-Lite... (vllm-project#18640) Signed-off-by: Seiji Eicher <[email protected]>
* [MISC] correct signature for LoaderFunction (vllm-project#18670) Signed-off-by: Andy Xie <[email protected]>
* [Misc] Replace `cuda` hard code with `current_platform` in Ray (vllm-project#14668) Signed-off-by: noemotiovon <[email protected]>
* [Misc][ModelScope] Change to use runtime VLLM_USE_MODELSCOPE (vllm-project#18655) Signed-off-by: Mengqing Cao <[email protected]> Signed-off-by: Isotr0py <[email protected]> Co-authored-by: Isotr0py <[email protected]>
* [VLM] Initialize video input support for InternVL models (vllm-project#18499) Signed-off-by: Isotr0py <[email protected]> Co-authored-by: Cyrus Leung <[email protected]>
* Speed up the `kernels/quantization/` tests (vllm-project#18669) Signed-off-by: mgoin <[email protected]>
* [BUGFIX] catch subclass first for try...except (vllm-project#18672) Signed-off-by: Andy Xie <[email protected]>
* [Misc] Reduce logs on startup (vllm-project#18649) Signed-off-by: DarkLight1337 <[email protected]>
* [doc] fix broken links (vllm-project#18671) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [doc] improve readability (vllm-project#18675) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [Bugfix] Fix cpu usage and cache hit stats reporting on cpu environment (vllm-project#18674) Signed-off-by: zzzyq <[email protected]> Co-authored-by: Cyrus Leung <[email protected]>
* [CI/build] fix no regex (vllm-project#18676) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [Misc] small improve (vllm-project#18680) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [Bugfix] Fix profiling dummy data for Pixtral (vllm-project#18677) Signed-off-by: DarkLight1337 <[email protected]>
* [Core][Multimodal] Convert PIL Image to array without data copy when hashing (vllm-project#18682) Signed-off-by: Lukas Geiger <[email protected]>
* [CI/Build][Doc] Update `gte-Qwen2-1.5B-instruct` usage (vllm-project#18683) Signed-off-by: DarkLight1337 <[email protected]> Signed-off-by: Isotr0py <[email protected]> Co-authored-by: Isotr0py <[email protected]>
* [Misc] Fixed the abnormally high TTFT issue in the PD disaggregation example (vllm-project#18644) Signed-off-by: zhaohaidao <[email protected]> Signed-off-by: zhaohaiyuan <[email protected]> Co-authored-by: zhaohaiyuan <[email protected]>
* refactor: simplify request handler, use positive condition check for handler assignment (vllm-project#18690) Signed-off-by: googs1025 <[email protected]>
* [Bugfix] Fix the lm_head in gpt_bigcode in lora mode (vllm-project#6357) Signed-off-by: Max de Bayser <[email protected]> Signed-off-by: Max de Bayser <[email protected]>
* [CI] add missing argument (vllm-project#18694) Signed-off-by: Andy Xie <[email protected]>
* [GH] Add issue template for reporting CI failures (vllm-project#18696) Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Fix issue template format (vllm-project#18699) Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix Mistral-format models with sliding window (vllm-project#18693) Signed-off-by: DarkLight1337 <[email protected]>
* [CI/Build] Replace `math.isclose` with `pytest.approx` (vllm-project#18703) Signed-off-by: DarkLight1337 <[email protected]>
* [CI] fix dump_input for str type (vllm-project#18697) Signed-off-by: Andy Xie <[email protected]>
* [Model] Add support for YARN in NemotronNAS models (vllm-project#18427) Signed-off-by: Nave Assaf <[email protected]>
* [CI/Build] Split pooling and generation extended language models tests in CI (vllm-project#18705) Signed-off-by: Isotr0py <[email protected]>
* [Hardware][Intel-Gaudi] [CI/Build] Add tensor parallel size = 2 test to HPU CI (vllm-project#18709) Signed-off-by: Lukasz Durejko <[email protected]>
* [Misc] add AutoGen integration (vllm-project#18712) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]> Co-authored-by: Cyrus Leung <[email protected]>
* [Bugfix]: handle hf-xet CAS error when loading Qwen3 weights in vLLM (vllm-project#18701)
* [Doc] Improve API docs (vllm-project#18713) Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Move examples and further reorganize user guide (vllm-project#18666) Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Fix Llama GGUF initialization (vllm-project#18717) Signed-off-by: DarkLight1337 <[email protected]>
* [V1][Sampler] Improve performance of FlashInfer sampling by sampling logits instead of probs (vllm-project#18608)
* Convert `examples` to `ruff-format` (vllm-project#18400) Signed-off-by: Harry Mellor <[email protected]>
* [Model][Gemma3] Simplify image input validation (vllm-project#18710) Signed-off-by: Lukas Geiger <[email protected]>
* [Misc] improve web section group title display (vllm-project#18684) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* [V1][Quantization] Add CUDA graph compatible v1 GGUF support (vllm-project#18646) Signed-off-by: Isotr0py <[email protected]> Signed-off-by: Isotr0py <[email protected]>
* [Model][Gemma3] Cast image pixel values already on CPU (vllm-project#18732) Signed-off-by: Lukas Geiger <[email protected]>
* [FEAT] [ROCm] Upgrade AITER Fused MoE kernels. (vllm-project#18271) Signed-off-by: vllmellm <[email protected]>
* [Doc] Update OOT model docs (vllm-project#18742) Signed-off-by: DarkLight1337 <[email protected]>
* [Doc] Update reproducibility doc and example (vllm-project#18741) Signed-off-by: DarkLight1337 <[email protected]>
* [Misc] improve docs (vllm-project#18734) Signed-off-by: reidliu41 <[email protected]> Co-authored-by: reidliu41 <[email protected]>
* feat(rocm-support): support mamba2 on rocm (vllm-project#18565) Signed-off-by: Islam Almersawi <[email protected]> Co-authored-by: Islam Almersawi <[email protected]>
* [Hardware][Intel-Gaudi] [CI/Build] Fix multiple containers using the same name in run-hpu-test.sh (vllm-project#18752) Signed-off-by: Lukasz Durejko <[email protected]>
* [Doc] cleanup deprecated flag for doc (vllm-project#18715) Signed-off-by: calvin chen <[email protected]>
* Minor fix about MooncakeStoreConnector (vllm-project#18721) Signed-off-by: baoloongmao <[email protected]>
* [Build] fix cpu build missing libtbbmalloc.so (vllm-project#18744) Signed-off-by: Kebe <[email protected]>
* [BUG FIX] minicpm (vllm-project#18739) Signed-off-by: huangyuxiang03 <[email protected]> Co-authored-by: huangyuxiang03 <[email protected]>
* [Doc] Convert Sphinx directives (`{class}`, `{meth}`, `{attr}`, ...) to MkDocs format for better documentation linking (vllm-project#18663) Signed-off-by: Zerohertz <[email protected]>
* [CI/Build] Remove imports of built-in `re` (vllm-project#18750) Signed-off-by: DarkLight1337 <[email protected]>
* [V1][Metrics] Add API for accessing in-memory Prometheus metrics (vllm-project#17010) Signed-off-by: Mark McLoughlin <[email protected]>
* Disable prefix cache by default for benchmark (vllm-project#18639) Signed-off-by: cascade812 <[email protected]>
* optimize get_kv_cache_torch_dtype (vllm-project#18531) Signed-off-by: idellzheng <[email protected]>
* [Core] Automatically cast multi-modal input dtype (vllm-project#18756) Signed-off-by: DarkLight1337 <[email protected]>
* [Bugfix] Mistral tool calling when content is list (vllm-project#18729) Signed-off-by: mgoin <[email protected]>

--------

Signed-off-by: Satyajith Chilappagari <[email protected]>
Signed-off-by: Lucia Fang <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Nan2018 <[email protected]>
Signed-off-by: rand-fly <[email protected]>
Signed-off-by: reidliu41 <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: 汪志鹏 <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: calvin chen <[email protected]>
Signed-off-by: haochengxia <[email protected]>
Signed-off-by: Dilip Gowda Bhagavan <[email protected]>
Signed-off-by: Michael Goin <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: Bill Nell <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wwl2755 <[email protected]>
Signed-off-by: nicklucche <[email protected]>
Signed-off-by: Kebe <[email protected]>
Signed-off-by: Yong Hoon Shin <[email protected]>
Signed-off-by: rabi <[email protected]>
Signed-off-by: dhia.rhaiem <[email protected]>
Signed-off-by: giantcroc <[email protected]>
Signed-off-by: Hosang Yoon <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Signed-off-by: Sebastian Schönnenbeck <[email protected]>
Signed-off-by: Andy Xie <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: jaycha <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: Shane A <[email protected]>
Signed-off-by: Elaine Zhao <[email protected]>
Signed-off-by: Linkun <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: googs1025 <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: jiang.li <[email protected]>
Signed-off-by: Lukas Geiger <[email protected]>
Signed-off-by: David Xia <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kai Wu <[email protected]>
Signed-off-by: Sanger Steel <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Chenheli Hua <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Benjamin Chislett <[email protected]>
Signed-off-by: Teruaki Ishizaki <[email protected]>
Signed-off-by: shen-shanshan <[email protected]>
Signed-off-by: Ronald Xu <[email protected]>
Signed-off-by: cascade812 <[email protected]>
Signed-off-by: Yuqi Zhang <[email protected]>
Signed-off-by: Madeesh Kannan <[email protected]>
Signed-off-by: Kay Yan <[email protected]>
Signed-off-by: Zerohertz <[email protected]>
Signed-off-by: Tristan Leclercq <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: YaoJiayi <[email protected]>
Signed-off-by: Huy Do <[email protected]>
Signed-off-by: Pavani Majety <[email protected]>
Signed-off-by: Crucifixion-Fxl <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Mathieu Bordere <[email protected]>
Signed-off-by: wenhuach21 <[email protected]>
Signed-off-by: qizixi <[email protected]>
Signed-off-by: zt2370 <[email protected]>
Signed-off-by: Aaron Pham <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Seiji Eicher <[email protected]>
Signed-off-by: noemotiovon <[email protected]>
Signed-off-by: zzzyq <[email protected]>
Signed-off-by: zhaohaidao <[email protected]>
Signed-off-by: zhaohaiyuan <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Nave Assaf <[email protected]>
Signed-off-by: Lukasz Durejko <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Islam Almersawi <[email protected]>
Signed-off-by: baoloongmao <[email protected]>
Signed-off-by: huangyuxiang03 <[email protected]>
Signed-off-by: idellzheng <[email protected]>
Co-authored-by: sunyicode0012 <[email protected]>
Co-authored-by: Gong Shufan <[email protected]>
Co-authored-by: Satyajith Chilappagari <[email protected]>
Co-authored-by: Lucia Fang <[email protected]>
Co-authored-by: Lucia (Lu) Fang <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Nan Qin <[email protected]>
Co-authored-by: Andrew Sansom <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Random Fly <[email protected]>
Co-authored-by: Reid <[email protected]>
Co-authored-by: reidliu41 <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: 汪志鹏 <[email protected]>
Co-authored-by: wang.yuqi <[email protected]>
Co-authored-by: 燃 <[email protected]>
Co-authored-by: 松灵 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Calvin Chen <[email protected]>
Co-authored-by: Percy <[email protected]>
Co-authored-by: Dilip Gowda Bhagavan <[email protected]>
Co-authored-by: bnellnm <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: wwl2755 <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: Kebe <[email protected]>
Co-authored-by: Yong Hoon Shin <[email protected]>
Co-authored-by: Rabi Mishra <[email protected]>
Co-authored-by: Dhia Eddine Rhaiem <[email protected]>
Co-authored-by: younesbelkada <[email protected]>
Co-authored-by: Ilyas Chahed <[email protected]>
Co-authored-by: Jingwei Zuo <[email protected]>
Co-authored-by: GiantCroc <[email protected]>
Co-authored-by: Hyogeun Oh (오효근) <[email protected]>
Co-authored-by: Hosang <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: vllmellm <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Sebastian Schoennenbeck <[email protected]>
Co-authored-by: Ning Xie <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: youngrok cha <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: kourosh hakhamaneshi <[email protected]>
Co-authored-by: Shane A <[email protected]>
Co-authored-by: aws-elaineyz <[email protected]>
Co-authored-by: Shashwat Srijan <[email protected]>
Co-authored-by: Aakash Shetty <[email protected]>
Co-authored-by: Tailin Pan <[email protected]>
Co-authored-by: Rishabh Rajesh <[email protected]>
Co-authored-by: Yishan McNabb <[email protected]>
Co-authored-by: Patrick Lange <[email protected]>
Co-authored-by: Maxwell Goldberg <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: lkchen <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: CYJiang <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Lukas Geiger <[email protected]>
Co-authored-by: David Xia <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Ekagra Ranjan <[email protected]>
Co-authored-by: Kai Wu <[email protected]>
Co-authored-by: Sanger Steel <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Chenheli Hua <[email protected]>
Co-authored-by: Benjamin Chislett <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Teruaki Ishizaki <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: RonaldBXu <[email protected]>
Co-authored-by: cascade <[email protected]>
Co-authored-by: Chauncey <[email protected]>
Co-authored-by: simon-mo <[email protected]>
Co-authored-by: Yuqi Zhang <[email protected]>
Co-authored-by: Yuqi Zhang <[email protected]>
Co-authored-by: Madeesh Kannan <[email protected]>
Co-authored-by: Kay Yan <[email protected]>
Co-authored-by: Tristan Leclercq <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Jiayi Yao <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Huy Do <[email protected]>
Co-authored-by: Pavani Majety <[email protected]>
Co-authored-by: Feng XiaoLong <[email protected]>
Co-authored-by: Crucifixion-Fxl <[email protected]>
Co-authored-by: Robert Shaw <[email protected]> Co-authored-by: Mathieu Borderé <[email protected]> Co-authored-by: Wenhua Cheng <[email protected]> Co-authored-by: qizixi <[email protected]> Co-authored-by: Yuanhao WU <[email protected]> Co-authored-by: ztang2370 <[email protected]> Co-authored-by: Aaron Pham <[email protected]> Co-authored-by: Seiji Eicher <[email protected]> Co-authored-by: Chenguang Li <[email protected]> Co-authored-by: Isotr0py <[email protected]> Co-authored-by: AlexZhao <[email protected]> Co-authored-by: zhaohaiyuan <[email protected]> Co-authored-by: Maximilien de Bayser <[email protected]> Co-authored-by: Naveassaf <[email protected]> Co-authored-by: Łukasz Durejko <[email protected]> Co-authored-by: dylan <[email protected]> Co-authored-by: almersawi <[email protected]> Co-authored-by: Islam Almersawi <[email protected]> Co-authored-by: Łukasz Durejko <[email protected]> Co-authored-by: maobaolong <[email protected]> Co-authored-by: Shawn Huang <[email protected]> Co-authored-by: huangyuxiang03 <[email protected]> Co-authored-by: chunxiaozheng <[email protected]>
2 parents 91a5600 + d5e35a9 commit 1900335

File tree

676 files changed: +18697 −13744 lines


.buildkite/pyproject.toml

Lines changed: 0 additions & 5 deletions
@@ -6,11 +6,6 @@
 
 [tool.ruff]
 line-length = 88
-exclude = [
-    # External file, leaving license intact
-    "examples/other/fp8/quantizer/quantize.py",
-    "vllm/vllm_flash_attn/flash_attn_interface.pyi"
-]
 
 [tool.ruff.lint.per-file-ignores]
 "vllm/third_party/**" = ["ALL"]

.buildkite/release-pipeline.yaml

Lines changed: 1 addition & 1 deletion
@@ -64,7 +64,7 @@ steps:
   - "docker push vllm/vllm-tpu:$BUILDKITE_COMMIT"
   plugins:
   - docker-login#v3.0.0:
-      username: vllm
+      username: vllmbot
       password-env: DOCKERHUB_TOKEN
   env:
     DOCKER_BUILDKIT: "1"

.buildkite/scripts/hardware_ci/run-hpu-test.sh

Lines changed: 7 additions & 5 deletions
@@ -10,15 +10,17 @@ docker build -t hpu-test-env -f docker/Dockerfile.hpu .
 # Setup cleanup
 # certain versions of HPU software stack have a bug that can
 # override the exit code of the script, so we need to use
-# separate remove_docker_container and remove_docker_container_and_exit
+# separate remove_docker_containers and remove_docker_containers_and_exit
 # functions, while other platforms only need one remove_docker_container
 # function.
 EXITCODE=1
-remove_docker_container() { docker rm -f hpu-test || true; }
-remove_docker_container_and_exit() { remove_docker_container; exit $EXITCODE; }
-trap remove_docker_container_and_exit EXIT
-remove_docker_container
+remove_docker_containers() { docker rm -f hpu-test || true; docker rm -f hpu-test-tp2 || true; }
+remove_docker_containers_and_exit() { remove_docker_containers; exit $EXITCODE; }
+trap remove_docker_containers_and_exit EXIT
+remove_docker_containers
 
 # Run the image and launch offline inference
 docker run --runtime=habana --name=hpu-test --network=host -e HABANA_VISIBLE_DEVICES=all -e VLLM_SKIP_WARMUP=true --entrypoint="" hpu-test-env python3 examples/offline_inference/basic/generate.py --model facebook/opt-125m
+docker run --runtime=habana --name=hpu-test-tp2 --network=host -e HABANA_VISIBLE_DEVICES=all -e VLLM_SKIP_WARMUP=true --entrypoint="" hpu-test-env python3 examples/offline_inference/basic/generate.py --model facebook/opt-125m --tensor-parallel-size 2
+
 EXITCODE=$?
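The comment in the diff above describes a trap-based cleanup pattern: the EXIT handler both removes the containers and re-raises the recorded exit code, so an HPU software stack bug cannot clobber the script's real status. A minimal standalone sketch (the function names here are illustrative stand-ins, not the repo's):

```shell
#!/usr/bin/env bash
# Minimal sketch of the trap pattern; cleanup/cleanup_and_exit stand in
# for remove_docker_containers / remove_docker_containers_and_exit.
EXITCODE=1                                 # pessimistic default
cleanup() { echo "removing containers"; }  # stand-in for 'docker rm -f ...'
cleanup_and_exit() { cleanup; exit $EXITCODE; }
trap cleanup_and_exit EXIT

true            # stand-in for the docker run workload
EXITCODE=$?     # record the real status before the EXIT trap fires
```

Because `exit $EXITCODE` runs inside the trap, the script's final status reflects the workload even if the cleanup commands themselves would otherwise change `$?`.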

.buildkite/scripts/hardware_ci/run-neuron-test.sh

Lines changed: 11 additions & 2 deletions
@@ -11,13 +11,14 @@ container_name="neuron_$(tr -dc A-Za-z0-9 < /dev/urandom | head -c 10; echo)"
 HF_CACHE="$(realpath ~)/huggingface"
 mkdir -p "${HF_CACHE}"
 HF_MOUNT="/root/.cache/huggingface"
+HF_TOKEN=$(aws secretsmanager get-secret-value --secret-id "ci/vllm-neuron/hf-token" --region us-west-2 --query 'SecretString' --output text | jq -r .VLLM_NEURON_CI_HF_TOKEN)
 
 NEURON_COMPILE_CACHE_URL="$(realpath ~)/neuron_compile_cache"
 mkdir -p "${NEURON_COMPILE_CACHE_URL}"
 NEURON_COMPILE_CACHE_MOUNT="/root/.cache/neuron_compile_cache"
 
 # Try building the docker image
-aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-west-2.amazonaws.com
+aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
 
 # prune old image and containers to save disk space, and only once a day
 # by using a timestamp file in tmp.
@@ -47,8 +48,16 @@ trap remove_docker_container EXIT
 docker run --rm -it --device=/dev/neuron0 --network bridge \
     -v "${HF_CACHE}:${HF_MOUNT}" \
     -e "HF_HOME=${HF_MOUNT}" \
+    -e "HF_TOKEN=${HF_TOKEN}" \
     -v "${NEURON_COMPILE_CACHE_URL}:${NEURON_COMPILE_CACHE_MOUNT}" \
     -e "NEURON_COMPILE_CACHE_URL=${NEURON_COMPILE_CACHE_MOUNT}" \
     --name "${container_name}" \
     ${image_name} \
-    /bin/bash -c "python3 /workspace/vllm/examples/offline_inference/neuron.py && python3 -m pytest /workspace/vllm/tests/neuron/1_core/ -v --capture=tee-sys && python3 -m pytest /workspace/vllm/tests/neuron/2_core/ -v --capture=tee-sys"
+    /bin/bash -c "
+        python3 /workspace/vllm/examples/offline_inference/neuron.py;
+        python3 -m pytest /workspace/vllm/tests/neuron/1_core/ -v --capture=tee-sys;
+        for f in /workspace/vllm/tests/neuron/2_core/*.py; do
+            echo 'Running test file: '$f;
+            python3 -m pytest \$f -v --capture=tee-sys;
+        done
+    "

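One detail worth noting in the rewritten `bash -c` block above is the `\$f`: the outer string is double-quoted, so an unescaped `$f` would be expanded by the host shell (where it is empty) before the container shell ever runs the loop. A small sketch of the difference (variable names here are illustrative):

```shell
# Unescaped: $host_var is expanded by the *outer* shell before bash -c runs.
host_var=outer
unescaped=$(bash -c "echo $host_var")     # inner shell just executes 'echo outer'

# Escaped: \$inner_var survives into the inner shell and is expanded there.
escaped=$(bash -c "inner_var=inner; echo \$inner_var")

echo "$unescaped $escaped"                # prints: outer inner
```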
.buildkite/test-pipeline.yaml

Lines changed: 26 additions & 14 deletions
@@ -33,14 +33,13 @@ steps:
 
 - label: Documentation Build # 2min
   mirror_hardwares: [amdexperimental]
-  working_dir: "/vllm-workspace/test_docs/docs"
+  working_dir: "/vllm-workspace/test_docs"
   fast_check: true
   no_gpu: True
   commands:
-  - pip install -r ../../requirements/docs.txt
-  - SPHINXOPTS=\"-W\" make html
-  # Check API reference (if it fails, you may have missing mock imports)
-  - grep \"sig sig-object py\" build/html/api/vllm/vllm.sampling_params.html
+  - pip install -r ../requirements/docs.txt
+  # TODO: add `--strict` once warnings in docstrings are fixed
+  - mkdocs build
 
 - label: Async Engine, Inputs, Utils, Worker Test # 24min
   mirror_hardwares: [amdexperimental]
@@ -59,6 +58,7 @@
   - pytest -v -s async_engine # AsyncLLMEngine
   - NUM_SCHEDULER_STEPS=4 pytest -v -s async_engine/test_async_llm_engine.py
   - pytest -v -s test_inputs.py
+  - pytest -v -s test_outputs.py
   - pytest -v -s multimodal
   - pytest -v -s test_utils.py # Utils
   - pytest -v -s worker # Worker
@@ -128,7 +128,7 @@
   - pytest -v -s entrypoints/llm/test_generate.py # it needs a clean process
   - pytest -v -s entrypoints/llm/test_generate_multiple_loras.py # it needs a clean process
   - VLLM_USE_V1=0 pytest -v -s entrypoints/llm/test_guided_generate.py # it needs a clean process
-  - pytest -v -s entrypoints/openai --ignore=entrypoints/openai/test_oot_registration.py --ignore=entrypoints/openai/test_chat_with_tool_reasoning.py --ignore=entrypoints/openai/correctness/ --ignore=entrypoints/openai/test_openai_schema.py
+  - pytest -v -s entrypoints/openai --ignore=entrypoints/openai/test_chat_with_tool_reasoning.py --ignore=entrypoints/openai/test_oot_registration.py --ignore=entrypoints/openai/test_tensorizer_entrypoint.py --ignore=entrypoints/openai/correctness/
   - pytest -v -s entrypoints/test_chat_utils.py
   - VLLM_USE_V1=0 pytest -v -s entrypoints/offline_mode # Needs to avoid interference with other tests
 
@@ -141,6 +141,7 @@
   - vllm/core/
   - tests/distributed/test_utils
   - tests/distributed/test_pynccl
+  - tests/distributed/test_events
   - tests/spec_decode/e2e/test_integration_dist_tp4
   - tests/compile/test_basic_correctness
   - examples/offline_inference/rlhf.py
@@ -159,6 +160,7 @@
   - pytest -v -s distributed/test_utils.py
   - pytest -v -s compile/test_basic_correctness.py
   - pytest -v -s distributed/test_pynccl.py
+  - pytest -v -s distributed/test_events.py
   - pytest -v -s spec_decode/e2e/test_integration_dist_tp4.py
   # TODO: create a dedicated test section for multi-GPU example tests
   # when we have multiple distributed example tests
@@ -224,6 +226,7 @@
   - pytest -v -s v1/test_serial_utils.py
   - pytest -v -s v1/test_utils.py
   - pytest -v -s v1/test_oracle.py
+  - pytest -v -s v1/test_metrics_reader.py
   # TODO: accuracy does not match, whether setting
   # VLLM_USE_FLASHINFER_SAMPLER or not on H100.
   - pytest -v -s v1/e2e
@@ -248,7 +251,7 @@
   - python3 offline_inference/vision_language.py --seed 0
   - python3 offline_inference/vision_language_embedding.py --seed 0
   - python3 offline_inference/vision_language_multi_image.py --seed 0
-  - VLLM_USE_V1=0 python3 other/tensorize_vllm_model.py --model facebook/opt-125m serialize --serialized-directory /tmp/ --suffix v1 && python3 other/tensorize_vllm_model.py --model facebook/opt-125m deserialize --path-to-tensors /tmp/vllm/facebook/opt-125m/v1/model.tensors
+  - VLLM_USE_V1=0 python3 others/tensorize_vllm_model.py --model facebook/opt-125m serialize --serialized-directory /tmp/ --suffix v1 && python3 others/tensorize_vllm_model.py --model facebook/opt-125m deserialize --path-to-tensors /tmp/vllm/facebook/opt-125m/v1/model.tensors
   - python3 offline_inference/encoder_decoder.py
   - python3 offline_inference/encoder_decoder_multimodal.py --model-type whisper --seed 0
   - python3 offline_inference/basic/classify.py
@@ -320,6 +323,7 @@
   - pytest -v -s compile/test_fusion.py
   - pytest -v -s compile/test_silu_mul_quant_fusion.py
   - pytest -v -s compile/test_sequence_parallelism.py
+  - pytest -v -s compile/test_async_tp.py
 
 - label: PyTorch Fullgraph Smoke Test # 9min
   mirror_hardwares: [amdexperimental, amdproduction]
@@ -397,10 +401,12 @@
   source_file_dependencies:
   - vllm/model_executor/model_loader
   - tests/tensorizer_loader
+  - tests/entrypoints/openai/test_tensorizer_entrypoint.py
   commands:
   - apt-get update && apt-get install -y curl libsodium23
   - export VLLM_WORKER_MULTIPROC_METHOD=spawn
   - pytest -v -s tensorizer_loader
+  - pytest -v -s entrypoints/openai/test_tensorizer_entrypoint.py
 
 - label: Benchmarks # 9min
   mirror_hardwares: [amdexperimental, amdproduction]
@@ -479,10 +485,7 @@
   - pytest -v -s models/test_registry.py
   - pytest -v -s models/test_utils.py
   - pytest -v -s models/test_vision.py
-  # V1 Test: https://github.com/vllm-project/vllm/issues/14531
-  - VLLM_USE_V1=0 pytest -v -s models/test_initialization.py -k 'not llama4 and not plamo2'
-  - VLLM_USE_V1=0 pytest -v -s models/test_initialization.py -k 'llama4'
-  - VLLM_USE_V1=0 pytest -v -s models/test_initialization.py -k 'plamo2'
+  - pytest -v -s models/test_initialization.py
 
 - label: Language Models Test (Standard)
   mirror_hardwares: [amdexperimental]
@@ -496,16 +499,25 @@
   - pip freeze | grep -E 'torch'
   - pytest -v -s models/language -m core_model
 
-- label: Language Models Test (Extended)
+- label: Language Models Test (Extended Generation) # 1hr20min
   mirror_hardwares: [amdexperimental]
   optional: true
   source_file_dependencies:
   - vllm/
-  - tests/models/language
+  - tests/models/language/generation
   commands:
   # Install causal-conv1d for plamo2 models here, as it is not compatible with pip-compile.
   - pip install 'git+https://github.com/Dao-AILab/[email protected]'
-  - pytest -v -s models/language -m 'not core_model'
+  - pytest -v -s models/language/generation -m 'not core_model'
+
+- label: Language Models Test (Extended Pooling) # 36min
+  mirror_hardwares: [amdexperimental]
+  optional: true
+  source_file_dependencies:
+  - vllm/
+  - tests/models/language/pooling
+  commands:
+  - pytest -v -s models/language/pooling -m 'not core_model'
 
 - label: Multi-Modal Models Test (Standard)
   mirror_hardwares: [amdexperimental]

.github/ISSUE_TEMPLATE/400-bug-report.yml

Lines changed: 3 additions & 3 deletions
@@ -81,14 +81,14 @@ body:
       required: true
 - type: markdown
   attributes:
-    value: >
-      ⚠️ Please separate bugs of `transformers` implementation or usage from bugs of `vllm`. If you think anything is wrong with the models' output:
+    value: |
+      ⚠️ Please separate bugs of `transformers` implementation or usage from bugs of `vllm`. If you think anything is wrong with the model's output:
 
       - Try the counterpart of `transformers` first. If the error appears, please go to [their issues](https://github.com/huggingface/transformers/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc).
 
       - If the error only appears in vllm, please provide the detailed script of how you run `transformers` and `vllm`, also highlight the difference and what you expect.
 
-      Thanks for contributing 🎉!
+      Thanks for reporting 🙏!
 - type: checkboxes
   id: askllm
   attributes:
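The `value: >` → `value: |` switch above changes YAML block-scalar behavior, which is what the markdown body needs. A minimal illustration (the keys here are made up): `>` folds single newlines into spaces, flattening the bullet list into one run-on line, while `|` keeps line breaks literal so GitHub renders the bullets correctly.

```yaml
folded: >
  intro line
  - looks like a bullet
literal: |
  intro line
  - stays a real bullet on its own line
```

With `>`, `folded` parses to the single string `intro line - looks like a bullet\n`; with `|`, `literal` keeps both lines.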
Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
+name: 🧪 CI failure report
+description: Report a failing test.
+title: "[CI Failure]: "
+labels: ["ci-failure"]
+
+body:
+- type: markdown
+  attributes:
+    value: >
+      #### Include the name of the failing Buildkite step and test file in the title.
+- type: input
+  attributes:
+    label: Name of failing test
+    description: |
+      Paste in the fully-qualified name of the failing test from the logs.
+    placeholder: |
+      `path/to/test_file.py::test_name[params]`
+  validations:
+    required: true
+- type: checkboxes
+  attributes:
+    label: Basic information
+    description: Select all items that apply to the failing test.
+    options:
+      - label: Flaky test
+      - label: Can reproduce locally
+      - label: Caused by external libraries (e.g. bug in `transformers`)
+- type: textarea
+  attributes:
+    label: 🧪 Describe the failing test
+    description: |
+      Please provide a clear and concise description of the failing test.
+    placeholder: |
+      A clear and concise description of the failing test.
+
+      ```
+      The error message you got, with the full traceback and the error logs with [dump_input.py:##] if present.
+      ```
+  validations:
+    required: true
+- type: textarea
+  attributes:
+    label: 📝 History of failing test
+    description: |
+      Since when did the test start to fail?
+      You can look up its history via [Buildkite Test Suites](https://buildkite.com/organizations/vllm/analytics/suites/ci-1/tests?branch=main).
+
+      If you have time, identify the PR that caused the test to fail on main. You can do so via the following methods:
+
+      - Use Buildkite Test Suites to find the PR where the test failure first occurred, and reproduce the failure locally.
+
+      - Run [`git bisect`](https://git-scm.com/docs/git-bisect) locally.
+
+      - Manually unblock Buildkite steps for suspected PRs on main and check the results. (authorized users only)
+    placeholder: |
+      Approximate timeline and/or problematic PRs
+
+      A link to the Buildkite analytics of the failing test (if available)
+  validations:
+    required: true
+- type: textarea
+  attributes:
+    label: CC List.
+    description: >
+      The list of people you want to CC. Usually, this includes those who worked on the PR that failed the test.
+- type: markdown
+  attributes:
+    value: >
+      Thanks for reporting 🙏!

.github/mergify.yml

Lines changed: 2 additions & 4 deletions
@@ -58,7 +58,7 @@ pull_request_rules:
       - files~=^benchmarks/structured_schemas/
       - files=benchmarks/benchmark_serving_structured_output.py
       - files=benchmarks/run_structured_output_benchmark.sh
-      - files=docs/source/features/structured_outputs.md
+      - files=docs/features/structured_outputs.md
       - files=examples/offline_inference/structured_outputs.py
       - files=examples/online_serving/openai_chat_completion_structured_outputs.py
       - files=examples/online_serving/openai_chat_completion_structured_outputs_with_reasoning.py
@@ -135,9 +135,7 @@ pull_request_rules:
       - files~=^tests/entrypoints/openai/tool_parsers/
      - files=tests/entrypoints/openai/test_chat_with_tool_reasoning.py
       - files~=^vllm/entrypoints/openai/tool_parsers/
-      - files=docs/source/features/tool_calling.md
-      - files=docs/source/getting_started/examples/openai_chat_completion_client_with_tools.md
-      - files=docs/source/getting_started/examples/chat_with_tools.md
+      - files=docs/features/tool_calling.md
       - files~=^examples/tool_chat_*
       - files=examples/offline_inference/chat_with_tools.py
       - files=examples/online_serving/openai_chat_completion_client_with_tools_required.py

.github/scripts/cleanup_pr_body.sh

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@ sed -i '/\*\*BEFORE SUBMITTING, PLEASE READ.*\*\*/,$d' "${NEW}"
 
 # Remove HTML <details> section that includes <summary> text of "PR Checklist (Click to Expand)"
 python3 - <<EOF
-import re
+import regex as re
 
 with open("${NEW}", "r") as file:
     content = file.read()
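The `import re` → `import regex as re` swap works because the third-party `regex` package is designed as a drop-in replacement for the stdlib module while accepting additional syntax. One capability the stdlib rejects is Unicode property classes (whether the cleanup script's actual pattern needs this exact feature is an assumption here, shown only as an illustration):

```python
import re  # stdlib module: rejects regex-only syntax

try:
    re.compile(r"\p{Han}")  # Unicode property class; 'regex' accepts this
    supported = True
except re.error:
    supported = False

print(supported)  # stdlib re raises re.error ("bad escape \p"), so: False
```

With `import regex as re`, the rest of the heredoc keeps calling the familiar `re.*` names unchanged.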

.github/workflows/cleanup_pr_body.yml

Lines changed: 6 additions & 1 deletion
@@ -20,7 +20,12 @@ jobs:
         with:
           python-version: '3.12'
 
+      - name: Install Python dependencies
+        run: |
+          python3 -m pip install --upgrade pip
+          python3 -m pip install regex
+
       - name: Update PR description
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-        run: .github/scripts/cleanup_pr_body.sh "${{ github.event.number }}"
+        run: bash .github/scripts/cleanup_pr_body.sh "${{ github.event.number }}"

.gitignore

Lines changed: 1 addition & 5 deletions
@@ -77,11 +77,6 @@ instance/
 # Scrapy stuff:
 .scrapy
 
-# Sphinx documentation
-docs/_build/
-docs/source/getting_started/examples/
-docs/source/api/vllm
-
 # PyBuilder
 .pybuilder/
 target/
@@ -151,6 +146,7 @@ venv.bak/
 
 # mkdocs documentation
 /site
+docs/examples
 
 # mypy
 .mypy_cache/

.pre-commit-config.yaml

Lines changed: 17 additions & 1 deletion
@@ -17,7 +17,7 @@ repos:
   - id: ruff
     args: [--output-format, github, --fix]
   - id: ruff-format
-    files: ^(.buildkite|benchmarks)/.*
+    files: ^(.buildkite|benchmarks|examples)/.*
 - repo: https://github.com/codespell-project/codespell
   rev: v2.4.1
   hooks:
@@ -39,6 +39,7 @@ repos:
   rev: v0.9.29
   hooks:
   - id: pymarkdown
+    exclude: '.*\.inc\.md'
    args: [fix]
 - repo: https://github.com/rhysd/actionlint
   rev: v1.7.7
@@ -127,6 +128,21 @@ repos:
     name: Update Dockerfile dependency graph
     entry: tools/update-dockerfile-graph.sh
     language: script
+  - id: enforce-import-regex-instead-of-re
+    name: Enforce import regex as re
+    entry: python tools/enforce_regex_import.py
+    language: python
+    types: [python]
+    pass_filenames: false
+    additional_dependencies: [regex]
+  # forbid directly import triton
+  - id: forbid-direct-triton-import
+    name: "Forbid direct 'import triton'"
+    entry: python tools/check_triton_import.py
+    language: python
+    types: [python]
+    pass_filenames: false
+    additional_dependencies: [regex]
   # Keep `suggestion` last
   - id: suggestion
     name: Suggestion
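The two new hooks invoke repo scripts (`tools/enforce_regex_import.py`, `tools/check_triton_import.py`) whose bodies are not part of this diff. As a hypothetical sketch of the same idea, an AST-based checker can flag direct `import triton` lines without any regex matching at all:

```python
import ast

def find_direct_triton_imports(source: str) -> list[int]:
    """Return line numbers that import triton directly (illustrative only;
    the real tools/check_triton_import.py may work differently)."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            if any(alias.name == "triton" or alias.name.startswith("triton.")
                   for alias in node.names):
                hits.append(node.lineno)
        elif isinstance(node, ast.ImportFrom):
            module = node.module or ""
            if module == "triton" or module.startswith("triton."):
                hits.append(node.lineno)
    return sorted(hits)

sample = "import os\nimport triton\nfrom triton import language\n"
print(find_direct_triton_imports(sample))  # -> [2, 3]
```

A hook like this would exit non-zero when any file yields hits, which is all pre-commit needs to block the commit.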
