[Usage]: Workaround to run model on GPUs with Compute Capability < 8.0? #29707

@seasoncool

Description

Your current environment

Problem:
I am unable to run the Qwen3-VL-32B-Instruct-AWQ-4bit model because of a CUDA compute capability check. My hardware is two NVIDIA Quadro RTX 5000 cards (16 GB each, 32 GB total), which have compute capability 7.5. vLLM 0.11.0 refuses to load the model's quantization scheme (full log below):

"Quantization scheme is not supported for the current GPU. Min capability: 80. Current capability: 75."

Question:
Are there any workarounds to run this model on these older Quadro RTX 5000 GPUs? Thanks in advance.
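
As a sanity check, here is a minimal sketch (standard PyTorch APIs only, run in the same environment) that confirms the capability the framework sees; on these cards it prints 7.5, matching the "Current capability: 75" in the error:

# Minimal sketch: print the CUDA compute capability of each visible GPU.
# Assumes a CUDA-enabled PyTorch build (torch 2.8.0+cu128 here).
import torch

if not torch.cuda.is_available():
    raise SystemExit("CUDA is not available in this PyTorch build.")

for idx in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(idx)
    name = torch.cuda.get_device_name(idx)
    # Kernels gated on compute capability >= 8.0 (Ampere) reject anything
    # below (8, 0); Turing-class cards like the Quadro RTX 5000 are (7, 5).
    print(f"GPU {idx}: {name} -> compute capability {major}.{minor}")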

 vllm collect-env
INFO 11-29 20:49:15 [__init__.py:216] Automatically detected platform cuda.
Collecting environment information...
==============================
        System Info
==============================
OS                           : Ubuntu 24.04.3 LTS (x86_64)
GCC version                  : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version                : Could not collect
CMake version                : version 3.30.3
Libc version                 : glibc-2.39

==============================
       PyTorch Info
==============================
PyTorch version              : 2.8.0+cu128
Is debug build               : False
CUDA used to build PyTorch   : 12.8
ROCM used to build PyTorch   : N/A

==============================
      Python Environment
==============================
Python version               : 3.12.11 | packaged by Anaconda, Inc. | (main, Jun  5 2025, 13:09:17) [GCC 11.2.0] (64-bit runtime)
Python platform              : Linux-6.14.0-27-generic-x86_64-with-glibc2.39

==============================
       CUDA / GPU Info
==============================
Is CUDA available            : True
CUDA runtime version         : 12.0.140
CUDA_MODULE_LOADING set to   : LAZY
GPU models and configuration :
GPU 0: Quadro RTX 5000
GPU 1: Quadro RTX 5000

Nvidia driver version        : 580.65.06
cuDNN version                : Could not collect
HIP runtime version          : N/A
MIOpen runtime version       : N/A
Is XNNPACK available         : True

==============================
          CPU Info
==============================
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        46 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               20
On-line CPU(s) list:                  0-19
Vendor ID:                            GenuineIntel
Model name:                           Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz
CPU family:                           6
Model:                                85
Thread(s) per core:                   2
Core(s) per socket:                   10
Socket(s):                            1
Stepping:                             7
CPU(s) scaling MHz:                   28%
CPU max MHz:                          4700.0000
CPU min MHz:                          1200.0000
BogoMIPS:                             7399.70
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities
L1d cache:                            320 KiB (10 instances)
L1i cache:                            320 KiB (10 instances)
L2 cache:                             10 MiB (10 instances)
L3 cache:                             19.3 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-19
Vulnerability Gather data sampling:   Vulnerable
Vulnerability Ghostwrite:             Not affected
Vulnerability Itlb multihit:          KVM: Mitigation: VMX unsupported
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Mitigation; TSX disabled

==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] nvidia-cublas-cu12==12.8.4.1
[pip3] nvidia-cuda-cupti-cu12==12.8.90
[pip3] nvidia-cuda-nvrtc-cu12==12.8.93
[pip3] nvidia-cuda-runtime-cu12==12.8.90
[pip3] nvidia-cudnn-cu12==9.10.2.21
[pip3] nvidia-cufft-cu12==11.3.3.83
[pip3] nvidia-cufile-cu12==1.13.1.3
[pip3] nvidia-curand-cu12==10.3.9.90
[pip3] nvidia-cusolver-cu12==11.7.3.90
[pip3] nvidia-cusparse-cu12==12.5.8.93
[pip3] nvidia-cusparselt-cu12==0.7.1
[pip3] nvidia-nccl-cu12==2.27.3
[pip3] nvidia-nvjitlink-cu12==12.8.93
[pip3] nvidia-nvtx-cu12==12.8.90
[pip3] pyzmq==27.1.0
[pip3] torch==2.8.0
[pip3] torchaudio==2.8.0
[pip3] torchvision==0.23.0
[pip3] transformers==4.57.0
[pip3] triton==3.4.0
[conda] numpy                     2.2.6                    pypi_0    pypi
[conda] nvidia-cublas-cu12        12.8.4.1                 pypi_0    pypi
[conda] nvidia-cuda-cupti-cu12    12.8.90                  pypi_0    pypi
[conda] nvidia-cuda-nvrtc-cu12    12.8.93                  pypi_0    pypi
[conda] nvidia-cuda-runtime-cu12  12.8.90                  pypi_0    pypi
[conda] nvidia-cudnn-cu12         9.10.2.21                pypi_0    pypi
[conda] nvidia-cufft-cu12         11.3.3.83                pypi_0    pypi
[conda] nvidia-cufile-cu12        1.13.1.3                 pypi_0    pypi
[conda] nvidia-curand-cu12        10.3.9.90                pypi_0    pypi
[conda] nvidia-cusolver-cu12      11.7.3.90                pypi_0    pypi
[conda] nvidia-cusparse-cu12      12.5.8.93                pypi_0    pypi
[conda] nvidia-cusparselt-cu12    0.7.1                    pypi_0    pypi
[conda] nvidia-nccl-cu12          2.27.3                   pypi_0    pypi
[conda] nvidia-nvjitlink-cu12     12.8.93                  pypi_0    pypi
[conda] nvidia-nvtx-cu12          12.8.90                  pypi_0    pypi
[conda] pyzmq                     27.1.0                   pypi_0    pypi
[conda] torch                     2.8.0                    pypi_0    pypi
[conda] torchaudio                2.8.0                    pypi_0    pypi
[conda] torchvision               0.23.0                   pypi_0    pypi
[conda] transformers              4.57.0                   pypi_0    pypi
[conda] triton                    3.4.0                    pypi_0    pypi

==============================
         vLLM Info
==============================
ROCM Version                 : Could not collect
vLLM Version                 : 0.11.0
vLLM Build Flags:
  CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
        GPU0    GPU1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NODE    0-19    0               N/A
GPU1    NODE     X      0-19    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

==============================
     Environment Variables
==============================
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
TORCHINDUCTOR_CACHE_DIR=/tmp/torchinductor_drc-whlab
VLLM_WORKER_MULTIPROC_METHOD=spawn
CUDA_MODULE_LOADING=LAZY

python -m vllm.entrypoints.openai.api_server \
  --served-model-name Qwen2___5-VL-32B-Instruct-AWQ \
  --model /home/drc-whlab/.cache/modelscope/hub/cpatonn-mirror/Qwen3-VL-32B-Instruct-AWQ-4bit  \
  --tensor-parallel-size 2 \
  --dtype=half \
  --gpu_memory_utilization 0.8 \
  --max_num_seqs 10 \
  --limit-mm-per-prompt '{"image": 1, "video": 0}' \
  --mm-processor-kwargs '{"max_pixels": 2073600}' \
  --enable-auto-tool-choice \
  --tool-call-parser=hermes \
  --trust-remote-code \
  --enforce-eager \
  --port=7777 \
  --max-model-len 8000
INFO 11-29 20:51:49 [__init__.py:216] Automatically detected platform cuda.
(APIServer pid=1706856) INFO 11-29 20:51:51 [api_server.py:1839] vLLM API server version 0.11.0
(APIServer pid=1706856) INFO 11-29 20:51:51 [utils.py:233] non-default args: {'port': 7777, 'enable_auto_tool_choice': True, 'tool_call_parser': 'hermes', 'model': '/home/drc-whlab/.cache/modelscope/hub/cpatonn-mirror/Qwen3-VL-32B-Instruct-AWQ-4bit', 'trust_remote_code': True, 'dtype': 'half', 'max_model_len': 8000, 'enforce_eager': True, 'served_model_name': ['Qwen2___5-VL-32B-Instruct-AWQ'], 'tensor_parallel_size': 2, 'gpu_memory_utilization': 0.8, 'limit_mm_per_prompt': {'image': 1, 'video': 0}, 'mm_processor_kwargs': {'max_pixels': 2073600}, 'max_num_seqs': 10}
(APIServer pid=1706856) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=1706856) INFO 11-29 20:51:51 [model.py:547] Resolved architecture: Qwen3VLForConditionalGeneration
(APIServer pid=1706856) `torch_dtype` is deprecated! Use `dtype` instead!
(APIServer pid=1706856) WARNING 11-29 20:51:51 [model.py:1733] Casting torch.bfloat16 to torch.float16.
(APIServer pid=1706856) INFO 11-29 20:51:51 [model.py:1510] Using max model len 8000
(APIServer pid=1706856) INFO 11-29 20:51:54 [scheduler.py:205] Chunked prefill is enabled with max_num_batched_tokens=2048.
(APIServer pid=1706856) INFO 11-29 20:51:55 [__init__.py:381] Cudagraph is disabled under eager mode
INFO 11-29 20:51:59 [__init__.py:216] Automatically detected platform cuda.
(EngineCore_DP0 pid=1707016) INFO 11-29 20:52:02 [core.py:644] Waiting for init message from front-end.
(EngineCore_DP0 pid=1707016) INFO 11-29 20:52:02 [core.py:77] Initializing a V1 LLM engine (v0.11.0) with config: model='/home/drc-whlab/.cache/modelscope/hub/cpatonn-mirror/Qwen3-VL-32B-Instruct-AWQ-4bit', speculative_config=None, tokenizer='/home/drc-whlab/.cache/modelscope/hub/cpatonn-mirror/Qwen3-VL-32B-Instruct-AWQ-4bit', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=8000, download_dir=None, load_format=auto, tensor_parallel_size=2, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=compressed-tensors, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen2___5-VL-32B-Instruct-AWQ, enable_prefix_caching=True, chunked_prefill_enabled=True, pooler_config=None, compilation_config={"level":0,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":null,"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":0,"use_cudagraph":true,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":[],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"use_inductor_graph_partition":false,"pass_config":{},"max_capture_size":0,"local_cache_dir":null}
(EngineCore_DP0 pid=1707016) WARNING 11-29 20:52:02 [multiproc_executor.py:720] Reducing Torch parallelism from 10 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.
(EngineCore_DP0 pid=1707016) INFO 11-29 20:52:02 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1], buffer_handle=(2, 16777216, 10, 'psm_c27f14c3'), local_subscribe_addr='ipc:///tmp/bb16b368-b520-4da8-8924-14136c986887', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 11-29 20:52:05 [__init__.py:216] Automatically detected platform cuda.
INFO 11-29 20:52:05 [__init__.py:216] Automatically detected platform cuda.
ERROR 11-29 20:52:08 [fa_utils.py:57] Cannot use FA version 2 is not supported due to FA2 is only supported on devices with compute capability >= 8
ERROR 11-29 20:52:08 [fa_utils.py:57] Cannot use FA version 2 is not supported due to FA2 is only supported on devices with compute capability >= 8
INFO 11-29 20:52:11 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_bffa6a1b'), local_subscribe_addr='ipc:///tmp/6257d372-f203-4a0e-94d8-8c768328eb43', remote_subscribe_addr=None, remote_addr_ipv6=False)
INFO 11-29 20:52:11 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_d3ed3511'), local_subscribe_addr='ipc:///tmp/01782614-de40-4a90-a115-8461024c4cb4', remote_subscribe_addr=None, remote_addr_ipv6=False)
[Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
[Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
[Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
[Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
INFO 11-29 20:52:13 [__init__.py:1384] Found nccl from library libnccl.so.2
INFO 11-29 20:52:13 [__init__.py:1384] Found nccl from library libnccl.so.2
INFO 11-29 20:52:13 [pynccl.py:103] vLLM is using nccl==2.27.3
INFO 11-29 20:52:13 [pynccl.py:103] vLLM is using nccl==2.27.3
WARNING 11-29 20:52:14 [symm_mem.py:58] SymmMemCommunicator: Device capability 7.5 not supported, communicator is not available.
WARNING 11-29 20:52:14 [symm_mem.py:58] SymmMemCommunicator: Device capability 7.5 not supported, communicator is not available.
INFO 11-29 20:52:14 [custom_all_reduce.py:35] Skipping P2P check and trusting the driver's P2P report.
INFO 11-29 20:52:14 [custom_all_reduce.py:35] Skipping P2P check and trusting the driver's P2P report.
INFO 11-29 20:52:14 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_5b69cbec'), local_subscribe_addr='ipc:///tmp/dbb966a2-08f3-4268-a182-fceab6f37dac', remote_subscribe_addr=None, remote_addr_ipv6=False)
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 1 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
[Gloo] Rank 0 is connected to 1 peer ranks. Expected number of connected peer ranks is : 1
INFO 11-29 20:52:14 [__init__.py:1384] Found nccl from library libnccl.so.2
INFO 11-29 20:52:14 [__init__.py:1384] Found nccl from library libnccl.so.2
INFO 11-29 20:52:14 [pynccl.py:103] vLLM is using nccl==2.27.3
INFO 11-29 20:52:14 [pynccl.py:103] vLLM is using nccl==2.27.3
INFO 11-29 20:52:14 [parallel_state.py:1208] rank 0 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
INFO 11-29 20:52:14 [parallel_state.py:1208] rank 1 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 1, EP rank 1
WARNING 11-29 20:52:14 [topk_topp_sampler.py:66] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
WARNING 11-29 20:52:14 [topk_topp_sampler.py:66] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
(Worker_TP1 pid=1707111) INFO 11-29 20:52:17 [gpu_model_runner.py:2602] Starting to load model /home/drc-whlab/.cache/modelscope/hub/cpatonn-mirror/Qwen3-VL-32B-Instruct-AWQ-4bit...
(Worker_TP0 pid=1707110) INFO 11-29 20:52:17 [gpu_model_runner.py:2602] Starting to load model /home/drc-whlab/.cache/modelscope/hub/cpatonn-mirror/Qwen3-VL-32B-Instruct-AWQ-4bit...
(Worker_TP1 pid=1707111) INFO 11-29 20:52:17 [gpu_model_runner.py:2634] Loading model from scratch...
(Worker_TP0 pid=1707110) INFO 11-29 20:52:17 [gpu_model_runner.py:2634] Loading model from scratch...
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597] WorkerProc failed to start.
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597] Traceback (most recent call last):
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 571, in worker_main
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     worker = WorkerProc(*args, **kwargs)
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 437, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.worker.load_model()
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 213, in load_model
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.model_runner.load_model(eep_scale_up=eep_scale_up)
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597] WorkerProc failed to start.
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 2635, in load_model
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.model = model_loader.load_model(
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597] Traceback (most recent call last):
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                  ^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/model_loader/base_loader.py", line 45, in load_model
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 571, in worker_main
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     model = initialize_model(vllm_config=vllm_config,
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     worker = WorkerProc(*args, **kwargs)
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/model_loader/utils.py", line 63, in initialize_model
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 437, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     return model_class(vllm_config=vllm_config, prefix=prefix)
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.worker.load_model()
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 213, in load_model
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3_vl.py", line 1141, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.model_runner.load_model(eep_scale_up=eep_scale_up)
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.language_model = Qwen3LLMForCausalLM(vllm_config=vllm_config,
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 2635, in load_model
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.model = model_loader.load_model(
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3_vl.py", line 1065, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                  ^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.model = Qwen3LLMModel(vllm_config=vllm_config, prefix=prefix)
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/model_loader/base_loader.py", line 45, in load_model
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     model = initialize_model(vllm_config=vllm_config,
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 201, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/model_loader/utils.py", line 63, in initialize_model
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3_vl.py", line 1002, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     return model_class(vllm_config=vllm_config, prefix=prefix)
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     super().__init__(vllm_config=vllm_config, prefix=prefix)
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 201, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3_vl.py", line 1141, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.language_model = Qwen3LLMForCausalLM(vllm_config=vllm_config,
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 258, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     super().__init__(vllm_config=vllm_config,
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3_vl.py", line 1065, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 201, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.model = Qwen3LLMModel(vllm_config=vllm_config, prefix=prefix)
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 319, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 201, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.start_layer, self.end_layer, self.layers = make_layers(
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                                                     ^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3_vl.py", line 1002, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 630, in make_layers
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     super().__init__(vllm_config=vllm_config, prefix=prefix)
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 201, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 321, in <lambda>
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 258, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     lambda prefix: decoder_layer_type(config=config,
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     super().__init__(vllm_config=vllm_config,
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 201, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 188, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     old_init(self, vllm_config=vllm_config, prefix=prefix, **kwargs)
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.self_attn = Qwen3Attention(
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 319, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                      ^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.start_layer, self.end_layer, self.layers = make_layers(
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 97, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                                                     ^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.qkv_proj = QKVParallelLinear(
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/utils.py", line 630, in make_layers
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                     ^^^^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 918, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     super().__init__(input_size=input_size,
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 321, in <lambda>
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 461, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     lambda prefix: decoder_layer_type(config=config,
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     super().__init__(input_size,
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 280, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 188, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.quant_method = quant_config.get_quant_method(self,
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.self_attn = Qwen3Attention(
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                      ^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py", line 117, in get_quant_method
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/models/qwen3.py", line 97, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     quant_scheme = self.get_scheme(layer=layer, layer_name=prefix)
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.qkv_proj = QKVParallelLinear(
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                     ^^^^^^^^^^^^^^^^^^
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py", line 626, in get_scheme
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 918, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self._check_scheme_supported(scheme.get_min_capability())
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     super().__init__(input_size=input_size,
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py", line 262, in _check_scheme_supported
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 461, in __init__
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     raise RuntimeError(
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     super().__init__(input_size,
(Worker_TP0 pid=1707110) ERROR 11-29 20:52:18 [multiproc_executor.py:597] RuntimeError: ('Quantization scheme is not supported for ', 'the current GPU. Min capability: 80. ', 'Current capability: 75.')
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/linear.py", line 280, in __init__
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self.quant_method = quant_config.get_quant_method(self,
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py", line 117, in get_quant_method
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     quant_scheme = self.get_scheme(layer=layer, layer_name=prefix)
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py", line 626, in get_scheme
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     self._check_scheme_supported(scheme.get_min_capability())
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py", line 262, in _check_scheme_supported
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597]     raise RuntimeError(
(Worker_TP1 pid=1707111) ERROR 11-29 20:52:18 [multiproc_executor.py:597] RuntimeError: ('Quantization scheme is not supported for ', 'the current GPU. Min capability: 80. ', 'Current capability: 75.')
(Worker_TP0 pid=1707110) INFO 11-29 20:52:18 [multiproc_executor.py:558] Parent process exited, terminating worker
(Worker_TP1 pid=1707111) INFO 11-29 20:52:18 [multiproc_executor.py:558] Parent process exited, terminating worker
[rank0]:[W1129 20:52:18.616462815 ProcessGroupNCCL.cpp:1538] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708] EngineCore failed to start.
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708] Traceback (most recent call last):
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]     engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 498, in __init__
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]     super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 83, in __init__
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]     self.model_executor = executor_class(vllm_config)
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 54, in __init__
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]     self._init_executor()
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 106, in _init_executor
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]     self.workers = WorkerProc.wait_for_ready(unready_workers)
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 509, in wait_for_ready
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708]     raise e from None
(EngineCore_DP0 pid=1707016) ERROR 11-29 20:52:21 [core.py:708] Exception: WorkerProc initialization failed due to an exception in a background process. See stack trace for root cause.
(EngineCore_DP0 pid=1707016) Process EngineCore_DP0:
(EngineCore_DP0 pid=1707016) Traceback (most recent call last):
(EngineCore_DP0 pid=1707016)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=1707016)     self.run()
(EngineCore_DP0 pid=1707016)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=1707016)     self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=1707016)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 712, in run_engine_core
(EngineCore_DP0 pid=1707016)     raise e
(EngineCore_DP0 pid=1707016)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
(EngineCore_DP0 pid=1707016)     engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=1707016)                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1707016)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 498, in __init__
(EngineCore_DP0 pid=1707016)     super().__init__(vllm_config, executor_class, log_stats,
(EngineCore_DP0 pid=1707016)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 83, in __init__
(EngineCore_DP0 pid=1707016)     self.model_executor = executor_class(vllm_config)
(EngineCore_DP0 pid=1707016)                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1707016)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 54, in __init__
(EngineCore_DP0 pid=1707016)     self._init_executor()
(EngineCore_DP0 pid=1707016)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 106, in _init_executor
(EngineCore_DP0 pid=1707016)     self.workers = WorkerProc.wait_for_ready(unready_workers)
(EngineCore_DP0 pid=1707016)                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=1707016)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py", line 509, in wait_for_ready
(EngineCore_DP0 pid=1707016)     raise e from None
(EngineCore_DP0 pid=1707016) Exception: WorkerProc initialization failed due to an exception in a background process. See stack trace for root cause.
(APIServer pid=1706856) Traceback (most recent call last):
(APIServer pid=1706856)   File "<frozen runpy>", line 198, in _run_module_as_main
(APIServer pid=1706856)   File "<frozen runpy>", line 88, in _run_code
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1953, in <module>
(APIServer pid=1706856)     uvloop.run(run_server(args))
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
(APIServer pid=1706856)     return __asyncio.run(
(APIServer pid=1706856)            ^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=1706856)     return runner.run(main)
(APIServer pid=1706856)            ^^^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=1706856)     return self._loop.run_until_complete(task)
(APIServer pid=1706856)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
(APIServer pid=1706856)     return await main
(APIServer pid=1706856)            ^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1884, in run_server
(APIServer pid=1706856)     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1902, in run_server_worker
(APIServer pid=1706856)     async with build_async_engine_client(
(APIServer pid=1706856)                ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1706856)     return await anext(self.gen)
(APIServer pid=1706856)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 180, in build_async_engine_client
(APIServer pid=1706856)     async with build_async_engine_client_from_engine_args(
(APIServer pid=1706856)                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1706856)     return await anext(self.gen)
(APIServer pid=1706856)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 225, in build_async_engine_client_from_engine_args
(APIServer pid=1706856)     async_llm = AsyncLLM.from_vllm_config(
(APIServer pid=1706856)                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/utils/__init__.py", line 1572, in inner
(APIServer pid=1706856)     return fn(*args, **kwargs)
(APIServer pid=1706856)            ^^^^^^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 207, in from_vllm_config
(APIServer pid=1706856)     return cls(
(APIServer pid=1706856)            ^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 134, in __init__
(APIServer pid=1706856)     self.engine_core = EngineCoreClient.make_async_mp_client(
(APIServer pid=1706856)                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 102, in make_async_mp_client
(APIServer pid=1706856)     return AsyncMPClient(*client_args)
(APIServer pid=1706856)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 769, in __init__
(APIServer pid=1706856)     super().__init__(
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/core_client.py", line 448, in __init__
(APIServer pid=1706856)     with launch_core_engines(vllm_config, executor_class,
(APIServer pid=1706856)          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/contextlib.py", line 144, in __exit__
(APIServer pid=1706856)     next(self.gen)
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 732, in launch_core_engines
(APIServer pid=1706856)     wait_for_engine_startup(
(APIServer pid=1706856)   File "/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/site-packages/vllm/v1/engine/utils.py", line 785, in wait_for_engine_startup
(APIServer pid=1706856)     raise RuntimeError("Engine core initialization failed. "
(APIServer pid=1706856) RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}
/home/drc-whlab/miniconda3/envs/vllm011cp312/lib/python3.12/multiprocessing/resource_tracker.py:279: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
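
For anyone skimming the long traceback: both workers fail at the same capability gate in vLLM's compressed-tensors quantization path (_check_scheme_supported in compressed_tensors.py). The sketch below is a rough reconstruction of that gate from the log alone, not copied from vLLM source; it also shows why the message prints as a tuple (several strings are passed to RuntimeError):

# Rough reconstruction, from the traceback alone, of the gate in
# vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors.py
# (_check_scheme_supported). Not copied from vLLM source.
import torch

def _check_scheme_supported(min_capability: int) -> None:
    major, minor = torch.cuda.get_device_capability()
    capability = major * 10 + minor       # (7, 5) -> 75 on a Quadro RTX 5000
    if capability < min_capability:       # 75 < 80, so this raises
        # Passing several strings makes the exception render as a tuple,
        # exactly as it appears in the log above.
        raise RuntimeError(
            "Quantization scheme is not supported for ",
            "the current GPU. Min capability: %d. " % min_capability,
            "Current capability: %d." % capability,
        )

# The 4-bit scheme in this checkpoint evidently reports min_capability = 80
# (Ampere), which is why loading fails on compute capability 7.5 hardware.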
