Labels: usage (How to use vllm)
Description
Your current environment
Dear vllm developers,
First of all, on behalf of our university laboratory, I would like to express our highest gratitude and respect for the work you have done. However, for students like us, it is quite challenging to implement the inference test of LLM model PD (Prefill-Decode) disaggregation using the vllm framework. Is there a manual or tutorial available to help beginners carry out the inference test of LLM model PD disaggregation? We offer our sincere thanks and admiration once again.
New we have installed vllm 0.9.2 in our servers.
root@5468a704:/workspace/vllm-0.9.2# pip show vllm
Name: vllm
Version: 0.9.2
Summary: A high-throughput and memory-efficient inference and serving engine for LLMs
Home-page: https://github.com/vllm-project/vllm
Author: vLLM Team
Author-email:
License-Expression: Apache-2.0
Location: /usr/local/lib/python3.12/dist-packages
Requires: aiohttp, blake3, cachetools, cloudpickle, compressed-tensors, depyf, einops, fastapi, filelock, gguf, huggingface-hub, lark, llguidance, lm-format-enforcer, mistral_common, msgspec, ninja, numba, numpy, openai, opencv-python-headless, outlines, partial-json-parser, pillow, prometheus-fastapi-instrumentator, prometheus_client, protobuf, psutil, py-cpuinfo, pybase64, pydantic, python-json-logger, pyyaml, pyzmq, ray, regex, requests, scipy, sentencepiece, setuptools, six, tiktoken, tokenizers, torch, torchaudio, torchvision, tqdm, transformers, typing_extensions, watchfiles, xformers, xgrammar
Required-by:
How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.
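Not an authoritative answer, but as a starting point: the vLLM documentation describes an experimental "disaggregated prefilling" feature where a prefill (KV producer) instance and a decode (KV consumer) instance are launched separately and connected via a KV-transfer connector. The sketch below is based on that documentation and is only an assumption for 0.9.2; the model name, GPU indices, and ports are placeholders, and the exact connector names and JSON fields should be verified against the docs and the `examples/` directory of your installed source tree.

```shell
# Hedged sketch of a 1P1D (one prefill, one decode) setup, per vLLM's
# disaggregated-prefill docs. Verify flags against your 0.9.2 install.

# Prefill instance: produces KV caches (kv_role=kv_producer).
CUDA_VISIBLE_DEVICES=0 vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct \
    --port 8100 \
    --kv-transfer-config \
    '{"kv_connector":"PyNcclConnector","kv_role":"kv_producer","kv_rank":0,"kv_parallel_size":2}'

# Decode instance: consumes KV caches (kv_role=kv_consumer).
CUDA_VISIBLE_DEVICES=1 vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct \
    --port 8200 \
    --kv-transfer-config \
    '{"kv_connector":"PyNcclConnector","kv_role":"kv_consumer","kv_rank":1,"kv_parallel_size":2}'

# A proxy that routes each request through prefill then decode is also needed;
# the vLLM repo ships example proxy/launch scripts for this (look under
# examples/ and benchmarks/ in the source tree you installed).
```

After both instances and the proxy are up, requests go to the proxy's OpenAI-compatible endpoint as with a normal single-instance deployment.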
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.