### 🐛 Describe the bug
Using `--kv-cache-dtype fp8` fits twice as many tokens in the KV cache. However, the mechanism that calculates the maximum number of tokens does not take this into account and only allows a maximum model length as if the `kv-cache-dtype` flag were not set.
With `--kv-cache-dtype fp8` and `--max-model-len 100000`:

```
Available KV cache memory: 10.55 GiB
GPU KV cache size: 230,528 tokens
[kv_cache_utils.py:868] Maximum concurrency for 100,000 tokens per request: 2.31x
```
With `--kv-cache-dtype fp8` and `--max-model-len 221000`:

```
ValueError: To serve at least one request with the models's max seq len (221000), (10.12 GiB KV cache is needed, which is larger than the available KV cache memory (5.59 GiB). Based on the available memory, the estimated maximum model length is 122080. Try increasing gpu_memory_utilization or decreasing max_model_len when initializing the engine.
```
It's strange that this flag changes the amount of available memory.
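For reference, a back-of-envelope sketch of the KV cache arithmetic (the layer/head/dim numbers below are made up for illustration, not taken from this model): halving the element size roughly doubles how many tokens fit in the same memory, so the max-model-len feasibility check should use the fp8 per-token footprint.

```python
# Rough KV cache sizing sketch. The model shape below (32 layers,
# 8 KV heads, head_dim 128) is hypothetical, not from this report.
GIB = 1024**3

def kv_bytes_per_token(num_layers: int, num_kv_heads: int,
                       head_dim: int, dtype_bytes: int) -> int:
    # Factor of 2 accounts for both the key and the value tensor.
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes

fp16_per_token = kv_bytes_per_token(32, 8, 128, dtype_bytes=2)  # 128 KiB
fp8_per_token = kv_bytes_per_token(32, 8, 128, dtype_bytes=1)   # 64 KiB

available = 10.55 * GIB  # "Available KV cache memory" from the first run
print(int(available // fp16_per_token))  # ~86k tokens
print(int(available // fp8_per_token))   # ~173k tokens, twice as many
```

If the feasibility check used an fp16-style per-token footprint while the cache is actually allocated at fp8, it would reject max-model-len values that in fact fit, which would be consistent with the behavior above.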
### Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.