@@ -39,9 +39,9 @@ This living user guide outlines a few known **important changes and limitations*
For each item, our progress towards V1 support falls into one of the following states:

- **🚀 Optimized**: Nearly fully optimized, with no further work currently planned.
- - **🟢 Functional**: Fully operational, with ongoing optimizations.
- - **🚧 WIP**: Under active development.
- - **🟡 Planned**: Scheduled for future implementation (some may have open PRs/RFCs).
+ - **🟢 Functional**: Fully operational, with ongoing optimizations.
+ - **🚧 WIP**: Under active development.
+ - **🟡 Planned**: Scheduled for future implementation (some may have open PRs/RFCs).
- **🟠 Delayed**: Temporarily dropped in V1 but planned to be re-introduced later.
- **🔴 Deprecated**: Not planned for V1 unless there is strong demand.
@@ -70,7 +70,7 @@ For each item, our progress towards V1 support falls into one of the following s
| -----------------------------| ------------------------------------------------------------------------------------|
| **Decoder-only Models** | <nobr>🚀 Optimized</nobr> |
| **Encoder-Decoder Models** | <nobr>🟠 Delayed</nobr> |
- | **Embedding Models** | <nobr>🚧 WIP ([PR #16188](https://github.com/vllm-project/vllm/pull/16188))</nobr> |
+ | **Embedding Models** | <nobr>🟢 Functional</nobr> |
| **Mamba Models** | <nobr>🚧 WIP ([PR #19327](https://github.com/vllm-project/vllm/pull/19327))</nobr> |
| **Multimodal Models** | <nobr>🟢 Functional</nobr> |
@@ -80,11 +80,11 @@ vLLM V1 currently excludes model architectures with the `SupportsV0Only` protoco
This corresponds to the V1 column in our [list of supported models][supported-models].

- See below for the status of models that are still not yet supported in V1.
+ See below for the status of models that are not yet supported or have more features planned in V1.

#### Embedding Models

- The initial support will be provided by [PR #16188](https://github.com/vllm-project/vllm/pull/16188).
+ The initial basic support is now functional.

Later, we will consider using [hidden states processor](https://github.com/vllm-project/vllm/issues/12249),
which is based on [global logits processor](https://github.com/vllm-project/vllm/pull/13360)
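
To illustrate what the newly functional embedding support looks like in practice, here is a minimal sketch of running an embedding model on the V1 engine. It assumes the standard `LLM` pooling API (`task="embed"` and `LLM.embed()`) and the `VLLM_USE_V1` environment toggle; the model checkpoint is only an example, and exact defaults may differ between vLLM versions.

```python
import os

# Assumption: opt in to the V1 engine explicitly; recent vLLM releases may
# already default to V1, in which case this is a no-op.
os.environ["VLLM_USE_V1"] = "1"

from vllm import LLM

# Assumption: any embedding/pooling model supported by vLLM works here;
# this checkpoint is only an example.
llm = LLM(model="intfloat/e5-mistral-7b-instruct", task="embed")

prompts = ["Hello, my name is", "The capital of France is"]
outputs = llm.embed(prompts)

for prompt, output in zip(prompts, outputs):
    embedding = output.outputs.embedding  # list[float], one vector per prompt
    print(f"{prompt!r} -> embedding with {len(embedding)} dimensions")
```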