File tree: 2 files changed, +7 −5 lines changed
@@ -129,8 +129,7 @@ class LLM:
         compilation_config: Either an integer or a dictionary. If it is an
             integer, it is used as the level of compilation optimization. If it
             is a dictionary, it can specify the full compilation configuration.
-        **kwargs: Arguments for [EngineArgs][vllm.EngineArgs]. (See
-            [engine-args][])
+        **kwargs: Arguments for [`EngineArgs`][vllm.EngineArgs].

     Note:
         This class is intended to be used for offline inference. For online
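The hunk above moves the docstring from MyST-style roles to mkdocstrings-style Markdown cross-references, where a bracketed title followed by a bracketed identifier links to an API object by its import path. A minimal sketch of the resulting docstring style, using a hypothetical `generate` function that is not part of this PR:

```python
def generate(prompt: str, **kwargs) -> str:
    """Generate completions for a prompt.

    Args:
        prompt: The input text.
        **kwargs: Arguments for [`EngineArgs`][vllm.EngineArgs].
    """
    # Hypothetical stub; only the docstring style matters here.
    # When the docs are built, mkdocstrings resolves the bracketed
    # identifier (vllm.EngineArgs) to that object's documentation page.
    raise NotImplementedError
```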
@@ -494,7 +493,8 @@ def collective_rpc(self,
                 `self` argument, in addition to the arguments passed in `args`
                 and `kwargs`. The `self` argument will be the worker object.
             timeout: Maximum time in seconds to wait for execution. Raises a
-                {exc}`TimeoutError` on timeout. `None` means wait indefinitely.
+                [`TimeoutError`][TimeoutError] on timeout. `None` means wait
+                indefinitely.
             args: Positional arguments to pass to the worker method.
             kwargs: Keyword arguments to pass to the worker method.
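The `timeout` contract documented above (raise `TimeoutError` on expiry, `None` waits indefinitely) can be illustrated with a self-contained sketch. This uses the standard library's `concurrent.futures` rather than vLLM's worker machinery, and `slow_worker_method` is a hypothetical stand-in for a method run on a worker:

```python
import concurrent.futures
import time

def slow_worker_method() -> str:
    # Hypothetical stand-in for a method executed on a worker.
    time.sleep(0.2)
    return "done"

with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(slow_worker_method)
    try:
        # A timeout shorter than the work raises TimeoutError,
        # mirroring the contract documented for collective_rpc.
        future.result(timeout=0.05)
    except concurrent.futures.TimeoutError:
        print("timed out")
    # timeout=None waits indefinitely for the result.
    assert future.result(timeout=None) == "done"
```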
@@ -582,7 +582,8 @@ def _tokenize_prompt_input(
         add_special_tokens: bool = True,
     ) -> TextTokensPrompt:
         """
-        A simpler implementation of {meth}`_tokenize_prompt_input_or_inputs`
+        A simpler implementation of
+        [`_tokenize_prompt_input_or_inputs`][vllm.entrypoints.openai.serving_engine.OpenAIServing._tokenize_prompt_input_or_inputs]
         that assumes single input.
         """
         return next(
@@ -603,7 +604,8 @@ def _tokenize_prompt_inputs(
         add_special_tokens: bool = True,
     ) -> Iterator[TextTokensPrompt]:
         """
-        A simpler implementation of {meth}`_tokenize_prompt_input_or_inputs`
+        A simpler implementation of
+        [`_tokenize_prompt_input_or_inputs`][vllm.entrypoints.openai.serving_engine.OpenAIServing._tokenize_prompt_input_or_inputs]
         that assumes multiple inputs.
         """
         for text in prompt_inputs:
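The two docstrings edited above describe the same pattern: a singular helper that delegates to its plural counterpart and takes the first result via `next(...)`. A self-contained sketch of that wrapper pattern, with simplified names and a toy tokenizer in place of vLLM's actual implementation:

```python
from typing import Iterator, List

def tokenize_inputs(prompt_inputs: List[str]) -> Iterator[List[int]]:
    """Tokenize multiple text inputs (toy 'tokenizer': word lengths)."""
    for text in prompt_inputs:
        yield [len(word) for word in text.split()]

def tokenize_input(prompt_input: str) -> List[int]:
    """Singular wrapper: delegate to the plural helper, take the first item."""
    return next(iter(tokenize_inputs([prompt_input])))

print(tokenize_input("hello vllm"))  # [5, 4]
```

Keeping only the plural version's logic and deriving the singular one from it avoids duplicating the tokenization code paths.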