
Commit 50a8085

fix broken links
* Add build instructions for TRT-RTX (only Windows for now)
* Fix broken link
1 parent 94804de commit 50a8085

File tree

2 files changed: +17 −2 lines


docs/build/eps.md

Lines changed: 15 additions & 0 deletions
@@ -235,6 +235,21 @@ These instructions are for the latest [JetPack SDK](https://developer.nvidia.com
 * For a portion of Jetson devices like the Xavier series, higher power mode uses more cores (up to 6) to compute but consumes more resources when building ONNX Runtime. Set `--parallel 1` in the build command if an OOM occurs and the system hangs.

+## TensorRT-RTX
+
+See more information on the NV TensorRT RTX Execution Provider [here](../execution-providers/TensorRTRTX-ExecutionProvider.md).
+
+### Prerequisites
+{: .no_toc }
+
+* Follow the [instructions for the CUDA execution provider](#cuda) to install CUDA and set up environment variables.
+* Install TensorRT for RTX from nvidia.com (TODO: add link when available)
+
+### Build Instructions
+{: .no_toc }
+`build.bat --config Release --parallel 32 --build_dir _build --build_shared_lib --use_nv_tensorrt_rtx --tensorrt_home "C:\dev\TensorRT-RTX-1.1.0.3" --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.9" --cmake_generator "Visual Studio 17 2022" --use_vcpkg`
+Replace `--tensorrt_home` and `--cuda_home` with the correct paths to your TensorRT-RTX and CUDA installations.
 ## oneDNN

 See more information on oneDNN (formerly DNNL) [here](../execution-providers/oneDNN-ExecutionProvider.md).
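After a successful build, the resulting package can be sanity-checked from Python. A minimal sketch, assuming the EP registers under the name `NvTensorRTRTXExecutionProvider` — verify the exact string against `ort.get_available_providers()` for your build:

```python
# Sketch: check whether the NV TensorRT RTX EP is available in this build.
# The provider name "NvTensorRTRTXExecutionProvider" is an assumption;
# compare against ort.get_available_providers() for your own build.
try:
    import onnxruntime as ort
    providers = ort.get_available_providers()
except ImportError:  # onnxruntime not installed; degrade gracefully
    providers = []

print("NvTensorRTRTXExecutionProvider available:",
      "NvTensorRTRTXExecutionProvider" in providers)
```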

docs/execution-providers/TensorRTRTX-ExecutionProvider.md

Lines changed: 2 additions & 2 deletions
@@ -29,7 +29,7 @@ Currently TensorRT RTX supports RTX GPUs from Ampere or later architectures. Sup
 Please select the Nvidia TensorRT RTX version of ONNX Runtime: https://onnxruntime.ai/docs/install. (TODO!)

 ## Build from source
-See [Build instructions](../build/eps.md#tensorrtrtx). (TODO!)
+See [Build instructions](../build/eps.md#TensorRT-RTX).

 ## Requirements

@@ -207,7 +207,7 @@ TensorRT RTX configurations can be set by execution provider options. It's usefu
 * The format of the profile shapes is `input_tensor_1:dim_1xdim_2x...,input_tensor_2:dim_3xdim_4x...,...`
 * These three flags must all be provided to enable the explicit profile shapes feature.
 * Note that multiple TensorRT RTX profiles can be enabled by passing multiple shapes for the same input tensor.
-* Check [Explicit shape range for dynamic shape input](#explicit-shape-range-for-dynamic-shape-input) and TRT doc [optimization profiles](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#opt_profiles) for more details.
+* Check the TensorRT doc on [optimization profiles](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#opt_profiles) for more details.

 ## NV TensorRT RTX EP Caches
 There are two major TRT RTX EP caches:
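The profile-shape string format described in the hunk above is straightforward to parse. A minimal sketch (the helper `parse_profile_shapes` is hypothetical, not part of the EP API) that also collects multiple profiles for the same input tensor:

```python
def parse_profile_shapes(spec: str) -> dict:
    """Parse 'name:dim_1xdim_2x...,name:...' profile-shape strings.

    A tensor name may appear more than once (one entry per profile),
    so each name maps to a list of shapes.
    """
    shapes: dict = {}
    for entry in spec.split(","):
        # Split on the last ':' so names containing ':' would still work.
        name, _, dims = entry.rpartition(":")
        shapes.setdefault(name, []).append([int(d) for d in dims.split("x")])
    return shapes

print(parse_profile_shapes("input_ids:1x128,input_ids:1x512,mask:1x128"))
```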

0 commit comments
