Commit 753915b

Merge branch 'main' into misc_fixes

2 parents d5111d3 + a8ecd79

File tree: 8 files changed (+23, -46 lines)


.github/scripts/install-torch-tensorrt.sh

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ PLATFORM=$(python -c "import sys; print(sys.platform)")
 
 # Install all the dependencies required for Torch-TensorRT
 pip install --pre ${TORCH_TORCHVISION} --index-url ${INDEX_URL}
-pip install --pre -r ${PWD}/tests/py/requirements.txt --use-deprecated legacy-resolver
+pip install --pre -r ${PWD}/tests/py/requirements.txt
 
 # Install Torch-TensorRT
 if [[ ${PLATFORM} == win32 ]]; then

docsrc/RELEASE_CHECKLIST.md

Lines changed: 1 addition & 2 deletions
@@ -63,9 +63,8 @@ will result in a minor version bump and significant bug fixes will result in a p
 - Paste in Milestone information and Changelog information into release notes
 - Generate libtorchtrt.tar.gz for the following platforms:
     - x86_64 cxx11-abi
-    - x86_64 pre-cxx11-abi
     - TODO: Add cxx11-abi build for aarch64 when a manylinux container for aarch64 exists
-- Generate Python packages for Python 3.6/3.7/3.8/3.9 for x86_64
+- Generate Python packages for supported Python versions for x86_64
     - TODO: Build a manylinux container for aarch64
     - `docker run -it -v$(pwd)/..:/workspace/Torch-TensorRT build_torch_tensorrt_wheel /bin/bash /workspace/Torch-TensorRT/py/build_whl.sh` generates all wheels
     - To build container `docker build -t build_torch_tensorrt_wheel .`

docsrc/getting_started/installation.rst

Lines changed: 5 additions & 34 deletions
@@ -203,37 +203,20 @@ To build with debug symbols use the following command
 
 A tarball with the include files and library can then be found in ``bazel-bin``
 
-Pre CXX11 ABI Build
-............................
-
-To build using the pre-CXX11 ABI use the ``pre_cxx11_abi`` config
-
-.. code-block:: shell
-
-    bazel build //:libtorchtrt --config pre_cxx11_abi -c [dbg/opt]
-
-A tarball with the include files and library can then be found in ``bazel-bin``
-
-
 .. _abis:
 
 Choosing the Right ABI
 ^^^^^^^^^^^^^^^^^^^^^^^^
 
-Likely the most complicated thing about compiling Torch-TensorRT is selecting the correct ABI. There are two options
-which are incompatible with each other, pre-cxx11-abi and the cxx11-abi. The complexity comes from the fact that while
-the most popular distribution of PyTorch (wheels downloaded from pytorch.org/pypi directly) use the pre-cxx11-abi, most
-other distributions you might encounter (e.g. ones from NVIDIA - NGC containers, and builds for Jetson as well as certain
-libtorch builds and likely if you build PyTorch from source) use the cxx11-abi. It is important you compile Torch-TensorRT
-using the correct ABI to function properly. Below is a table with general pairings of PyTorch distribution sources and the
-recommended commands:
+In older versions, there were two ABI options for compiling Torch-TensorRT which were incompatible with each other:
+pre-cxx11-abi and cxx11-abi. The complexity came from the different distributions of PyTorch. Fortunately, PyTorch
+has switched to the cxx11-abi for all its distributions. Below is a table with general pairings of PyTorch distribution
+sources and the recommended commands:
 
 +-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
 | PyTorch Source                                              | Recommended Python Compilation Command                  | Recommended C++ Compilation Command                                |
 +=============================================================+==========================================================+====================================================================+
-| PyTorch whl file from PyTorch.org                           | python -m pip install .                                 | bazel build //:libtorchtrt -c opt \-\-config pre_cxx11_abi        |
-+-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
-| libtorch-shared-with-deps-*.zip from PyTorch.org            | python -m pip install .                                 | bazel build //:libtorchtrt -c opt \-\-config pre_cxx11_abi        |
+| PyTorch whl file from PyTorch.org                           | python -m pip install .                                 | bazel build //:libtorchtrt -c opt                                 |
 +-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
 | libtorch-cxx11-abi-shared-with-deps-*.zip from PyTorch.org  | python setup.py bdist_wheel                             | bazel build //:libtorchtrt -c opt                                 |
 +-------------------------------------------------------------+----------------------------------------------------------+--------------------------------------------------------------------+
@@ -339,10 +322,6 @@ To build natively on aarch64-linux-gnu platform, configure the ``WORKSPACE`` wit
 In the case that you installed with ``sudo pip install`` this will be ``/usr/local/lib/python3.8/dist-packages/torch``.
 In the case you installed with ``pip install --user`` this will be ``$HOME/.local/lib/python3.8/site-packages/torch``.
 
-In the case you are using NVIDIA compiled pip packages, set the path for both libtorch sources to the same path. This is because unlike
-PyTorch on x86_64, NVIDIA aarch64 PyTorch uses the CXX11-ABI. If you compiled for source using the pre_cxx11_abi and only would like to
-use that library, set the paths to the same path but when you compile make sure to add the flag ``--config=pre_cxx11_abi``
-
 .. code-block:: shell
 
     new_local_repository(
@@ -351,12 +330,6 @@ use that library, set the paths to the same path but when you compile make sure
     build_file = "third_party/libtorch/BUILD"
 )
 
-new_local_repository(
-    name = "libtorch_pre_cxx11_abi",
-    path = "/usr/local/lib/python3.8/dist-packages/torch",
-    build_file = "third_party/libtorch/BUILD"
-)
-
 
 Compile C++ Library and Compiler CLI
 ........................................................
@@ -385,6 +358,4 @@ Compile the Python API using the following command from the ``//py`` directory:
 
     python3 setup.py install
 
-If you have a build of PyTorch that uses Pre-CXX11 ABI drop the ``--use-pre-cxx11-abi`` flag
-
 If you are building for Jetpack 4.5 add the ``--jetpack-version 5.0`` flag
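
Note: the rewritten paragraph above states that current PyTorch distributions all use the cxx11 ABI. A minimal sketch for confirming this against a local install (assuming only that `torch` is importable; `compiled_with_cxx11_abi` is a standard PyTorch helper):

    import torch

    # True means this PyTorch build uses the cxx11 ABI, matching the
    # assumption the updated installation docs now make.
    print(torch.compiled_with_cxx11_abi())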

docsrc/user_guide/runtime.rst

Lines changed: 0 additions & 2 deletions
@@ -22,8 +22,6 @@ link ``libtorchtrt_runtime.so`` in your deployment programs or use ``DL_OPEN`` o
 you can load the runtime with ``torch.ops.load_library("libtorchtrt_runtime.so")``. You can then continue to use
 programs just as you would otherwise via PyTorch API.
 
-.. note:: If you are using the standard distribution of PyTorch in Python on x86, likely you will need the pre-cxx11-abi variant of ``libtorchtrt_runtime.so``, check :ref:`Installation` documentation for more details.
-
 .. note:: If you are linking ``libtorchtrt_runtime.so``, likely using the following flags will help ``-Wl,--no-as-needed -ltorchtrt -Wl,--as-needed`` as there's no direct symbol dependency to anything in the Torch-TensorRT runtime for most Torch-TensorRT runtime applications
 
 An example of how to use ``libtorchtrt_runtime.so`` can be found here: https://github.com/pytorch/TensorRT/tree/master/examples/torchtrt_runtime_example
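
Note: the retained text above describes runtime-only deployment. A minimal Python sketch of that flow, assuming a Torch-TensorRT program was previously saved with `torch.jit.save` (both file paths below are placeholders):

    import torch

    # Load only the Torch-TensorRT runtime ops; no compiler components are
    # required for deployment.
    torch.ops.load_library("libtorchtrt_runtime.so")

    # Deserialize and execute the previously compiled program through the
    # ordinary PyTorch API.
    trt_module = torch.jit.load("trt_compiled_module.ts")
    output = trt_module(torch.randn(1, 3, 224, 224).cuda())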

packaging/driver_upgrade.bat

Lines changed: 5 additions & 6 deletions
@@ -1,10 +1,9 @@
-REM Source: https://github.com/pytorch/builder/blob/4e109742d88ff3c85e77d60bc4d90d229d7f6afe/windows/internal/driver_update.bat
-
-set "DRIVER_DOWNLOAD_LINK=https://ossci-windows.s3.amazonaws.com/528.89-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe"
-curl --retry 3 -kL %DRIVER_DOWNLOAD_LINK% --output 528.89-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe
+set WIN_DRIVER_VN=528.89
+set "DRIVER_DOWNLOAD_LINK=https://ossci-windows.s3.amazonaws.com/%WIN_DRIVER_VN%-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe"
+curl --retry 3 -kL %DRIVER_DOWNLOAD_LINK% --output %WIN_DRIVER_VN%-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe
 if errorlevel 1 exit /b 1
 
-start /wait 528.89-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe -s -noreboot
+start /wait %WIN_DRIVER_VN%-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe -s -noreboot
 if errorlevel 1 exit /b 1
 
-del 528.89-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe || ver > NUL
+del %WIN_DRIVER_VN%-data-center-tesla-desktop-winserver-2016-2019-2022-dch-international.exe || ver > NUL

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -87,7 +87,7 @@ dev = [
 torchvision = [
     "torchvision",
 ] #Leaving torchvisions dependency unconstrained so uv can just install something that should work for the torch we have. TV's on PyT makes it hard to put version constrains in
-quantization = ["nvidia-modelopt[deploy,hf,torch]>=0.17.0"]
+quantization = ["nvidia-modelopt[all]>=0.27.1"]
 monitoring-tools = ["rich>=13.7.1"]
 jupyter = ["rich[jupyter]>=13.7.1"]
 distributed = ["tensorrt-llm>=0.16.0"]
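
Note: the `quantization` extra now requires `nvidia-modelopt[all]>=0.27.1`. A quick environment sanity check (a sketch, assuming the `packaging` module is available, as it is in most pip-based setups):

    from importlib.metadata import version

    from packaging.version import Version

    # "nvidia-modelopt" is the distribution name declared in pyproject.toml.
    installed = Version(version("nvidia-modelopt"))
    assert installed >= Version("0.27.1"), f"nvidia-modelopt {installed} is too old"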

tests/py/dynamo/runtime/test_000_compilation_settings.py

Lines changed: 8 additions & 0 deletions
@@ -1,6 +1,10 @@
+import unittest
+
+import tensorrt as trt
 import torch
 import torch_tensorrt
 from torch.testing._internal.common_utils import TestCase, run_tests
+from torch_tensorrt.dynamo.utils import is_tegra_platform
 
 from ..testing_utilities import DECIMALS_OF_AGREEMENT
 
@@ -53,6 +57,10 @@ def forward(self, x):
         )
         torch._dynamo.reset()
 
+    @unittest.skipIf(
+        is_tegra_platform() and trt.__version__ > "10.8",
+        "DLA is not supported on Jetson platform starting TRT 10.8",
+    )
     def test_dla_args(self):
         class AddSoftmax(torch.nn.Module):
             def forward(self, x):
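
Note: `trt.__version__ > "10.8"` in the new skip condition is a lexicographic string comparison, which works for these values but would misorder e.g. "10.10" against "10.8". A sketch of a numeric alternative (not the test's actual code) using `packaging.version`:

    import tensorrt as trt
    from packaging.version import Version

    def newer_than_10_8() -> bool:
        # Version parses components numerically, so "10.10" > "10.8" holds.
        return Version(trt.__version__) > Version("10.8")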

tests/py/requirements.txt

Lines changed: 2 additions & 0 deletions
@@ -10,3 +10,5 @@ pyyaml
 timm>=1.0.3
 flashinfer-python; python_version < "3.13"
 transformers==4.49.0
+nvidia-modelopt[all]~=0.27.0; python_version >'3.9' and python_version <'3.13'
+--extra-index-url https://pypi.nvidia.com
