### Checklist

1. I have searched related issues but cannot get the expected help.
2. I have read the FAQ documentation but cannot get the expected help.
3. The bug has not been fixed in the latest version.
### Describe the bug

I am trying to convert a CascadeRCNN model to TensorRT via partitioning (I want to extract the FPN output embeddings at predict time). I am using the deploy.py script. Both partitions are converted to ONNX, but when converting the second partition to TensorRT, I encounter this error:

```
(parseGraph): INVALID_GRAPH: Assertion failed: toposort(graph.node), &topoOrder) && "Failed to sort the model topologically."
```
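For context on what the parser is asserting: TensorRT's ONNX parser requires the graph's node list to already be in topological order, i.e. every node must appear after all nodes that produce its inputs. A minimal pure-Python sketch of that property using Kahn's algorithm (the `toposort` helper and its edge-list input are illustrative only, not mmdeploy or TensorRT code):

```python
from collections import defaultdict, deque


def toposort(nodes, edges):
    """Kahn's algorithm: return nodes in dependency order, or None on a cycle.

    `nodes` is a list of node names; `edges` is a list of (producer, consumer)
    pairs, mirroring tensor dependencies between ONNX graph nodes.
    """
    indegree = {n: 0 for n in nodes}
    successors = defaultdict(list)
    for src, dst in edges:
        successors[src].append(dst)
        indegree[dst] += 1

    # Start from nodes with no unresolved inputs.
    queue = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in successors[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)

    # If some nodes were never reached, the graph has a cycle / broken ordering.
    return order if len(order) == len(nodes) else None
```

If a partitioned ONNX file violates this property, for example because the partition marks cut the graph so that a node ends up referencing inputs whose producers fall outside or after it, the parser fails with the assertion above.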
The partitioning config:

```python
_base_ = ['../_base_/base_tensorrt-fp16_static-1920x1920.py']

onnx_config = dict(
    dynamic_axes={
        'input': {
            0: 'batch',
        },
        'dets': {
            0: 'batch',
            1: 'num_dets',
        },
        'labels': {
            0: 'batch',
            1: 'num_dets',
        },
        'bbox_feats': {
            0: 'batch'
        },
        'cls_score': {
            0: 'batch'
        },
        'bbox_pred': {
            0: 'batch'
        },
    })

partition_config = dict(
    type='two_stage',  # the partition policy name
    apply_marks=True,  # should always be set to True
    partition_cfg=[
        dict(
            save_file='backbone2fpn.onnx',  # filename to save the partitioned onnx model
            start=['detector_forward:input'],  # [mark_name:input/output, ...]
            end=['extract_feat:output'],  # [mark_name:input/output, ...]
            output_names=['feat']  # output names
        ),
        dict(
            save_file='fpn2end.onnx',  # filename to save the partitioned onnx model
            start=['roi_extractor:output'],
            end=['bbox_head_forward:output'],
            # start=['detector_forward:input'],  # [mark_name:input/output, ...]
            # end=['multiclass_nms:output'],  # [mark_name:input/output, ...]
            # output_names=['dets', 'labels'],  # output names
            output_names=['cls', 'bbox']
        ),
    ])
```
### Reproduction

```shell
python tools/deploy.py [...]/libraries/mmdeploy/configs/mmdet/detection/detection_tensorrt_fpn_partitioned_static-1920x1920.py [...]/models/detector/cascade_rcnn_resnext50_32x4d_fpn_ga_gn_fp16_x2_2x1/model.py [...]/models/detector/cascade_rcnn_resnext50_32x4d_fpn_ga_gn_fp16_x2_2x1/checkpoints/epoch_4.pth [...]/gt_images/100.png --work-dir [...]/base_test --device cuda --dump-info
```
### Environment

```
2023-02-06 11:45:32,239 - mmdeploy - INFO - **********Environmental information**********
fatal: not a git repository (or any of the parent directories): .git
2023-02-06 11:45:32,874 - mmdeploy - INFO - sys.platform: linux
2023-02-06 11:45:32,874 - mmdeploy - INFO - Python: 3.8.13 (default, Dec 16 2022, 08:32:30) [GCC 7.5.0]
2023-02-06 11:45:32,875 - mmdeploy - INFO - CUDA available: True
2023-02-06 11:45:32,875 - mmdeploy - INFO - GPU 0: Tesla V100-PCIE-16GB
2023-02-06 11:45:32,875 - mmdeploy - INFO - CUDA_HOME: /usr/local/cuda
2023-02-06 11:45:32,875 - mmdeploy - INFO - NVCC: Cuda compilation tools, release 11.3, V11.3.109
2023-02-06 11:45:32,875 - mmdeploy - INFO - GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
2023-02-06 11:45:32,875 - mmdeploy - INFO - PyTorch: 1.12.0+cu113
2023-02-06 11:45:32,875 - mmdeploy - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.3.2 (built against CUDA 11.5)
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
2023-02-06 11:45:32,875 - mmdeploy - INFO - TorchVision: 0.13.0+cu113
2023-02-06 11:45:32,875 - mmdeploy - INFO - OpenCV: 4.7.0
2023-02-06 11:45:32,875 - mmdeploy - INFO - MMCV: 1.7.0
2023-02-06 11:45:32,875 - mmdeploy - INFO - MMCV Compiler: GCC 9.3
2023-02-06 11:45:32,875 - mmdeploy - INFO - MMCV CUDA Compiler: 11.3
2023-02-06 11:45:32,875 - mmdeploy - INFO - MMDeploy: 0.12.0+
2023-02-06 11:45:32,875 - mmdeploy - INFO -
2023-02-06 11:45:32,875 - mmdeploy - INFO - **********Backend information**********
Traceback (most recent call last):
  File "tools/check_env.py", line 71, in <module>
    check_backend()
  File "tools/check_env.py", line 28, in check_backend
    logger.info(f'onnxruntime: {ort_version}\tops_is_avaliable : '
AttributeError: module 'mmdeploy.apis.onnxruntime' has no attribute 'is_custom_ops_available'
root@5c0c4432940d:/home/python_modules/libraries/mmdeploy# echo $PATH
/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/cuda/bin:/bin:/usr/local/cuda/bin:/bin
root@5c0c4432940d:/home/python_modules/libraries/mmdeploy# echo $LD_LIBRARY_PATH
/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/local/cuda/lib64:/usr/local/cuda/lib64
```
### Error traceback
_No response_