diff --git a/models/face_detection_yunet/README.md b/models/face_detection_yunet/README.md old mode 100644 new mode 100755 index 1eb32854..89a4eb53 --- a/models/face_detection_yunet/README.md +++ b/models/face_detection_yunet/README.md @@ -8,13 +8,14 @@ Notes: - This model can detect **faces of pixels between around 10x10 to 300x300** due to the training scheme. - For details on training this model, please visit https://github.com/ShiqiYu/libfacedetection.train. - This ONNX model has fixed input shape, but OpenCV DNN infers on the exact shape of input image. See https://github.com/opencv/opencv_zoo/issues/44 for more information. +- Quantization was done via Per Tensor method. Results of accuracy evaluation with [tools/eval](../../tools/eval). | Models | Easy AP | Medium AP | Hard AP | | ----------- | ------- | --------- | ------- | | YuNet | 0.8871 | 0.8710 | 0.7681 | -| YuNet quant | 0.8838 | 0.8683 | 0.7676 | +| YuNet quant | 0.8809 | 0.8626 | 0.7493 | \*: 'quant' stands for 'quantized'. diff --git a/models/face_detection_yunet/face_detection_yunet_2023mar_int8.onnx b/models/face_detection_yunet/face_detection_yunet_2023mar_int8.onnx old mode 100644 new mode 100755 index c10540eb..97c7a2d8 --- a/models/face_detection_yunet/face_detection_yunet_2023mar_int8.onnx +++ b/models/face_detection_yunet/face_detection_yunet_2023mar_int8.onnx @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:321aa5a6afabf7ecc46a3d06bfab2b579dc96eb5c3be7edd365fa04502ad9294 -size 100416 +oid sha256:d0405ddda91a261af340e087219dfbac26cea1575aab2ccb7f2ed41e20708a82 +size 142857 diff --git a/models/face_recognition_sface/README.md b/models/face_recognition_sface/README.md old mode 100644 new mode 100755 index 6fb9c5c1..f2a3d386 --- a/models/face_recognition_sface/README.md +++ b/models/face_recognition_sface/README.md @@ -8,13 +8,14 @@ Note: - Model files encode MobileFaceNet instances trained on the SFace loss function, see the [SFace paper](https://arxiv.org/abs/2205.12010) for reference. - ONNX file conversions from [original code base](https://github.com/zhongyy/SFace) thanks to [Chengrui Wang](https://github.com/crywang). - (As of Sep 2021) Supporting 5-landmark warping for now, see below for details. +- Quantization was done via Per Tensor method. Results of accuracy evaluation with [tools/eval](../../tools/eval). | Models | Accuracy | | ----------- | -------- | | SFace | 0.9940 | -| SFace quant | 0.9932 | +| SFace quant | 0.9928 | \*: 'quant' stands for 'quantized'. diff --git a/models/face_recognition_sface/face_recognition_sface_2021dec_int8.onnx b/models/face_recognition_sface/face_recognition_sface_2021dec_int8.onnx old mode 100644 new mode 100755 index 23086ad9..532378db --- a/models/face_recognition_sface/face_recognition_sface_2021dec_int8.onnx +++ b/models/face_recognition_sface/face_recognition_sface_2021dec_int8.onnx @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:2b0e941e6f16cc048c20aee0c8e31f569118f65d702914540f7bfdc14048d78a -size 9896933 +oid sha256:6ea795662a7996ac3c6b66face4f4af21a692fc01319696085307643ded02cd4 +size 9967287 diff --git a/models/facial_expression_recognition/README.md b/models/facial_expression_recognition/README.md old mode 100644 new mode 100755 index 0b0004a0..a5f489bc --- a/models/facial_expression_recognition/README.md +++ b/models/facial_expression_recognition/README.md @@ -7,6 +7,7 @@ Note: - Progressive Teacher is contributed by [Jing Jiang](https://scholar.google.com/citations?user=OCwcfAwAAAAJ&hl=zh-CN). 
- [MobileFaceNet](https://link.springer.com/chapter/10.1007/978-3-319-97909-0_46) is used as the backbone and the model is able to classify seven basic facial expressions (angry, disgust, fearful, happy, neutral, sad, surprised). - [facial_expression_recognition_mobilefacenet_2022july.onnx](https://github.com/opencv/opencv_zoo/raw/master/models/facial_expression_recognition/facial_expression_recognition_mobilefacenet_2022july.onnx) is implemented thanks to [Chengrui Wang](https://github.com/crywang). +- Quantization was done via Per Channel method. Results of accuracy evaluation on [RAF-DB](http://whdeng.cn/RAF/model1.html). diff --git a/models/facial_expression_recognition/facial_expression_recognition_mobilefacenet_2022july_int8.onnx b/models/facial_expression_recognition/facial_expression_recognition_mobilefacenet_2022july_int8.onnx old mode 100644 new mode 100755 index 06473970..54997313 --- a/models/facial_expression_recognition/facial_expression_recognition_mobilefacenet_2022july_int8.onnx +++ b/models/facial_expression_recognition/facial_expression_recognition_mobilefacenet_2022july_int8.onnx @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:f0d7093aff10e2638c734c5f18a6a7eabd2b9239b20bdb9b8090865a6f69a1ed -size 1364007 +oid sha256:27ee23d3c2717c8627348d37c83bf7ab899a0160ca3344f2c48d0a0d0531290d +size 1446244 diff --git a/models/handpose_estimation_mediapipe/README.md b/models/handpose_estimation_mediapipe/README.md old mode 100644 new mode 100755 index fdd12c31..7784e32e --- a/models/handpose_estimation_mediapipe/README.md +++ b/models/handpose_estimation_mediapipe/README.md @@ -14,6 +14,7 @@ This model is converted from TFlite to ONNX using following tools: **Note**: - The int8-quantized model may produce invalid results due to a significant drop of accuracy. - Visit https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#hands for models of larger scale. +- Quantization was done via Per Tensor method. ## Demo diff --git a/models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2023feb_int8.onnx b/models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2023feb_int8.onnx old mode 100644 new mode 100755 index d6301154..1368009d --- a/models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2023feb_int8.onnx +++ b/models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2023feb_int8.onnx @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:e97bc1fb83b641954d33424c82b6ade719d0f73250bdb91710ecfd5f7b47e321 -size 1167628 +oid sha256:3f0c4bf34f009038d4222055a7a409c56fd7b0045b39f14757c7bb40c1ff075f +size 1167787 diff --git a/models/human_segmentation_pphumanseg/README.md b/models/human_segmentation_pphumanseg/README.md old mode 100644 new mode 100755 index 61e514b8..d239a74f --- a/models/human_segmentation_pphumanseg/README.md +++ b/models/human_segmentation_pphumanseg/README.md @@ -1,6 +1,6 @@ # PPHumanSeg -This model is ported from [PaddleHub](https://github.com/PaddlePaddle/PaddleHub) using [this script from OpenCV](https://github.com/opencv/opencv/blob/master/samples/dnn/dnn_model_runner/dnn_conversion/paddlepaddle/paddle_humanseg.py). +This model is ported from [PaddleHub](https://github.com/PaddlePaddle/PaddleHub) using [this script from OpenCV](https://github.com/opencv/opencv/blob/master/samples/dnn/dnn_model_runner/dnn_conversion/paddlepaddle/paddle_humanseg.py). Quantization was done via Per Tensor method. 
## Demo @@ -47,7 +47,7 @@ Results of accuracy evaluation with [tools/eval](../../tools/eval). | Models | Accuracy | mIoU | | ------------------ | -------------- | ------------- | | PPHumanSeg | 0.9581 | 0.8996 | -| PPHumanSeg quant | 0.4365 | 0.2788 | +| PPHumanSeg quant | 0.7261 | 0.3687 | \*: 'quant' stands for 'quantized'. diff --git a/models/human_segmentation_pphumanseg/human_segmentation_pphumanseg_2023mar_int8.onnx b/models/human_segmentation_pphumanseg/human_segmentation_pphumanseg_2023mar_int8.onnx old mode 100644 new mode 100755 index d1eea02a..aabe496e --- a/models/human_segmentation_pphumanseg/human_segmentation_pphumanseg_2023mar_int8.onnx +++ b/models/human_segmentation_pphumanseg/human_segmentation_pphumanseg_2023mar_int8.onnx @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:510775a9e23c1a53c34013a2fa3ac1906bfd7b789d55c07e6b49f30bb669007d -size 1607872 +oid sha256:d403dc1ec98ee8883ebc02b998539592b11208c5abc0b3fa073858a880d59f90 +size 1659241 diff --git a/models/image_classification_mobilenet/README.md b/models/image_classification_mobilenet/README.md old mode 100644 new mode 100755 index 228a2800..294cca08 --- a/models/image_classification_mobilenet/README.md +++ b/models/image_classification_mobilenet/README.md @@ -6,12 +6,14 @@ MobileNetV2: Inverted Residuals and Linear Bottlenecks Results of accuracy evaluation with [tools/eval](../../tools/eval). +Quantization was done via Per Channel method for V1 and Per Tensor method for V2. + | Models | Top-1 Accuracy | Top-5 Accuracy | | ------------------ | -------------- | -------------- | | MobileNet V1 | 67.64 | 87.97 | -| MobileNet V1 quant | 55.53 | 78.74 | +| MobileNet V1 quant | 40.50 | 53.87 | | MobileNet V2 | 69.44 | 89.23 | -| MobileNet V2 quant | 68.37 | 88.56 | +| MobileNet V2 quant | 58.10 | 87.40 | \*: 'quant' stands for 'quantized'.
diff --git a/models/image_classification_mobilenet/image_classification_mobilenetv1_2022apr_int8.onnx b/models/image_classification_mobilenet/image_classification_mobilenetv1_2022apr_int8.onnx old mode 100644 new mode 100755 index 240b151a..404c0a8e --- a/models/image_classification_mobilenet/image_classification_mobilenetv1_2022apr_int8.onnx +++ b/models/image_classification_mobilenet/image_classification_mobilenetv1_2022apr_int8.onnx @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:ef32077ef2f8f37ddafeeb1d29a0662e7a794d61190552730769a96b7d58e6df -size 4321622 +oid sha256:175961c08525ef63314a8b547b5c502631c7cfd070f401bfa9aebad7cbe49fac +size 4441268 diff --git a/models/image_classification_mobilenet/image_classification_mobilenetv2_2022apr_int8.onnx b/models/image_classification_mobilenet/image_classification_mobilenetv2_2022apr_int8.onnx old mode 100644 new mode 100755 index 63db23c8..8a3b6cb0 --- a/models/image_classification_mobilenet/image_classification_mobilenetv2_2022apr_int8.onnx +++ b/models/image_classification_mobilenet/image_classification_mobilenetv2_2022apr_int8.onnx @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:cc028fe6cae7bc11a4ff53cfc9b79c920e8be65ce33a904ec3e2a8f66d77f95f -size 3655033 +oid sha256:59c59a60cffbadd9c7afd48e637986a8028a53c55311102906e0e4d583755279 +size 3597682 diff --git a/models/license_plate_detection_yunet/README.md b/models/license_plate_detection_yunet/README.md old mode 100644 new mode 100755 index c69e7820..e93e59a1 --- a/models/license_plate_detection_yunet/README.md +++ b/models/license_plate_detection_yunet/README.md @@ -3,6 +3,7 @@ This model is contributed by Dong Xu (徐栋) from [watrix.ai](watrix.ai) (银河水滴). Please note that the model is trained with Chinese license plates, so the detection results of other license plates with this model may be limited. +Quantization was done via Per Tensor method. ## Demo diff --git a/models/license_plate_detection_yunet/license_plate_detection_lpd_yunet_2023mar_int8.onnx b/models/license_plate_detection_yunet/license_plate_detection_lpd_yunet_2023mar_int8.onnx old mode 100644 new mode 100755 index 94c15dc1..e93939af --- a/models/license_plate_detection_yunet/license_plate_detection_lpd_yunet_2023mar_int8.onnx +++ b/models/license_plate_detection_yunet/license_plate_detection_lpd_yunet_2023mar_int8.onnx @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:d67982a014fe93ad04612f565ed23ca010dcb0fd925d880ef0edf9cd7bdf931a -size 1087142 +oid sha256:bf49e4c50ac44412dea5875003139301f9b8ad7116f89e0093ef921642fc591a +size 1115111 diff --git a/models/object_detection_yolox/README.md b/models/object_detection_yolox/README.md old mode 100644 new mode 100755 index 49316c58..01ae2514 --- a/models/object_detection_yolox/README.md +++ b/models/object_detection_yolox/README.md @@ -10,6 +10,7 @@ Key features of the YOLOX object detector Note: - This version of YoloX: YoloX_s +- Quantization was done via Per Tensor method. 
## Demo diff --git a/models/object_detection_yolox/object_detection_yolox_2022nov_int8.onnx b/models/object_detection_yolox/object_detection_yolox_2022nov_int8.onnx old mode 100644 new mode 100755 index af996081..5d616d91 --- a/models/object_detection_yolox/object_detection_yolox_2022nov_int8.onnx +++ b/models/object_detection_yolox/object_detection_yolox_2022nov_int8.onnx @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:01a3b0f400b30bc1e45230e991b2e499ab42622485a330021947333fbaf03935 -size 9079452 +oid sha256:b8c1766fb329f0bd2845a53f5e2ed1f9b555f65ee9a5fd5231ebd367927588ce +size 9143614 diff --git a/models/palm_detection_mediapipe/README.md b/models/palm_detection_mediapipe/README.md old mode 100644 new mode 100755 index 75de371e..6f0ead93 --- a/models/palm_detection_mediapipe/README.md +++ b/models/palm_detection_mediapipe/README.md @@ -9,6 +9,7 @@ SSD Anchors are generated from [GenMediaPipePalmDectionSSDAnchors](https://githu **Note**: - Visit https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#hands for models of larger scale. +- Quantization was done via Per Tensor method. ## Demo diff --git a/models/palm_detection_mediapipe/palm_detection_mediapipe_2023feb_int8.onnx b/models/palm_detection_mediapipe/palm_detection_mediapipe_2023feb_int8.onnx old mode 100644 new mode 100755 index 8e4c39d8..9a29dacd --- a/models/palm_detection_mediapipe/palm_detection_mediapipe_2023feb_int8.onnx +++ b/models/palm_detection_mediapipe/palm_detection_mediapipe_2023feb_int8.onnx @@ -1,3 +1,3 @@ version https://git-lfs.github.com/spec/v1 -oid sha256:9f014de96ef5b6816b3eb9a5fed21a7371ef0f104ea440aa19ce9129fe2af5f6 -size 1157004 +oid sha256:71c438ec3a32b2818673ef30fd5a64b14c011fa1a34f0772744cfcbd43f68908 +size 1241472 diff --git a/models/pose_estimation_mediapipe/README.md b/models/pose_estimation_mediapipe/README.md old mode 100644 new mode 100755 index d7bcbce1..2c6b3fe8 --- a/models/pose_estimation_mediapipe/README.md +++ b/models/pose_estimation_mediapipe/README.md @@ -10,6 +10,7 @@ This model is converted from TFlite to ONNX using following tools: **Note**: - Visit https://github.com/google/mediapipe/blob/master/docs/solutions/models.md#pose for models of larger scale. +- Quantization was done via Per Channel method. ## Demo ### python diff --git a/models/pose_estimation_mediapipe/pose_estimation_mediapipe_2023mar_int8.onnx b/models/pose_estimation_mediapipe/pose_estimation_mediapipe_2023mar_int8.onnx new file mode 100755 index 00000000..65e02b9c --- /dev/null +++ b/models/pose_estimation_mediapipe/pose_estimation_mediapipe_2023mar_int8.onnx @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b8ae936aec501f89334c31b50ab0a6c3abec45a793bc4e465bf767ceb7e2492e +size 1839235 diff --git a/models/text_recognition_crnn/README.md b/models/text_recognition_crnn/README.md old mode 100644 new mode 100755 index 29870da1..c5e8a5d7 --- a/models/text_recognition_crnn/README.md +++ b/models/text_recognition_crnn/README.md @@ -3,6 +3,7 @@ [An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition](https://arxiv.org/abs/1507.05717) Results of accuracy evaluation with [tools/eval](../../tools/eval) at different text recognition datasets. +2021 Sep English model's Quantization was done via Per Channel method. 
| Model name | ICDAR03(%) | IIIT5k(%) | CUTE80(%) | | ------------ | ---------- | --------- | --------- | diff --git a/models/text_recognition_crnn/text_recognition_CRNN_EN_2021sep_int8.onnx b/models/text_recognition_crnn/text_recognition_CRNN_EN_2021sep_int8.onnx new file mode 100755 index 00000000..805572c5 --- /dev/null +++ b/models/text_recognition_crnn/text_recognition_CRNN_EN_2021sep_int8.onnx @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1073218f46f9be0065b81b35ee4cafad86837166063851af01fb20599b4e6f76 +size 16411940 diff --git a/tools/quantize/quantize-ort.py b/tools/quantize/quantize-ort.py old mode 100644 new mode 100755 index aba57f71..a4fcc84b --- a/tools/quantize/quantize-ort.py +++ b/tools/quantize/quantize-ort.py @@ -14,7 +14,7 @@ import onnxruntime from onnxruntime.quantization import quantize_static, CalibrationDataReader, QuantType, QuantFormat, quant_pre_process -from transform import Compose, Resize, CenterCrop, Normalize, ColorConvert, HandAlign +from transform import Compose, Resize, CenterCrop, Normalize, ColorConvert, HandAlign, ImagePad class DataReader(CalibrationDataReader): def __init__(self, model_path, image_dir, transforms, data_dim): @@ -79,7 +79,7 @@ def run(self): quant_pre_process(new_model_path, new_model_path) output_name = '{}_{}.onnx'.format(self.model_path[:-5], self.wt_type) quantize_static(new_model_path, output_name, self.dr, - quant_format=QuantFormat.QOperator, # start from onnxruntime==1.11.0, quant_format is set to QuantFormat.QDQ by default, which performs fake quantization + quant_format=QuantFormat.QDQ, # start from onnxruntime==1.11.0, quant_format is set to QuantFormat.QDQ by default, which performs fake quantization per_channel=self.per_channel, weight_type=self.type_dict[self.wt_type], activation_type=self.type_dict[self.act_type], @@ -91,22 +91,63 @@ def run(self): models=dict( yunet=Quantize(model_path='../../models/face_detection_yunet/face_detection_yunet_2023mar.onnx', calibration_image_dir='../../benchmark/data/face_detection', - transforms=Compose([Resize(size=(160, 120))]), + transforms=Compose([Resize(size=(640, 640))]), nodes_to_exclude=['MaxPool_5', 'MaxPool_18', 'MaxPool_25', 'MaxPool_32'], - ), + ), #COLOR_BGR2RGB sface=Quantize(model_path='../../models/face_recognition_sface/face_recognition_sface_2021dec.onnx', calibration_image_dir='../../benchmark/data/face_recognition', transforms=Compose([Resize(size=(112, 112))])), + # Facial Expression Recognition net + facexpnet=Quantize(model_path='../../models/facial_expression_recognition/facial_expression_recognition_mobilefacenet_2022july.onnx', + calibration_image_dir='../../benchmark/data/facial_expression_recognition/fer_calibration', + transforms=Compose([Resize(size=(112, 112)), + ColorConvert(ctype=cv.COLOR_BGR2RGB), + Normalize(std=[255, 255, 255]) + ])), + # Object Detection nanonet + nanonet=Quantize(model_path='../../models/object_detection_nanodet/object_detection_nanodet_2022nov.onnx', + calibration_image_dir='../../benchmark/data/object_detection', + transforms=Compose([Resize(size=(112, 112))])), + # object_detection_yolox + yolox=Quantize(model_path='../../models/object_detection_yolox/object_detection_yolox_2022nov.onnx', + calibration_image_dir='../../benchmark/data/object_detection', + transforms=Compose([Resize(size=(640, 640))])), + # object_tracking_vittrack + vittrack=Quantize(model_path='../../models/object_tracking_vittrack/object_tracking_vittrack_2023sep.onnx', + 
calibration_image_dir='../../benchmark/data/object_tracking_image', + transforms=Compose([Resize(size=(640, 640))])), + pphumanseg=Quantize(model_path='../../models/human_segmentation_pphumanseg/human_segmentation_pphumanseg_2023mar.onnx', calibration_image_dir='../../benchmark/data/human_segmentation', transforms=Compose([Resize(size=(192, 192))])), + + mobilenetv1=Quantize(model_path='../../models/image_classification_mobilenet/image_classification_mobilenetv1_2022apr.onnx', + calibration_image_dir='../../benchmark/data/image_classification', + transforms=Compose([ + Resize(size=(224, 224)), + Normalize(std=[255, 255, 255]), + Normalize(mean=[0.485, 0.456, 0.406],std=[0.229, 0.224, 0.225]) + ])), + mobilenetv2=Quantize(model_path='../../models/image_classification_mobilenet/image_classification_mobilenetv2_2022apr.onnx', + calibration_image_dir='../../benchmark/data/image_classification', + transforms=Compose([ + Resize(size=(224, 224)), + Normalize(std=[255, 255, 255]), + Normalize(mean=[0.485, 0.456, 0.406],std=[0.229, 0.224, 0.225]) + ])), + ppresnet50=Quantize(model_path='../../models/image_classification_ppresnet/image_classification_ppresnet50_2022jan.onnx', calibration_image_dir='../../benchmark/data/image_classification', transforms=Compose([Resize(size=(224, 224))])), - # TBD: VitTrack youtureid=Quantize(model_path='../../models/person_reid_youtureid/person_reid_youtu_2021nov.onnx', calibration_image_dir='../../benchmark/data/person_reid', transforms=Compose([Resize(size=(128, 256))])), + mppose=Quantize(model_path='../../models/pose_estimation_mediapipe/pose_estimation_mediapipe_2023mar.onnx', + calibration_image_dir='../../benchmark/data/person_detection', + transforms=Compose([Resize(size=(256, 256)), + ColorConvert(ctype=cv.COLOR_BGR2RGB), + Normalize(std=[255, 255, 255]), + ]),data_dim="hwc"), ppocrv3det_en=Quantize(model_path='../../models/text_detection_ppocr/text_detection_en_ppocrv3_2023may.onnx', calibration_image_dir='../../benchmark/data/text', transforms=Compose([Resize(size=(736, 736)), @@ -122,18 +163,17 @@ def run(self): calibration_image_dir='../../benchmark/data/text', transforms=Compose([Resize(size=(100, 32))])), mp_palmdet=Quantize(model_path='../../models/palm_detection_mediapipe/palm_detection_mediapipe_2023feb.onnx', - calibration_image_dir='path/to/dataset', + calibration_image_dir='../../benchmark/data/FreiHAND/evaluation/rgb', transforms=Compose([Resize(size=(192, 192)), Normalize(std=[255, 255, 255]), ColorConvert(ctype=cv.COLOR_BGR2RGB)]), data_dim='hwc'), mp_handpose=Quantize(model_path='../../models/handpose_estimation_mediapipe/handpose_estimation_mediapipe_2023feb.onnx', - calibration_image_dir='path/to/dataset', + calibration_image_dir='../../benchmark/data/FreiHAND/evaluation/rgb', transforms=Compose([HandAlign("mp_handpose"), Resize(size=(224, 224)), Normalize(std=[255, 255, 255]), ColorConvert(ctype=cv.COLOR_BGR2RGB)]), data_dim='hwc'), lpd_yunet=Quantize(model_path='../../models/license_plate_detection_yunet/license_plate_detection_lpd_yunet_2023mar.onnx', calibration_image_dir='../../benchmark/data/license_plate_detection', transforms=Compose([Resize(size=(320, 240))]), - nodes_to_exclude=['MaxPool_5', 'MaxPool_18', 'MaxPool_25', 'MaxPool_32', 'MaxPool_39'], - ), + nodes_to_exclude=['MaxPool_5', 'MaxPool_18', 'MaxPool_25', 'MaxPool_32', 'MaxPool_39'],), ) if __name__ == '__main__':
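For reference, the sketch below shows how the Per Tensor / Per Channel choice mentioned in the README notes above maps onto onnxruntime's static quantization API, in the same spirit as the `quantize_static` call in `tools/quantize/quantize-ort.py`. This is a minimal illustration under stated assumptions, not the repository's tool: the model file names, the input tensor name, the calibration image list, and the preprocessing are placeholders.

```python
# Minimal sketch (not the repo's quantize-ort.py): selecting per-tensor vs
# per-channel static quantization with onnxruntime. File names, the input
# tensor name "input", and the calibration image paths are placeholders.
import cv2 as cv
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                      QuantType, quantize_static)


class ImageFolderReader(CalibrationDataReader):
    """Feeds preprocessed calibration images to quantize_static, one per call."""

    def __init__(self, image_paths, input_name, size):
        self.image_paths = iter(image_paths)
        self.input_name = input_name
        self.size = size  # (width, height) expected by the model

    def get_next(self):
        path = next(self.image_paths, None)
        if path is None:
            return None  # signals the end of the calibration data
        img = cv.resize(cv.imread(path), self.size).astype(np.float32)
        # Add any model-specific normalization here (mean/std, BGR->RGB, ...).
        blob = img.transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> NCHW
        return {self.input_name: blob}


reader = ImageFolderReader(["calib_0.jpg", "calib_1.jpg"],
                           input_name="input", size=(224, 224))
quantize_static("model_fp32.onnx", "model_int8.onnx", reader,
                quant_format=QuantFormat.QDQ,  # QDQ inserts fake-quant nodes, as in this patch
                per_channel=False,             # False -> Per Tensor, True -> Per Channel
                weight_type=QuantType.QInt8,
                activation_type=QuantType.QInt8)
```

With `per_channel=True`, each weight tensor is given one scale/zero-point pair per output channel instead of a single pair for the whole tensor; everything else in the call stays the same.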