
[Docs] Is there anyone who has tried to transfer mmpose inference model to rknn format? #3196

Open
@yizi72

Description

📚 The doc issue

We are currently studying how to deploy an mmpose algorithm to an RK3588 pad (Android system). The first step we've completed is converting the mmpose inference model to ONNX format and then to RKNN format.

We used torch.onnx.export to obtain an ONNX model, which has been verified to produce correct results. Then the following code is used to get the RKNN format:

```python
from rknn.api import RKNN

if __name__ == '__main__':
    rknn = RKNN(verbose=True, verbose_file="log.txt")

    # Normalization values must match mmpose's preprocessing (ImageNet mean/std)
    rknn.config(
        mean_values=[[123.675, 116.28, 103.53]],
        std_values=[[58.395, 57.12, 57.375]],
        target_platform="rk3588",
        optimization_level=1
    )

    rknn.load_onnx(
        model="path/to/mmpose.onnx",
        input_size_list=[[1, 3, 512, 512]],
    )

    rknn.build(do_quantization=False)

    rknn.export_rknn(export_path="path/to/mmpose.rknn")

    rknn.release()
```
The transfer process completes cleanly, and no errors or warnings appear in log.txt. However, when the mmpose.rknn model is verified through a Python program, a very odd result is produced: 4 keypoints are found, but they are all located at the same position. We've checked the postprocessing code and it is correct.
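For context, the usual heatmap-to-keypoint decoding is a per-channel argmax. If the backend happens to return the output tensor in a different memory layout than expected (e.g. NHWC instead of NCHW, or flattened), this decode can silently produce identical coordinates for every keypoint. A minimal NumPy sketch of the decode step (the shapes and peak positions here are illustrative assumptions, not the actual model's values):

```python
import numpy as np

def decode_heatmaps(heatmaps):
    """Decode (K, H, W) heatmaps into K (x, y) keypoints via per-channel argmax."""
    num_kpts, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(num_kpts, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)  # shape (K, 2), in heatmap coordinates

# Sanity check with synthetic heatmaps: one distinct peak per channel
hm = np.zeros((4, 64, 64), dtype=np.float32)
peaks = [(10, 20), (30, 5), (50, 40), (7, 60)]  # (x, y) per keypoint
for k, (x, y) in enumerate(peaks):
    hm[k, y, x] = 1.0

print(decode_heatmaps(hm))
```

Running the same decode on the raw RKNN output after explicitly transposing it to (K, H, W) is one quick way to rule out a layout mismatch.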

Has anyone tried similar work and could share some experience with us? Thanks!
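One way to narrow this down is to compare the raw outputs of the ONNX and RKNN models on the same preprocessed input, before any postprocessing. A small NumPy helper for that comparison (how the two arrays are obtained, e.g. via onnxruntime and the RKNN runtime, depends on your setup; the tolerance is an arbitrary starting point):

```python
import numpy as np

def compare_outputs(onnx_out, rknn_out, atol=1e-2):
    """Return max abs difference, cosine similarity, and whether they agree within atol."""
    a = np.asarray(onnx_out, dtype=np.float32).ravel()
    b = np.asarray(rknn_out, dtype=np.float32).ravel()
    max_diff = float(np.max(np.abs(a - b)))
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max_diff, cos, max_diff <= atol
```

If the cosine similarity is high but the decoded coordinates still collapse to one point, the problem is more likely in the output layout or the decoding than in the conversion itself.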

Suggest a potential alternative/fix

No response
