• Converting the MediaPipe hand gesture model to RKNN


    Running the MediaPipe hand model on a low-end device such as the RK3568 results in fairly high GPU usage, so we try inference on the RK3568 NPU instead.

    Below, the TFLite model is converted to an RKNN model.

    Hand landmark model (figure)

    Prepare an environment with rknn-toolkit2 1.3.0 or later.

    Go to the examples directory: t@ubuntu:~/rknn/rknn-toolkit2/examples/tflite$

    Convert the hands model by following the mobilenet_v1 demo.

    Copy the mobilenet_v1 demo and rename the copy mediapipe_hand:

    cd ~/rknn/rknn-toolkit2/examples/tflite/mediapipe_hand

    Modify test.py to load the model and the input image:

    import numpy as np
    import cv2
    from rknn.api import RKNN
    # import tensorflow.compat.v1 as tf  # use the TF 1.x API
    # tf.disable_v2_behavior()

    # Helper carried over from the mobilenet_v1 demo: prints the top-5 values
    # of the first output tensor.
    def show_outputs(outputs):
        output = outputs[0][0]
        output_sorted = sorted(output, reverse=True)
        top5_str = 'mobilenet_v1\n-----TOP 5-----\n'
        for i in range(5):
            value = output_sorted[i]
            index = np.where(output == value)
            for j in range(len(index)):
                if (i + j) >= 5:
                    break
                if value > 0:
                    topi = '{}: {}\n'.format(index[j], value)
                else:
                    topi = '-1: 0.0\n'
                top5_str += topi
        print(top5_str)

    if __name__ == '__main__':
        # Create RKNN object
        rknn = RKNN(verbose=True)

        # Pre-process config
        print('--> Config model')
        rknn.config(mean_values=[128, 128, 128], std_values=[128, 128, 128])
        print('done')

        # Load model
        print('--> Loading model')
        ret = rknn.load_tflite(model='hand_landmark_lite.tflite')
        # ret = rknn.load_tflite(model='palm_detection_lite.tflite')
        if ret != 0:
            print('Load model failed!')
            exit(ret)
        print('done')

        # Build model
        print('--> Building model')
        ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
        if ret != 0:
            print('Build model failed!')
            exit(ret)
        print('done')

        # Export rknn model
        print('--> Export rknn model')
        ret = rknn.export_rknn('./hands.rknn')
        if ret != 0:
            print('Export rknn model failed!')
            exit(ret)
        print('done')

        # Set inputs
        img = cv2.imread('./hand_1.jpg')
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = cv2.resize(img, (224, 224))
        img = np.expand_dims(img, 0)

        # Init runtime environment
        print('--> Init runtime environment')
        ret = rknn.init_runtime()
        if ret != 0:
            print('Init runtime environment failed!')
            exit(ret)
        print('done')

        # Inference
        print('--> Running model')
        outputs = rknn.inference(inputs=[img])
        print('--> outputs:', outputs)
        np.save('./tflite_hands.npy', outputs[0])
        show_outputs(outputs)
        print('done')

        rknn.release()

    Mind the input image size (224x224 here), and use the latest TensorFlow version where possible.
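    As a side note on the preprocessing above: passing mean_values/std_values of 128 to rknn.config tells the RKNN runtime to normalize each uint8 pixel as (pixel - 128) / 128, mapping values into roughly [-1, 1]. A minimal NumPy sketch of that normalization (the helper name is illustrative, not part of the toolkit):

    ```python
    import numpy as np

    # Illustrative helper: replicate the (pixel - mean) / std normalization
    # implied by rknn.config(mean_values=[128]*3, std_values=[128]*3).
    def rknn_normalize(img_u8, mean=128.0, std=128.0):
        return (img_u8.astype(np.float32) - mean) / std

    img = np.array([[0, 128, 255]], dtype=np.uint8)
    print(rknn_normalize(img))  # [[-1.  0.  0.9921875]]
    ```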

    Simulated inference results

    Data format:

    [array([[ 75.521454, 161.8317 , 0. , 110.37751 , 165.15132 ,
    -6.639249, 141.91394 , 159.34198 , -11.618686, 170.13075 ,
    151.04291 , -15.768216, 190.0485 , 141.91394 , -19.917747,
    131.95508 , 99.58873 , -14.108404, 158.51207 , 63.90277 ,
    -18.257935, 174.28029 , 45.644836, -19.917747, 186.72887 ,
    31.536432, -20.747652, 112.86723 , 87.97005 , -12.448591,
    136.1046 , 48.134556, -15.768216, 151.04291 , 24.067278,
    -17.428028, 164.32141 , 7.469155, -17.428028, 96.26911 ,
    85.48033 , -9.128967, 109.54761 , 47.30465 , -12.448591,
    120.33639 , 25.727089, -13.278498, 130.29526 , 10.788779,
    -13.278498, 81.330795, 87.97005 , -7.469155, 83.82052 ,
    55.60371 , -9.128967, 90.45976 , 38.175682, -9.958874,
    97.92892 , 24.897182, -9.958874]], dtype=float32), array([[0.9889264]], dtype=float32), array([[0.4299424]], dtype=float32), array([[-0.03374431, 0.06308718, 0.01027001, 0.00293429, 0.0572186 ,
    0.02689764, 0.02200716, 0.04645955, 0.01369334, 0.04059098,
    0.04743765, 0.01075905, 0.06210908, 0.03863478, -0.00978096,
    0.01907287, -0.0009781 , -0.0009781 , 0.03521145, -0.01075905,
    -0.01075905, 0.04010193, -0.0224962 , 0.00489048, 0.05672956,
    -0.03618954, 0.02738668, 0.00342334, -0.00244524, -0.00391238,
    0.01516049, -0.02689764, -0.00146714, 0.02640859, -0.05183908,
    0.02151811, 0.04059098, -0.06113099, 0.00537953, -0.00391238,
    -0.00244524, 0.0009781 , -0.00635762, -0.02200716, -0.00146714,
    0.00244524, -0.04401431, 0.00489048, 0.01369334, -0.0581967 ,
    0.02396335, -0.01467144, -0.00880286, 0.00586857, -0.02836478,
    -0.01956192, 0.00244524, -0.02151811, -0.03716764, -0.00048905,
    -0.01173715, -0.04499241, -0.01907287]], dtype=float32)]

    Hand landmark prediction model (Hand Landmark Model)


    Outputs: 21 keypoints, hand presence confidence, left/right handedness classification, etc.
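    The first output array printed above is the 21 landmarks flattened into 63 values (x, y, z per keypoint); the next two arrays are the hand presence score and the handedness score. A minimal sketch of unpacking them (the helper name is hypothetical, not from the toolkit or MediaPipe):

    ```python
    import numpy as np

    # Hypothetical helper: unpack the RKNN outputs shown above into
    # 21 (x, y, z) keypoints plus the two scalar scores.
    def parse_hand_outputs(outputs):
        landmarks = np.asarray(outputs[0]).reshape(21, 3)  # x, y, z per keypoint
        hand_score = float(outputs[1][0][0])               # hand presence confidence
        handedness = float(outputs[2][0][0])               # left/right score
        return landmarks, hand_score, handedness

    # Dummy data shaped like the printed results above
    dummy = [np.arange(63, dtype=np.float32).reshape(1, 63),
             np.array([[0.9889264]], dtype=np.float32),
             np.array([[0.4299424]], dtype=np.float32)]
    lm, score, hand = parse_hand_outputs(dummy)
    print(lm.shape, score, hand)
    ```

    With the real outputs, each landmark's x and y are in input-image pixel coordinates (224x224 here), which matches the value ranges in the dump above.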


  • Original article: https://blog.csdn.net/TyearLin/article/details/127102623