• Graduation Project: Machine-Vision-Based Squat Detection and Recognition (TensorFlow, OpenCV)


    Contents

    Preface

    Background and Significance

    Technical Approach

    Sample Results


    Preface


        📅 Senior year is the busiest stretch of university: you are preparing for exams or internships ahead of employment or further study, while the graduation project also demands a great deal of effort. In recent years the projects schools require have grown harder, with many topics approaching graduate-level difficulty, which makes them a real challenge for undergraduates. To help everyone pass smoothly and save time and energy for the more important matters of job hunting and exams, this senior shares quality topic-selection experience, graduation projects, and technical approaches.

    🚀 Feel free to ask me anything about your graduation project!

    Hello everyone, this is Senior Hailang's graduation-project column. The topic shared this time is:

    🎯 Machine-vision-based squat detection and recognition

    Background and Significance

    The squat is a staple fitness exercise: it is the king of movements for building the thigh muscles, and practiced consistently it also helps with weight loss. Squats are considered essential for developing leg and hip strength and size, as well as core strength. Because a squat is a basic movement with clearly defined phases and large changes in posture, it lends itself well to detection and recognition with machine-vision techniques.

    Technical Approach

    Required libraries

    import csv
    import io
    import os
    import sys

    import cv2
    import numpy as np
    import tqdm
    from matplotlib import pyplot as plt
    from PIL import Image, ImageDraw, ImageFont
    from mediapipe.python.solutions import drawing_utils as mp_drawing
    from mediapipe.python.solutions import pose as mp_pose

    Technical steps

    1. Collect image samples of the target exercise and run pose prediction on them;
    2. Convert the predicted pose landmarks into data suitable for a k-NN classifier, and use them to form the training set;
    3. Perform the classification itself, followed by repetition counting.

    Training samples

    To build a good classifier, appropriate samples should be collected for the training set: ideally around a few hundred samples for each terminal state of each exercise, and it is important that they cover different camera angles, environmental conditions, body shapes, and movement variations; the closer you get to this, the better. In practice, if that is too much trouble, about 15-25 images per state will do, but make sure the shooting angles are varied, ideally one shot every 15 degrees.

    Obtaining normalized landmarks

    To convert the samples into a k-NN training set, we run the BlazePose model on each image and dump the predicted landmarks into a CSV file. In addition, the Pose Classification Colab (Extended) classifies each sample against the whole training set, providing useful tooling for finding outliers (e.g., mispredicted poses) and under-represented classes (e.g., samples not covering all camera angles).
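    As a rough sketch of what the dump could look like, the helpers below encode the 33 predicted landmarks into one CSV row and decode it back; the row layout (`image_name, class_name, x1, y1, z1, …`), the function names, and the rounding are assumptions for illustration, not taken from the Colab.

```python
import numpy as np

N_LANDMARKS = 33  # BlazePose predicts 33 full-body landmarks.

def landmarks_to_csv_row(image_name, class_name, landmarks):
    """Flattens an (N_LANDMARKS, 3) landmark array into one CSV row."""
    assert landmarks.shape == (N_LANDMARKS, 3), landmarks.shape
    return [image_name, class_name] + np.round(landmarks, 5).flatten().tolist()

def csv_row_to_landmarks(row):
    """Restores the (N_LANDMARKS, 3) landmark array from a CSV row."""
    return np.array(row[2:], dtype=np.float32).reshape(N_LANDMARKS, 3)

# Dummy values stand in for real BlazePose output here.
dummy = np.random.RandomState(0).rand(N_LANDMARKS, 3)
row = landmarks_to_csv_row('squat_001.jpg', 'squat_down', dummy)
restored = csv_row_to_landmarks(row)
```

    In the real pipeline, `landmarks` would come from running `mp_pose.Pose` on each training image, and `csv.writer` would append one such row per image to the dump file.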
     

    # Human pose embedding module.
    class FullBodyPoseEmbedder(object):
        """Converts 3D pose landmarks into 3D embedding."""

        def __init__(self, torso_size_multiplier=2.5):
            # Multiplier to apply to the torso to get minimal body size.
            self._torso_size_multiplier = torso_size_multiplier

            # Names of the landmarks as they appear in the prediction.
            self._landmark_names = [
                'nose',
                'left_eye_inner', 'left_eye', 'left_eye_outer',
                'right_eye_inner', 'right_eye', 'right_eye_outer',
                'left_ear', 'right_ear',
                'mouth_left', 'mouth_right',
                'left_shoulder', 'right_shoulder',
                'left_elbow', 'right_elbow',
                'left_wrist', 'right_wrist',
                'left_pinky_1', 'right_pinky_1',
                'left_index_1', 'right_index_1',
                'left_thumb_2', 'right_thumb_2',
                'left_hip', 'right_hip',
                'left_knee', 'right_knee',
                'left_ankle', 'right_ankle',
                'left_heel', 'right_heel',
                'left_foot_index', 'right_foot_index',
            ]

        def __call__(self, landmarks):
            """Normalizes pose landmarks and converts them to an embedding.

            Args:
                landmarks - NumPy array with 3D landmarks of shape (N, 3).

            Result:
                NumPy array with pose embedding of shape (M, 3) where `M` is the
                number of pairwise distances defined in
                `_get_pose_distance_embedding`.
            """
            assert landmarks.shape[0] == len(self._landmark_names), \
                'Unexpected number of landmarks: {}'.format(landmarks.shape[0])

            # Get pose landmarks.
            landmarks = np.copy(landmarks)

            # Normalize landmarks.
            landmarks = self._normalize_pose_landmarks(landmarks)

            # Get embedding.
            embedding = self._get_pose_distance_embedding(landmarks)

            return embedding

        def _normalize_pose_landmarks(self, landmarks):
            """Normalizes landmark translation and scale."""
            landmarks = np.copy(landmarks)

            # Normalize translation.
            pose_center = self._get_pose_center(landmarks)
            landmarks -= pose_center

            # Normalize scale.
            pose_size = self._get_pose_size(landmarks, self._torso_size_multiplier)
            landmarks /= pose_size

            # Multiplication by 100 is not required, but makes it easier to debug.
            landmarks *= 100

            return landmarks

        def _get_pose_center(self, landmarks):
            """Calculates the pose center as the point between the hips."""
            left_hip = landmarks[self._landmark_names.index('left_hip')]
            right_hip = landmarks[self._landmark_names.index('right_hip')]
            return (left_hip + right_hip) * 0.5

        def _get_pose_size(self, landmarks, torso_size_multiplier):
            """Calculates pose size.

            It is the maximum of two values:
                * Torso size multiplied by `torso_size_multiplier`
                * Maximum distance from pose center to any pose landmark
            """
            # This approach uses only 2D landmarks to compute pose size.
            landmarks = landmarks[:, :2]

            # Hips center.
            left_hip = landmarks[self._landmark_names.index('left_hip')]
            right_hip = landmarks[self._landmark_names.index('right_hip')]
            hips = (left_hip + right_hip) * 0.5

            # Shoulders center.
            left_shoulder = landmarks[self._landmark_names.index('left_shoulder')]
            right_shoulder = landmarks[self._landmark_names.index('right_shoulder')]
            shoulders = (left_shoulder + right_shoulder) * 0.5

            # Torso size as the minimum body size.
            torso_size = np.linalg.norm(shoulders - hips)

            # Max dist to pose center.
            pose_center = self._get_pose_center(landmarks)
            max_dist = np.max(np.linalg.norm(landmarks - pose_center, axis=1))

            return max(torso_size * torso_size_multiplier, max_dist)

        def _get_pose_distance_embedding(self, landmarks):
            """Converts pose landmarks into a 3D embedding.

            We use several pairwise 3D distances to form the pose embedding. All
            distances include X and Y components with sign. We use different
            types of pairs to cover different pose classes. Feel free to remove
            some or add new ones.

            Args:
                landmarks - NumPy array with 3D landmarks of shape (N, 3).

            Result:
                NumPy array with pose embedding of shape (M, 3) where `M` is the
                number of pairwise distances.
            """
            embedding = np.array([
                # One joint.
                self._get_distance(
                    self._get_average_by_names(landmarks, 'left_hip', 'right_hip'),
                    self._get_average_by_names(landmarks, 'left_shoulder', 'right_shoulder')),

                self._get_distance_by_names(landmarks, 'left_shoulder', 'left_elbow'),
                self._get_distance_by_names(landmarks, 'right_shoulder', 'right_elbow'),

                self._get_distance_by_names(landmarks, 'left_elbow', 'left_wrist'),
                self._get_distance_by_names(landmarks, 'right_elbow', 'right_wrist'),

                self._get_distance_by_names(landmarks, 'left_hip', 'left_knee'),
                self._get_distance_by_names(landmarks, 'right_hip', 'right_knee'),

                self._get_distance_by_names(landmarks, 'left_knee', 'left_ankle'),
                self._get_distance_by_names(landmarks, 'right_knee', 'right_ankle'),

                # Two joints.
                self._get_distance_by_names(landmarks, 'left_shoulder', 'left_wrist'),
                self._get_distance_by_names(landmarks, 'right_shoulder', 'right_wrist'),

                self._get_distance_by_names(landmarks, 'left_hip', 'left_ankle'),
                self._get_distance_by_names(landmarks, 'right_hip', 'right_ankle'),

                # Four joints.
                self._get_distance_by_names(landmarks, 'left_hip', 'left_wrist'),
                self._get_distance_by_names(landmarks, 'right_hip', 'right_wrist'),

                # Five joints.
                self._get_distance_by_names(landmarks, 'left_shoulder', 'left_ankle'),
                self._get_distance_by_names(landmarks, 'right_shoulder', 'right_ankle'),

                self._get_distance_by_names(landmarks, 'left_hip', 'left_wrist'),
                self._get_distance_by_names(landmarks, 'right_hip', 'right_wrist'),

                # Cross body.
                self._get_distance_by_names(landmarks, 'left_elbow', 'right_elbow'),
                self._get_distance_by_names(landmarks, 'left_knee', 'right_knee'),

                self._get_distance_by_names(landmarks, 'left_wrist', 'right_wrist'),
                self._get_distance_by_names(landmarks, 'left_ankle', 'right_ankle'),

                # Body bent direction.
                # self._get_distance(
                #     self._get_average_by_names(landmarks, 'left_wrist', 'left_ankle'),
                #     landmarks[self._landmark_names.index('left_hip')]),
                # self._get_distance(
                #     self._get_average_by_names(landmarks, 'right_wrist', 'right_ankle'),
                #     landmarks[self._landmark_names.index('right_hip')]),
            ])

            return embedding

        def _get_average_by_names(self, landmarks, name_from, name_to):
            lmk_from = landmarks[self._landmark_names.index(name_from)]
            lmk_to = landmarks[self._landmark_names.index(name_to)]
            return (lmk_from + lmk_to) * 0.5

        def _get_distance_by_names(self, landmarks, name_from, name_to):
            lmk_from = landmarks[self._landmark_names.index(name_from)]
            lmk_to = landmarks[self._landmark_names.index(name_to)]
            return self._get_distance(lmk_from, lmk_to)

        def _get_distance(self, lmk_from, lmk_to):
            return lmk_to - lmk_from

    Classification with the k-NN algorithm

    The k-NN algorithm used for pose classification needs a feature-vector representation of each sample, together with a metric for computing the distance between two such vectors, in order to find the training samples closest to the target pose.

    To convert pose landmarks into a feature vector, we use pairwise distances between a predefined list of pose joints, such as wrist-to-shoulder, ankle-to-hip, and wrist-to-wrist. Because the algorithm relies on distances, all poses are normalized before conversion so that they share the same torso size and a vertical torso orientation.

     The distance pairs to compute can be chosen to suit the characteristics of the exercise (for example, pull-ups may care more about upper-body distance pairs).
    To get better classification results, the k-NN search is invoked twice with different distance metrics:
    First, to filter out samples that are almost the same as the target but have only a few different values in the feature vector (which would mean differently bent joints, and hence other pose classes), a per-coordinate distance is used as the metric;
    Then the mean per-coordinate distance is used to find the nearest pose cluster among the samples kept by the first search.
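    A minimal sketch of this two-pass search follows. The function name and parameter values are illustrative, and the choice of the maximum per-coordinate difference for the filtering pass is an assumption based on MediaPipe's sample classifier (it drops samples that differ strongly in even a single joint pair):

```python
import numpy as np

def knn_classify(target, samples, labels, top_n_by_max=30, top_n_by_mean=10):
    """Two-pass k-NN over pose embeddings.

    target  - (M, 3) embedding of the current pose.
    samples - list of (M, 3) embeddings of training poses.
    labels  - class name of each training sample.
    """
    # Pass 1: keep samples whose worst-case (maximum) per-coordinate
    # difference is small, filtering out poses with even one strongly
    # differing joint pair (i.e. a differently bent joint).
    max_dists = [np.max(np.abs(sample - target)) for sample in samples]
    kept = np.argsort(max_dists)[:top_n_by_max]

    # Pass 2: rank the survivors by mean per-coordinate difference.
    mean_dists = sorted((np.mean(np.abs(samples[i] - target)), i) for i in kept)
    nearest = [labels[i] for _, i in mean_dists[:top_n_by_mean]]

    # Class scores = number of nearest neighbours falling in each class.
    return {name: nearest.count(name) for name in set(nearest)}
```

    The returned dictionary of per-class neighbour counts is exactly the kind of data the EMA smoother below consumes.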

    Finally, we apply exponential moving average (EMA) smoothing to damp any noise coming from pose prediction or classification. To do this, we not only search for the nearest pose cluster, but also compute a probability for each pose class and use it for smoothing over time.

    # Pose classification smoothing module.
    class EMADictSmoothing(object):
        """Smoothes pose classification."""

        def __init__(self, window_size=10, alpha=0.2):
            self._window_size = window_size
            self._alpha = alpha
            self._data_in_window = []

        def __call__(self, data):
            """Smoothes the given pose classification.

            Smoothing is done by computing an Exponential Moving Average for
            every pose class observed in the given time window. Missed pose
            classes are replaced with 0.

            Args:
                data: Dictionary with pose classification. Sample:
                    {
                        'pushups_down': 8,
                        'pushups_up': 2,
                    }

            Result:
                Dictionary in the same format, but with smoothed float values
                instead of integer values. Sample:
                    {
                        'pushups_down': 8.3,
                        'pushups_up': 1.7,
                    }
            """
            # Add new data to the beginning of the window for simpler code.
            self._data_in_window.insert(0, data)
            self._data_in_window = self._data_in_window[:self._window_size]

            # Get all keys.
            keys = set([key for data in self._data_in_window for key, _ in data.items()])

            # Get smoothed values.
            smoothed_data = dict()
            for key in keys:
                factor = 1.0
                top_sum = 0.0
                bottom_sum = 0.0
                for data in self._data_in_window:
                    value = data[key] if key in data else 0.0
                    top_sum += factor * value
                    bottom_sum += factor
                    # Update factor.
                    factor *= (1.0 - self._alpha)
                smoothed_data[key] = top_sum / bottom_sum
            return smoothed_data

    Repetition counting

    To count repetitions, the algorithm monitors the probability of the target pose class. A squat has "up" and "down" terminal states:

    When the probability of the "down" pose class first passes a certain threshold, the algorithm marks entry into the "down" state.
    Once that probability drops back below a threshold (i.e., the person has risen past a certain height), the algorithm marks the exit from the "down" state and increments the counter.
    To avoid phantom counts when the probability fluctuates around the threshold (for example, when the user pauses between the "up" and "down" states), the threshold used to detect exit from a state is actually slightly lower than the one used to detect entry into it.
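    The entry/exit logic above can be sketched as a small counter class. The class name and default thresholds are illustrative; the thresholds are assumed to apply to the smoothed class score (e.g. a value out of 10 nearest neighbours):

```python
class RepetitionCounter(object):
    """Counts repetitions of a given target pose class."""

    def __init__(self, class_name, enter_threshold=6.0, exit_threshold=4.0):
        self._class_name = class_name
        # Entry threshold is higher than the exit threshold: this hysteresis
        # gap prevents phantom counts while the score hovers near one value.
        self._enter_threshold = enter_threshold
        self._exit_threshold = exit_threshold
        self._pose_entered = False
        self._n_repeats = 0

    def __call__(self, pose_classification):
        """Takes one frame's smoothed classification, returns the count."""
        confidence = pose_classification.get(self._class_name, 0.0)

        if not self._pose_entered:
            # Waiting to enter the target pose (the bottom of the squat).
            self._pose_entered = confidence > self._enter_threshold
            return self._n_repeats

        # We were in the pose; count a repetition once we clearly leave it.
        if confidence < self._exit_threshold:
            self._n_repeats += 1
            self._pose_entered = False
        return self._n_repeats
```

    Feeding it the smoothed classification of every frame yields the running squat count.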

    Sample results

     

    I am Senior Hailang. Creating content is not easy; likes, follows, bookmarks, and comments are all welcome.

    For graduation-project help and tricky questions, feel free to reach out!

  • Original article: https://blog.csdn.net/qq_37340229/article/details/128180305