Thanks to the rapid development of Wearable Fitness Trackers (WFTs) and Smartphone Pedometer Apps (SPAs), people increasingly monitor their health through fitness and heart-rate tracking; as a result, home weight training exercise has received much attention lately. This paper proposes a multi-procedure intelligent algorithm for weight training using two inertial measurement units (IMUs). The first procedure performs motion tracking: it estimates the arm orientation and calculates the positions of the wrist and elbow. The second procedure performs posture recognition based on deep learning, identifying the type of exercise posture. The final procedure computes exercise prescription variables: it infers the user's exercise state from the results of the previous two procedures, triggers the corresponding event, and calculates the key indicators of the weight training exercise (exercise prescription variables), including exercise item, repetitions, sets, training capacity, workout capacity, training period, and explosive power. This study integrates the hardware and software into a complete system. The developed smartphone App receives heart rate data, analyzes the user's exercise state, and calculates the exercise prescription variables automatically in real time. The dashboard in the App's user interface displays exercise information through Unity's Animation System (avatar) and graphics, and records are stored in an SQLite database. The designed system was validated by two types of experiments. The first controls a stepper motor to rotate the designed IMU and compares the rotation angle estimated by the IMU with the rotation angle of the controlled stepper motor; the average mean absolute error over 31 repeated experiments is 1.485 degrees.
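The event-driven bookkeeping described above (inferring an exercise state, then updating repetition and set counts) can be illustrated with a minimal sketch. The state names and transition rule here are assumptions for illustration, not the paper's exact state machine:

```python
# Hypothetical sketch: counting repetitions from a stream of recognized
# exercise states. One "lift" -> "lower" transition counts as one rep.
def count_reps(states):
    reps, prev = 0, None
    for s in states:
        if prev == "lift" and s == "lower":
            reps += 1
        prev = s
    return reps

stream = ["rest", "lift", "lower", "lift", "lower", "rest"]
print(count_reps(stream))  # → 2
```

In the full system, each such event would also update set counts and accumulate training capacity (repetitions × load), but those details depend on the recognized posture and configured weight.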
The second uses MediaPipe Pose to calculate the wrist position and the angles of the upper arm and forearm relative to the Z-axis, and these data are compared with the output of the designed system. The root-mean-square (RMS) error of the wrist position is 2.43 cm, and the RMS errors of the two angles are 5.654 and 4.385 degrees for the upper arm and forearm, respectively. For posture recognition, 12 participants were divided into a training group and a test group. Of 24,963 samples from 10 participants, 80% were used for training and 20% for validation of the LSTM model. A further 3359 samples from the remaining two participants were used to evaluate the trained LSTM model: the accuracy reached 99%, and the F1 score was 0.99. Compared with other LSTM-based variants, the accuracy of the one-layer LSTM presented in this paper is still promising. The exercise prescription variables provided by the presented system help weight trainers/trainees closely monitor their fitness progress and improve their health.
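A one-layer LSTM classifier of the kind evaluated above can be sketched as follows. The window length, channel count (two IMUs providing, e.g., 12 accelerometer/gyroscope channels), hidden size, and number of posture classes are assumptions for illustration, not the paper's reported configuration:

```python
# Hypothetical sketch of a one-layer LSTM posture classifier (PyTorch).
import torch
import torch.nn as nn

class PostureLSTM(nn.Module):
    def __init__(self, n_features=12, hidden=64, n_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=1, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, features)
        _, (h, _) = self.lstm(x)      # h: (num_layers, batch, hidden)
        return self.fc(h[-1])         # class logits: (batch, n_classes)

model = PostureLSTM()
window = torch.randn(4, 50, 12)       # 4 windows of 50 IMU samples each
logits = model(window)
```

Training such a model with cross-entropy loss on windowed IMU samples, then evaluating on held-out participants, matches the train/validation/test split described above.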
3D shape retrieval has long been a hot research topic in computer vision, and the research goal is fast and efficient retrieval of 3D shapes that meet user needs. With the rapid development and popularization of touch-screen devices, hand-drawn sketches have undoubtedly become one of the most convenient and user-friendly input forms. However, the huge difference between a 3D shape and a 2D sketch is the main challenge affecting retrieval performance. In this paper, we propose adding a sketch-and-view feature-similarity comparison module to the training process to obtain scores for the final feature descriptors, under the premise of multi-view feature extraction of the 3D shape. Specifically, we render the 3D shape into 2D views from multiple perspectives to represent the shape, perform feature extraction on the two types of input through two different networks, and design a similarity weighting module that calculates a score for each view, so as to obtain the final descriptors. Finally, a descriptor similarity metric network is trained with a contrastive loss. Experimental results on the SHREC'13 dataset demonstrate the superiority and robustness of our method.
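The contrastive loss used to train the final metric network can be sketched as below. The descriptor dimension, margin, and batch construction are illustrative assumptions; the abstract does not specify the exact formulation:

```python
# Hypothetical sketch of a margin-based contrastive loss over sketch/shape
# descriptor pairs (PyTorch). Matching pairs (y=1) are pulled together;
# non-matching pairs (y=0) are pushed apart up to the margin.
import torch

def contrastive_loss(d, y, margin=1.0):
    # d: Euclidean distance between sketch and shape descriptors, shape (B,)
    # y: 1.0 for matching pairs, 0.0 for non-matching pairs, shape (B,)
    pos = y * d.pow(2)
    neg = (1 - y) * torch.clamp(margin - d, min=0).pow(2)
    return 0.5 * (pos + neg).mean()

desc_sketch = torch.randn(8, 128)     # sketch-branch descriptors
desc_shape = torch.randn(8, 128)      # view-aggregated shape descriptors
d = torch.norm(desc_sketch - desc_shape, dim=1)
y = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(d, y)
```

In this scheme, the view-aggregated shape descriptor would be the score-weighted combination of per-view features produced by the similarity weighting module.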