“…Biosignal-based methods can be used to capture high-quality biological data, enabling the inference of hand movements through subtle changes in the human body. Although surface electromyography (EMG) stands out as a widely researched modality for this purpose, ultrasound emerges as a promising alternative, providing comprehensive visualization of forearm musculature to infer hand configurations [1][2]. With the advances in machine learning, ultrasound-based human-machine interfaces have been used to control robots and AR/VR interfaces [3][4].…”
Section: Introduction
“…The latter is achieved because the center of mass of the probe is closer to the body compared to the traditional perpendicular configuration. We train a convolutional neural network (CNN) based on [1][2] and a vision transformer (ViT) based on [8] to classify 5 hand gestures from ultrasound images obtained using both the traditional perpendicular configuration and our proposed reflector-based configuration. Section II describes the methods and the experimental design, with the results discussed in Section III.…”
This research presents an innovative mirror-based ultrasound system designed for hand gesture classification using Convolutional Neural Network (CNN) and Vision Transformer (ViT) architectures. Hand gesture recognition using ultrasound has garnered significant interest due to its potential applications in various fields such as prosthetic control and human-machine interfacing. Traditionally, ultrasound probes are placed perpendicular to the forearm, causing discomfort and interference with natural arm movements because the center of mass of the wearable ultrasound system is distanced from the body. To address this challenge, a novel approach utilizing the advantages of acoustic reflection is proposed. A convex ultrasound probe is strategically aligned with the forearm, and ultrasound waves are transmitted to, and received back from, the forearm via a mirror placed at 45 degrees to the imaging region and the forearm. By aligning the probe parallel to the arm, the center of mass is brought closer to the body, ensuring enhanced stability and reduced strain on the user's arm during data collection. A dataset comprising 5 hand gestures was collected to train and evaluate the performance of a Support Vector Machine with a linear kernel, a CNN, and a ViT-based approach. It was observed that the performance of the mirror-based ultrasound system is comparable to the traditional perpendicular approach for hand gesture classification. The experimental results demonstrate the potential of the system in assisting with data acquisition and device development for hand gesture recognition using ultrasound in the fields of human-machine interfacing, prosthetic control, human-computer interaction, and beyond.
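The abstract above compares a linear-kernel SVM, a CNN, and a ViT for 5-class gesture classification from ultrasound frames. As a minimal sketch of the simplest of those baselines, the following trains a linear-kernel SVM on flattened image vectors. The data here is synthetic stand-in data (class-dependent mean intensities), not the paper's dataset; the image size, class count, and split are illustrative assumptions only.

```python
# Sketch: linear-kernel SVM baseline for 5-class gesture classification.
# Real inputs would be B-mode ultrasound frames flattened to vectors;
# here each "class" is simulated with a distinct mean pixel intensity.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_per_class, img_pixels, n_gestures = 40, 64 * 64, 5  # assumed sizes

# Synthetic frames: gesture c has mean intensity c, unit-variance noise.
X = np.vstack([
    rng.normal(loc=c, scale=1.0, size=(n_per_class, img_pixels))
    for c in range(n_gestures)
])
y = np.repeat(np.arange(n_gestures), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

With well-separated synthetic classes the SVM reaches near-perfect accuracy; on real ultrasound frames, per-frame normalization and a held-out-session split would matter far more than the classifier choice.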
“…Huang et al. [13][14][15] compared the effectiveness of sEMG and B-mode ultrasound for gesture recognition and discovered that B-mode ultrasound achieved better performance and long-term effectiveness. Furthermore, Castellini et al. [16][17][18] utilized B-mode ultrasound and proposed a gray-gradient feature to predict finger movements and various flexion angles. McIntosh et al. [19] investigated the impact of data acquisition location on classification accuracy and found that the wrist region was most effective for hand motion recognition.…”
Accurate hand motion intention recognition is essential for the intuitive control of intelligent prosthetic hands and other human-machine interaction systems. Sonomyography, which precisely detects changes in muscle morphology and structure, is a promising signal source for fine hand movement recognition. However, sonomyography measured by traditional rigid ultrasound probes may suffer from poor acoustic coupling because the rigid probe surfaces cannot accommodate the curvilinear shape of the human body, particularly in the case of small and irregular residual limbs in amputees. In this study, we used a self-designed lightweight, flexible, and wearable ultrasound transducer to acquire muscle ultrasound images, and proposed a sonomyography transformer (SMGT) model for simultaneous recognition of hand movements and force levels. The performance of SMGT was systematically compared to two commonly used image-processing methods, HOG and gray gradient, as well as a deep CNN model, in simultaneously recognizing ten classes of hand/finger movements and three force levels. Additionally, ten subjects, including seven able-bodied subjects and three trans-radial amputees who are the end users of prosthetic hands, were recruited to evaluate the effectiveness of SMGT. Results showed that our proposed method achieved average classification accuracies of 98.4% ± 0.6% and 96.2% ± 3.0% in able-bodied subjects and amputee subjects, respectively, which are much higher than those of the other methods. This study provides a valuable approach for ultrasound-based hand motion recognition that may promote the application of intelligent prosthetic hands.
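The abstract above benchmarks SMGT against hand-crafted features such as HOG and gray gradient. As a rough illustration of what a gray-gradient-style descriptor looks like, the sketch below averages gradient magnitudes over a grid of image cells; the cell grid, image size, and function name are illustrative assumptions and the published feature may be defined differently.

```python
# Sketch of a simplified gray-gradient descriptor for an ultrasound frame:
# mean gradient magnitude in each cell of a coarse grid over the image.
import numpy as np

def gray_gradient_feature(img, grid=(4, 4)):
    """Return one mean-gradient-magnitude value per grid cell.

    A simplified stand-in for gray-gradient features used in
    sonomyography; not the exact published formulation.
    """
    gy, gx = np.gradient(img.astype(float))  # first-order image gradients
    mag = np.hypot(gx, gy)                   # per-pixel gradient magnitude
    h, w = img.shape
    rows, cols = grid
    feats = []
    for r in range(rows):
        for c in range(cols):
            cell = mag[r * h // rows:(r + 1) * h // rows,
                       c * w // cols:(c + 1) * w // cols]
            feats.append(cell.mean())
    return np.array(feats)

frame = np.random.default_rng(1).random((64, 64))  # stand-in frame
feat = gray_gradient_feature(frame)                # 16-dimensional vector
print(feat.shape)
```

A descriptor like this would typically be fed to a conventional classifier, which is the kind of pipeline the transformer-based SMGT is reported to outperform.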