A vision-based myoelectric prosthetic hand uses a camera integrated into its body for object detection and environment understanding, and the results provide the information necessary for grasp planning. Semi-automatic prosthesis control is expected to be realizable with this method. However, such a control method usually suffers from heavy computation because real-time image processing is required to keep up with the user's arm movements. This paper presents a distributed control system that assigns heavy processing tasks to one or more processing nodes over the network, which greatly reduces the computational burden on the processor embedded in the prosthetic hand. In this control scheme, the embedded system in the prosthetic hand is used only to gather the data necessary for grasp planning, while the processing nodes in the network are responsible for processing and managing the collected data. A test platform was built to verify the proposed control scheme. The test platform streams user electromyography (EMG) signals and images simultaneously to a GPU server, which analyzes the received data and generates the corresponding motor commands in real time. A case study in which a 3-DoF gripper continuously grasps several objects was performed on this test platform.

INDEX TERMS Distributed control system, real-time object detection, vision-based myoelectric hand.
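The distributed scheme above hinges on shipping EMG samples and camera frames from the embedded system to a processing node over the network. The abstract does not specify a wire format, so the sketch below assumes a simple illustrative one: a fixed binary header followed by a batch of EMG samples and one compressed frame. The `MAGIC` tag and field layout are hypothetical, not taken from the paper.

```python
import struct

# Hypothetical message layout (an assumption for illustration):
#   >IHI header = magic tag, number of EMG samples, frame byte length,
# followed by big-endian float32 EMG samples and the raw frame bytes.
MAGIC = 0x454D47  # arbitrary "EMG" tag, not from the paper

def pack_message(emg_samples, frame_bytes):
    """Serialize one batch of EMG samples plus one camera frame
    into a single message for the processing node."""
    header = struct.pack(">IHI", MAGIC, len(emg_samples), len(frame_bytes))
    emg = struct.pack(f">{len(emg_samples)}f", *emg_samples)
    return header + emg + frame_bytes

def unpack_message(data):
    """Node-side parse: recover the EMG samples and the frame payload."""
    magic, n_emg, n_frame = struct.unpack_from(">IHI", data, 0)
    if magic != MAGIC:
        raise ValueError("unknown message type")
    offset = struct.calcsize(">IHI")
    emg = list(struct.unpack_from(f">{n_emg}f", data, offset))
    frame = data[offset + 4 * n_emg:]
    if len(frame) != n_frame:
        raise ValueError("truncated frame payload")
    return emg, frame
```

In a real deployment the node would decode the frame, run object detection, and send motor commands back; here only the framing round trip is shown.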
This paper proposes a novel prosthetic hand control method that incorporates spatial information on target objects, obtained with an RGB-D sensor, into a myoelectric control procedure. The RGB-D sensor provides not only two-dimensional (2D) color information but also depth information as spatial cues on target objects, and this information is used to classify objects by their shape features. The shape features are then used to determine an appropriate grasp strategy/motion for control of a prosthetic hand. This paper uses a two-channel image format for classification, containing grayscale and depth information of objects, and the image data is classified with a deep convolutional neural network (DCNN). Compared with previous studies based only on 2D color images, the spatial information is expected to improve classification accuracy, and consequently better grasping decisions and prosthetic control can be achieved. In this study, a dataset of image pairs, consisting of grayscale images and their corresponding depth images, was created to validate the proposed method. This dataset includes images of simple three-dimensional (3D) solid objects from six categories, namely, triangular prism, triangular pyramid, quadrangular prism, rectangular pyramid, cone, and cylinder. Image classification experiments were conducted with this dataset. The experimental results indicate that spatial information has high potential for classifying the shape features of objects.
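The two-channel image format described above pairs each grayscale image with its depth map before classification. The paper does not state the exact preprocessing, so the following is a minimal sketch under assumed choices: the two channels are stacked into an (H, W, 2) array and each channel is min-max scaled to [0, 1] before being fed to a DCNN. The function name and normalization are illustrative.

```python
import numpy as np

def make_two_channel_input(gray, depth):
    """Stack a grayscale image and its corresponding depth map into a
    two-channel (H, W, 2) float32 array, each channel min-max scaled
    to [0, 1] -- an assumed preprocessing step, not the paper's exact recipe."""
    def norm(channel):
        c = channel.astype(np.float32)
        rng = c.max() - c.min()
        # a constant channel (e.g. flat depth) maps to all zeros
        return (c - c.min()) / rng if rng > 0 else np.zeros_like(c)
    return np.stack([norm(gray), norm(depth)], axis=-1)
```

A DCNN with a two-channel input layer (instead of the usual three RGB channels) can then consume this array directly.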