2020
DOI: 10.1109/access.2020.3036115
Vision-Based Assistance for Myoelectric Hand Control

Abstract: Conventional control systems for prosthetic hands use myoelectric signals as an interface, but it is impossible to realize complex and flexible human hand movements with myoelectric signals alone. A promising control scheme for prosthetic hands uses computer vision to assist in grasping objects. It features an imaging sensor, and the control system is capable of recognizing an object placed in the environment. Then, a gripping pattern can be selected from predefined candidates according to the recognized obje…
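The control scheme the abstract describes can be sketched as a simple loop: vision recognizes the object, the recognized class selects one of several predefined grip patterns, and a myoelectric (sEMG) activation triggers the grasp. The mapping, function names, and default grip below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the vision-assisted control loop: vision proposes a
# grip pattern from the recognized object class; sEMG activation executes it.
# The class-to-grip mapping and the "power" fallback are assumed values.

GRIP_PATTERNS = {
    "bottle": "cylindrical",
    "card": "lateral",
    "ball": "spherical",
}

def select_grip(object_class: str) -> str:
    """Pick a predefined grip pattern for the recognized object class."""
    return GRIP_PATTERNS.get(object_class, "power")

def control_step(recognized_class: str, semg_active: bool) -> str:
    """One control cycle: vision selects the grip, sEMG triggers the motion."""
    grip = select_grip(recognized_class)
    return grip if semg_active else "idle"
```

The key division of labor is that vision resolves *which* grasp to perform, so the user's myoelectric signal only needs to convey *when* to perform it.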

Cited by 20 publications (13 citation statements)
References 27 publications
“…Finally, note that we hypothesize to have only one object in the scene. The extension to multiple objects would include a preliminary step of target object identification such as the ones proposed in [33], [20] which is out of the scope of this paper but can be considered as future work.…”
Section: Methods (mentioning)
confidence: 99%
“…Conversely, the eye-in-hand configuration can be completely transparent to the user and allows gathering closer views of the objects to grasp. This makes it easier to identify the target with visual and motion cues [20]. For instance, in [21] the geometrical information (centroid and major axes) of the target object is inferred by using an RGB-D camera placed on the prosthesis and it is used to control the wrist orientation through visual servoing.…”
Section: Related Work (mentioning)
confidence: 99%
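The visual-servoing idea cited above (wrist orientation driven by the target's major axis, as in [21]) can be illustrated with a minimal proportional control law. The gain, angle convention, and function name are assumptions for illustration, not details from the cited work.

```python
# Minimal sketch of axis-alignment visual servoing: rotate the wrist toward
# the object's major-axis angle (estimated from an RGB-D view) with a simple
# proportional law. The gain value 0.5 is an illustrative assumption.

def wrist_command(object_axis_deg: float, wrist_deg: float, gain: float = 0.5) -> float:
    """Proportional rotation step steering the wrist toward the object axis."""
    error = object_axis_deg - wrist_deg
    # Wrap the error into [-180, 180) so the wrist takes the shorter rotation.
    error = (error + 180.0) % 360.0 - 180.0
    return gain * error
```

Repeating this step each frame drives the orientation error toward zero, which is the essence of image-based visual servoing for wrist alignment.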
“…The experiments showed good results in terms of object segmentation and classification accuracy. Similarly, in [24] an object detection network classifies different objects driving the selection of grasping patterns based on the object kind, without considering the actual grasping surface. The uncertainty introduced by partial occlusion and movements was explicitly modeled in [25].…”
Section: Related Work (mentioning)
confidence: 99%
“…Our research group has also been developing a vision-based prosthetic hand based on deep learning technology [35] [36] [37] [38]. In [39], we designed a prosthetic hand control method that can determine the grasping target and motion according to the spatial and temporal relationship between the prosthetic hand and the objects, such as distance, position, and gazing time. The developed hand captures the environment with an onboard vision sensor and determines the object to be grasped; then, the motor is triggered by sEMG activation measured from the operator's skin surface.…”
Section: Related Work (mentioning)
confidence: 99%
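The spatio-temporal target-selection rule described in the statement above (distance, position, and gazing time decide the target; sEMG triggers the motor) can be sketched as follows. All thresholds, field names, and the sEMG activation scale are illustrative assumptions, not values from [39].

```python
# Hypothetical sketch of spatio-temporal target selection: an object becomes
# the grasp target when it is close to the hand-mounted camera, near the image
# center, and has been fixated long enough. Thresholds are assumed values.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    name: str
    distance_m: float      # estimated distance from the on-board camera
    center_offset: float   # normalized offset from image center (0 = centered)
    gaze_time_s: float     # how long this object has been fixated

def pick_target(objects, max_dist=0.3, max_offset=0.2, min_gaze=1.0):
    """Return the first object meeting the spatial and temporal criteria."""
    for obj in objects:
        if (obj.distance_m <= max_dist
                and obj.center_offset <= max_offset
                and obj.gaze_time_s >= min_gaze):
            return obj.name
    return None

def trigger_motor(target, semg_activation, threshold=0.5):
    """Motor runs only when a target exists and sEMG exceeds the threshold."""
    return target is not None and semg_activation >= threshold
```

As in the abstract's scheme, vision narrows the decision to a single target and the sEMG signal serves purely as the execution trigger.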