Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments 2019
DOI: 10.1145/3316782.3316794

From hand-perspective visual information to grasp type probabilities

Abstract: Limb deficiency severely affects the daily lives of amputees and drives efforts to provide functional robotic prosthetic hands to compensate for this deprivation. Convolutional neural network-based computer vision control of the prosthetic hand has received increased attention as a method to replace or complement physiological signals, owing to its reliability when trained on visual information to predict the hand gesture. Mounting a camera into the palm of a prosthetic hand has proved to be a promising approach to collect visu…

Cited by 9 publications (8 citation statements) · References 28 publications (39 reference statements)
“…When the camera is embedded in the palm of a robotic (or prosthetic) hand, the grasp types identified for the same object may differ depending on the orientation from which the hand approaches it [30,31]. Therefore, in [30], instead of making absolute positive or negative predictions, Han et al. expressed the prediction as a priority over grasp types and used ranked lists of grasps as a new label form. Based on this, they constructed a probabilistic classifier using the Plackett-Luce model.…”
Section: Related Work (mentioning)
confidence: 99%
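The Plackett-Luce construction mentioned in this statement can be written down compactly. The following is a minimal sketch, not code from the cited paper: a Plackett-Luce negative log-likelihood over a ranked list of grasp types, given per-class scores from a network. The number of grasp types, the function name, and the example ranking are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): negative log-likelihood of a ranked
# list of grasp types under a Plackett-Luce model, given per-class scores
# predicted by a network. Names and shapes are illustrative assumptions.
import torch

def plackett_luce_nll(scores: torch.Tensor, ranking: torch.Tensor) -> torch.Tensor:
    """scores:  (K,) unnormalized utilities, one per grasp type.
    ranking: (K,) grasp-type indices ordered from most to least preferred.
    Returns the negative log-likelihood of observing this ranking."""
    # Reorder scores so position i holds the score of the i-th ranked grasp.
    s = scores[ranking]
    nll = 0.0
    for i in range(len(s)):
        # Probability that the i-th ranked grasp is chosen among those remaining.
        nll = nll - (s[i] - torch.logsumexp(s[i:], dim=0))
    return nll

# Example: 5 grasp types, ground-truth preference order 2 > 0 > 4 > 1 > 3.
scores = torch.randn(5, requires_grad=True)
loss = plackett_luce_nll(scores, torch.tensor([2, 0, 4, 1, 3]))
loss.backward()
```

Each factor is the softmax probability of the next-ranked grasp among those not yet chosen, which is the sequential-choice interpretation of the Plackett-Luce model.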
“…The data used in this work is based on [12]. The dataset consists of 4130 images, which were augmented from 413 hand-perspective images of 102 ordinary objects including office and daily supplies, utensils, and complex-shaped objects.…”
Section: Dataset (mentioning)
confidence: 99%
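The citing paper states only the counts (413 original hand-perspective images expanded to 4130). Below is a hedged sketch of one plausible 10x augmentation pipeline; the specific transforms, directory names, and file layout are assumptions, not the procedure actually used in [12].

```python
# Illustrative sketch only: one plausible way to expand each original image
# into ten augmented variants (413 -> 4130). The actual augmentation
# operations used in [12] are not specified here.
from pathlib import Path
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(15),                        # small in-plane rotations
    transforms.ColorJitter(brightness=0.2, contrast=0.2), # lighting variation
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),  # scale/crop jitter
])

src_dir, dst_dir = Path("hand_view_originals"), Path("hand_view_augmented")
dst_dir.mkdir(exist_ok=True)
for img_path in src_dir.glob("*.jpg"):
    img = Image.open(img_path).convert("RGB")
    for k in range(10):                                   # 10 variants per original
        augment(img).save(dst_dir / f"{img_path.stem}_{k}.jpg")
```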
“…For instance, they need calibration quite often; unexpected electrode shifting can distort the EMG signals; muscle fatigue and/or limb position adversely affect the EMG patterns; and some amputees may lack the critical muscles on which EMG classification relies. These shortcomings have led researchers to use additional sources of information to understand human intent [12]. With the rise of Convolutional Neural Networks [19,23,25,5,27,28,14], studies on using visual information as an input source for the prosthetic hand have been conducted [7,4,9,8,26,12], which focus on classifying images into a grasp type.…”
Section: Introduction (mentioning)
confidence: 99%
“…An alternative use of paired eye-view and hand-view images is to employ the CNN to provide a probability distribution over all possible grasp types, based on the hand-view image [11]. Such a CNN could be further extended to process video during the approach to the object, in order to generate a sequence of grasp type probability distributions over time.…”
Section: The Usage of the Visual Information (mentioning)
confidence: 99%
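As a rough illustration of the idea in this last statement, the sketch below applies a generic image classifier per frame to yield a sequence of grasp-type probability distributions for an approach video. The ResNet-18 backbone, the number of grasp types K, and the input resolution are assumptions; the cited work's actual architecture may differ.

```python
# Hedged sketch: a generic CNN head producing a probability distribution over
# K grasp types for each hand-view frame, giving a sequence of distributions
# while the hand approaches the object. Backbone, K, and resolution are
# assumptions, not the setup used in [11].
import torch
import torch.nn.functional as F
from torchvision import models

K = 6  # number of grasp types (assumed)
backbone = models.resnet18(weights=None)                      # untrained backbone for the sketch
backbone.fc = torch.nn.Linear(backbone.fc.in_features, K)     # K-way grasp-type head
backbone.eval()

def grasp_probabilities(frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, 3, 224, 224) clip from the palm camera.
    Returns (T, K) grasp-type probability distributions, one per frame."""
    with torch.no_grad():
        return F.softmax(backbone(frames), dim=1)

probs = grasp_probabilities(torch.rand(8, 3, 224, 224))  # dummy 8-frame clip
```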