2019
DOI: 10.3389/fbioe.2019.00316
On the Visuomotor Behavior of Amputees and Able-Bodied People During Grasping

Abstract: Visual attention is often predictive for future actions in humans. In manipulation tasks, the eyes tend to fixate an object of interest even before the reach-to-grasp is initiated. Some recent studies have proposed to exploit this anticipatory gaze behavior to improve the control of dexterous upper limb prostheses. This requires a detailed understanding of visuomotor coordination to determine in which temporal window gaze may provide helpful information. In this paper, we verify and quantify the gaze and motor…

Cited by 7 publications (20 citation statements); references 51 publications.
“…The study of different algorithms for intact subjects can provide a lot of valuable information to help achieve better prosthetic performance (Merad et al, 2018 ). Even so, it is essential to utilize the data collected from amputees (Gregori et al, 2019 ). Intact subjects performed real movements of the hand, while amputees could only attempt imaginary movements without visual and sensory feedback, so that when they perform the same movements at a specific arm position, the residual muscles of amputated arm might produce sEMG patterns different from those of intact arm (Geng et al, 2012 ; Vidovic et al, 2016 ).…”
Section: Hand Gesture Recognition Challenges (mentioning)
confidence: 99%
“…It is based on the implementation provided by Massa and Girshick (2018) that was originally trained on the Common Objects in Context (COCO) dataset (Lin et al, 2014) and fine-tuned on the MeganePro objects. Gregori et al (2019) demonstrated the substantial increase in average precision of this model with respect to the non-fine-tuned model when tested on the MeganePro objects, thanks also to the limited variability and number of objects employed in the MeganePro acquisitions. To reduce the computation time, we extracted and stored the contour of the objects identified by this network only from 2 s before to 3.5 s after the beginning of the grasp identified from the relabeled data.…”
Section: Methods (mentioning)
confidence: 98%
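The excerpt above restricts contour extraction to a fixed temporal window around each grasp onset (2 s before to 3.5 s after). A minimal sketch of that windowing step, assuming frame-indexed video at a known frame rate (the function name and parameters are illustrative, not from the cited work):

```python
def grasp_window_indices(grasp_onset_s, fps, pre_s=2.0, post_s=3.5):
    """Return the (start, stop) frame indices covering pre_s seconds
    before to post_s seconds after the grasp onset, clipped at zero."""
    start = max(0, int((grasp_onset_s - pre_s) * fps))
    stop = int((grasp_onset_s + post_s) * fps)
    return start, stop

# e.g. a grasp beginning 10 s into a 30 fps recording
start, stop = grasp_window_indices(10.0, 30)  # frames 240 .. 405
```

Processing only this slice rather than every frame is what yields the reduction in computation time the authors describe.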
“…Object recognition and segmentation were performed with a Mask R-CNN network (He et al, 2017 ), utilizing the model released by Gregori et al ( 2019 ). The model uses a ResNet-50-Feature Pyramid Network (He et al, 2016 ; Lin et al, 2017 ) as backbone.…”
Section: Methods (mentioning)
confidence: 99%
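Mask R-CNN, as used in the excerpt above, outputs a binary instance mask per detected object, from which an object contour can be derived. A minimal pure-Python sketch of that last step, assuming the mask is given as a 2-D array of 0/1 values (the function is hypothetical, not part of the released model):

```python
def mask_contour(mask):
    """Given a binary instance mask (list of rows of 0/1), return the
    foreground pixels that touch at least one background or
    out-of-bounds neighbour, i.e. the object's contour."""
    h, w = len(mask), len(mask[0])
    contour = set()
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            # 4-connected neighbourhood; image-border pixels count as contour
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    contour.add((y, x))
                    break
    return contour

# A filled 3x3 square: the 8 border pixels form the contour;
# only the centre pixel (1, 1) is interior.
square = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
edge = mask_contour(square)
```

In practice a library routine (e.g. a contour tracer from an image-processing package) would replace this loop, but the idea — keep only foreground pixels adjacent to background — is the same.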