2018
DOI: 10.15344/2456-4451/2018/130
Object Shape Classification Using Spatial Information in Myoelectric Prosthetic Control

Abstract: This paper proposes a novel prosthetic hand control method that incorporates spatial information of target objects obtained with an RGB-D sensor into a myoelectric control procedure. The RGB-D sensor provides not only two-dimensional (2D) color information but also depth information as spatial cues on target objects, and these pieces of information are used to classify objects in terms of shape features. The shape features are then used to determine an appropriate grasp strategy/motion for control of a prosthet…
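The abstract describes a pipeline in which a classified object shape selects a predefined grasp strategy for the prosthetic hand. A minimal sketch of that final mapping step, assuming illustrative shape labels and grasp names (none of these identifiers come from the paper):

```python
# Hypothetical shape-to-grasp lookup: a classified object shape selects
# one of a group of predefined grasp motions. Labels are assumptions
# made for illustration, not taken from the paper.
GRASP_FOR_SHAPE = {
    "cylinder": "power_grasp",
    "sphere": "spherical_grasp",
    "card": "lateral_pinch",
    "small_object": "precision_pinch",
}

def select_grasp(shape_label: str) -> str:
    """Return a grasp strategy for a classified object shape,
    falling back to a power grasp for unrecognized labels."""
    return GRASP_FOR_SHAPE.get(shape_label, "power_grasp")

print(select_grasp("sphere"))  # spherical_grasp
```

In practice the fallback branch matters: a myoelectric controller needs a safe default motion when the shape classifier is uncertain.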

Cited by 4 publications (3 citation statements). References 21 publications.
“…These techniques have been applied to the control system of prosthetic hands to determine the grasp pattern and grasp timing [14], [18]. For example, in the study of [19], Shima et al attempted to recognize the shape of an object using an RGB-D sensor to improve classification accuracy. To control the prosthetic hand over the whole approaching phase, He et al introduced a real-time object detection system and implemented it in a server-client mode to improve computational capability.…”
Section: Related Work
confidence: 99%
“…Bando et al [11] used a convolutional neural network (CNN) to classify 20 classes of objects, and the classification results help select the grasp posture from a group of predefined postures. Shima et al [12] take advantage of object spatial information measured by a depth sensor and classify objects in terms of their shapes. The classified shape then determines the grasp posture.…”
Section: Introduction
confidence: 99%
“…Shima et al combined a depth sensor with an RGB camera to acquire the spatial information of the object. The spatial information is fed into a CNN for shape classification [14]. This study achieved higher classification accuracy by feeding depth information as an auxiliary input to the CNN.…”
Section: Introduction
confidence: 99%
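The last statement describes feeding depth information alongside RGB as an auxiliary input to a CNN. One common way to do this (a hedged sketch, not necessarily the paper's exact scheme) is to normalize the depth map and stack it as a fourth channel, so the network receives an H x W x 4 tensor instead of H x W x 3. Image size and depth range below are illustrative assumptions:

```python
import numpy as np

# Illustrative inputs: an RGB frame and a depth map in millimeters,
# as an RGB-D sensor might produce them. Sizes/ranges are assumptions.
H, W = 64, 64
rgb = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)
depth_mm = np.random.randint(300, 2000, (H, W), dtype=np.uint16)

# Scale colors to [0, 1] and min-max normalize depth over its
# assumed working range before handing the tensor to a CNN.
rgb_f = rgb.astype(np.float32) / 255.0
depth_f = (depth_mm.astype(np.float32) - 300.0) / (2000.0 - 300.0)

# Stack depth as a fourth channel: shape becomes (H, W, 4).
rgbd = np.concatenate([rgb_f, depth_f[..., None]], axis=-1)
print(rgbd.shape)  # (64, 64, 4)
```

The alternative design, also consistent with "auxiliary input", is a two-stream network that processes RGB and depth separately and fuses features later; channel stacking is simply the smallest change to an existing RGB classifier.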