2018
DOI: 10.1038/sdata.2018.101

Human grasping database for activities of daily living with depth, color and kinematic data streams

Abstract: This paper presents a grasping database collected from multiple human subjects for activities of daily living in unstructured environments. The main strength of this database is the use of three different sensing modalities: color images from a head-mounted action camera, distance data from a depth sensor on the dominant arm, and upper-body kinematic data acquired from an inertial motion capture suit. 3826 grasps were identified in the data collected during 9 hours of experiments. The grasps were grouped accord…
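To make the structure described in the abstract concrete, the sketch below shows one way the three time-aligned streams (color, depth, upper-body kinematics) and per-grasp taxonomy labels could be iterated together in Python. This is a minimal illustration only: the file layout (index.csv, color/, depth/, kinematics/), field names, and array shapes are assumptions for this sketch, not the dataset's published format.

```python
# Hypothetical loader sketch for a multimodal grasping dataset like the one
# described above. Directory layout, file names, and array shapes are
# assumptions for illustration; consult the dataset documentation for the
# actual on-disk format.
from dataclasses import dataclass
from pathlib import Path

import cv2          # color frames from the head-mounted camera
import numpy as np  # depth maps and kinematic vectors

@dataclass
class GraspSample:
    color: np.ndarray       # HxWx3 color image (BGR, as loaded by OpenCV)
    depth: np.ndarray       # HxW distance map from the arm-mounted sensor
    kinematics: np.ndarray  # upper-body joint angles from the IMU suit
    grasp_label: str        # taxonomy category assigned to the grasp

def iter_samples(root: Path):
    """Yield time-aligned samples, assuming a hypothetical index.csv manifest
    with one (frame_id, label) row per annotated grasp."""
    index = np.atleast_2d(
        np.genfromtxt(root / "index.csv", delimiter=",", dtype=str, skip_header=1)
    )
    for frame_id, label in index:
        color = cv2.imread(str(root / "color" / f"{frame_id}.png"))
        depth = np.load(root / "depth" / f"{frame_id}.npy")
        kinematics = np.load(root / "kinematics" / f"{frame_id}.npy")
        yield GraspSample(color, depth, kinematics, label)

# Usage (hypothetical paths): count grasps per taxonomy category.
# from collections import Counter
# counts = Counter(s.grasp_label for s in iter_samples(Path("adl_grasps")))
```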

Cited by 42 publications (39 citation statements). References 20 publications.

“…Hence, previous work has focused on recording grasping activity in other forms like hand joint configuration by manual annotation [49,3], data gloves [20,29] or wired magnetic trackers [54,16] (which can interfere with natural grasping), or model-based hand pose estimation [50]. At a higher level, grasping has been observed through third-person [52,21,36] or first-person [21,6,46] videos, in which frames are annotated with the category of grasp according to a grasp taxonomy [12,23]. Tactile sensors are embedded on a glove [4] or in the object [38] to record grasp contact points.…”
Section: Datasets of Human Grasps (mentioning)
Confidence: 99%
“…A large body of previous work [20,29,36,46,49,3,50,52,21,6] has recorded human grasps, with methods ranging from data gloves that measure joint configuration to manually arranged robotic hands. ContactDB differs significantly from these previous datasets by focusing primarily on the contact resulting from the rich interaction between hand and object.…”
Section: Introduction (mentioning)
Confidence: 99%
“…The chosen gestures induced hand open/close, thumb flexion/extension, wrist flexion/extension, and index extension. These were chosen because they are involved in the most frequent activities of daily living (ADL) [123]. First, each participant was instructed to perform all gestures without constraints (dynamic) as forcefully as possible in a single recording.…”
Section: Experimental Protocol (mentioning)
Confidence: 99%
“…For more DOF, however, direct sEMG control requires the generation of independent sEMG signals and the identification of independent sites for their acquisition, which can be cumbersome for the user and results in a limited number of simultaneously controlled DOF [253]. To increase the range of assistance provided by SymbiHand (e.g., with an active thumb) and to enable the control of more grasps used during ADL [123] (e.g., by adding valves), different sEMG-based motor decoding approaches should be explored. Future work will investigate the possibility of employing regression [254], pattern recognition [253], or EMG-driven model-based techniques [186], [255].…”
Section: Motor Intention Decoding (mentioning)
Confidence: 99%