2019
DOI: 10.1007/s11370-019-00293-8
HANDS: a multimodal dataset for modeling toward human grasp intent inference in prosthetic hands

Abstract: Upper limb and hand functionality is critical to many activities of daily living, and amputation can lead to significant functionality loss for individuals. From this perspective, advanced prosthetic hands of the future are anticipated to benefit from improved shared control between a robotic hand and its human user, but more importantly from the improved capability to infer human intent from multimodal sensor data, giving the robotic hand perception abilities regarding the operational context. S…

Cited by 18 publications (10 citation statements)
References 25 publications
“…HCI enables users to communicate their physiological information with machines for help with manipulating external devices in a more reliable, robust and safe manner. Traditionally, the assessment of physiological activity (e.g., human stress level and mental status) was implemented by monitoring signals such as electroencephalography (EEG) [2] and electromyography (EMG) [3]. However, these measurements require either surface (non-invasive) or implanted (invasive) electrodes and frequent calibration, which increase system cost and decrease user comfort.…”
Section: Introduction
confidence: 99%
“…The module predicts the class of the frame. The frame-level score ŷ(f, c) is calculated as shown in Equation (3).…”
Section: Gaze-driven Object Recognition CNN
confidence: 99%
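The snippet cites Equation (3) without reproducing it. As a hedged illustration only: a common choice for a per-frame class score ŷ(f, c) in a recognition CNN is a softmax over the network's class logits, and the sketch below assumes that form. The function name and shapes are hypothetical, not taken from the cited paper.

```python
import numpy as np

def frame_level_score(logits):
    """Hypothetical per-frame class score y_hat(f, c).

    Assumes the common softmax form over a frame's class logits;
    Equation (3) in the cited paper is not reproduced in the snippet,
    so this is an illustrative stand-in, not the authors' formula.
    """
    z = logits - logits.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)         # normalize to a probability distribution

# one frame, three candidate object classes
scores = frame_level_score(np.array([2.0, 1.0, 0.1]))
```

The scores form a distribution over classes for the frame, so the predicted class is simply the argmax.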
“…To overcome the limitations of traditional control solely based on the electromyographic (EMG) activity of the remaining muscles, promising alternatives consider hybrid systems combining noninvasive motion capture and vision control [1,2]. They include camera vision modules that allow for recognition of the subject's intention to grasp an object and assist visual control of prosthetic arms for object reaching and grasping [3].…”
Section: Introduction and State of the Art
confidence: 99%
“…2) Dataset: The HANDS dataset [19] is a collection of images of graspable everyday objects, including office supplies, utensils, and complex-shaped objects such as toys, captured from the hand-camera perspective at different orientations. The labels are probabilistic, as opposed to the common one-hot encoding, because an object can feasibly be grasped in multiple ways with different preferences.…”
Section: B. Visual Classifier
confidence: 99%
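The probabilistic-label scheme described in the snippet can be sketched in a few lines: instead of a one-hot vector naming a single grasp, each object carries a distribution over grasp types. The class names and probabilities below are invented for illustration and are not the actual HANDS annotations.

```python
import numpy as np

# Assumed grasp-type vocabulary for illustration (not the HANDS label set).
GRASP_TYPES = ["power", "precision", "tripod"]

def one_hot(index, n):
    """Conventional one-hot label: exactly one grasp type is 'correct'."""
    v = np.zeros(n)
    v[index] = 1.0
    return v

# Probabilistic label: a mug might usually afford a power grasp
# but sometimes a precision grasp by the handle.
mug_label = np.array([0.6, 0.3, 0.1])

# One-hot label: a pen affords only a precision grasp here.
pen_label = one_hot(1, len(GRASP_TYPES))
```

Training against such soft targets (e.g., with a cross-entropy loss that accepts probability vectors) lets the classifier express graded grasp preferences rather than a single forced choice.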