2021 IEEE 30th International Symposium on Industrial Electronics (ISIE)
DOI: 10.1109/isie45552.2021.9576369
Feature-based Egocentric Grasp Pose Classification for Expanding Human-Object Interactions

Cited by 6 publications (3 citation statements) | References 22 publications
“…The testing accuracy for the 16 grasp poses using the GNN model is 94.87%. This result outperformed the accuracy of the MLP network, which scored 78.75% in our previous work [34]. This finding demonstrates that the gathered dataset of grasp postures contains essential characteristics.…”
Section: Discussion on Microscopic Level (supporting)
Confidence: 48%
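The statement above sets a graph neural network against an MLP for 16-class grasp pose recognition. As a rough illustration of the graph-based approach (not the authors' architecture, which is not detailed here), the sketch below classifies a 21-keypoint hand skeleton with two plain-PyTorch graph convolutions; the skeleton graph, hidden width, and 3D-coordinate input features are all assumptions.

```python
# Minimal sketch of a GNN grasp classifier over a hand-skeleton graph.
# The 21-joint graph, layer sizes, and inputs are assumptions, not the paper's model.
import torch
import torch.nn as nn

# Edges of an assumed 21-joint hand skeleton: wrist (0) to each finger base,
# then chains of three edges along each finger.
EDGES = [(0, i) for i in (1, 5, 9, 13, 17)]
EDGES += [(i, i + 1) for f in (1, 5, 9, 13, 17) for i in range(f, f + 3)]

def normalized_adjacency(n: int, edges) -> torch.Tensor:
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = torch.eye(n)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    d = a.sum(dim=1).rsqrt()
    return d.unsqueeze(1) * a * d.unsqueeze(0)

class GraspGCN(nn.Module):
    def __init__(self, n_joints=21, in_dim=3, hidden=64, n_classes=16):
        super().__init__()
        self.register_buffer("adj", normalized_adjacency(n_joints, EDGES))
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):  # x: (batch, 21, 3) joint coordinates
        h = torch.relu(self.adj @ self.lin1(x))  # graph convolution 1
        h = torch.relu(self.adj @ self.lin2(h))  # graph convolution 2
        return self.head(h.mean(dim=1))          # mean-pool joints -> class logits

model = GraspGCN()
logits = model(torch.randn(8, 21, 3))  # dummy batch of 8 hand skeletons
print(logits.shape)                    # torch.Size([8, 16])
```

A model of this shape shares weights across joints and propagates information along the finger chains, a structural prior that an MLP on flattened coordinates lacks, which is one plausible reason for the accuracy gap the statement reports.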
“…First, we conducted a single-participant, task-specific reach-to-grasp cycle experiment. We extended the egocentric vision feature extraction capabilities developed in our earlier research [34]. To simplify the extraction of these characteristics, we applied an object-centered coordinate transformation.…”
Section: Discussion on Mesoscopic Level (mentioning)
Confidence: 99%
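The statement above credits an object-centered coordinate transformation with simplifying feature extraction. Below is a minimal sketch of such a transformation, assuming the object's pose (rotation R, translation t) in the egocentric camera frame is known; the function name and example values are hypothetical, not the citing paper's pipeline.

```python
# Re-express camera-frame hand keypoints in an object-centered frame.
# Assumes the object pose (r_obj, t_obj) in the camera frame is available.
import numpy as np

def to_object_frame(points_cam: np.ndarray,
                    r_obj: np.ndarray,
                    t_obj: np.ndarray) -> np.ndarray:
    """Map (N, 3) camera-frame points into the object frame: p_obj = R^T (p_cam - t).

    Row-vector form: (p_cam - t) @ R applies R^T to each point.
    """
    return (points_cam - t_obj) @ r_obj

# Example: object 30 cm ahead of the camera, rotated 90 deg about its z-axis.
theta = np.pi / 2
r_obj = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
t_obj = np.array([0.0, 0.0, 0.3])
fingertip_cam = np.array([[0.05, 0.0, 0.25]])  # one fingertip, camera frame (m)
print(to_object_frame(fingertip_cam, r_obj, t_obj))  # [[ 0.   -0.05 -0.05]]
```

Expressing keypoints relative to the object makes the features largely invariant to where the camera and object sit in the scene, so the same grasp yields similar features across viewpoints.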
“…Kinematic finger models [45] have been widely studied, especially in the industrial sector, in applications such as hand modeling and simulation, hand motion coordination, articulated human hands, prosthetic hands, and robotics. Our previous work [46] used a stereo infrared (SIR) camera from Leap Motion to capture the positions of the hand and fingers. This sensor tracks fingers within a 3D zone extending from 10 cm to 75 cm from the device, with a typical field of view of 170° × 170°.…”
Section: Kinematic Finger Model (mentioning)
Confidence: 99%
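As a concrete illustration of what a kinematic finger model computes (generic planar forward kinematics, not the specific formulation of [45] or [46]), the sketch below derives a fingertip position from three relative joint angles and phalanx lengths; the link lengths are illustrative assumptions.

```python
# Forward kinematics of a 3-link planar finger (MCP, PIP, DIP joints).
# Phalanx lengths below are illustrative assumptions, not measured values.
import math

def fingertip_position(angles_rad, lengths_cm):
    """Fingertip (x, y) from relative joint angles and link lengths.

    Each joint angle is measured relative to the previous link, so the
    absolute heading of link k is the cumulative sum of angles 0..k.
    """
    x = y = 0.0
    heading = 0.0
    for angle, length in zip(angles_rad, lengths_cm):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# Example: a moderately flexed index finger, 30 degrees at each joint.
angles = [math.radians(30)] * 3
lengths = [4.5, 2.5, 2.0]  # proximal, middle, distal phalanges (cm, assumed)
print(fingertip_position(angles, lengths))
```

Inverting a chain like this, i.e. recovering joint angles from the 3D joint positions a sensor such as the Leap Motion reports, is the usual bridge between tracked keypoints and a kinematic hand model.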