Efficient 6-DoF Tracking of Handheld Objects from an Egocentric Viewpoint
2018
DOI: 10.1007/978-3-030-01216-8_26

Cited by 6 publications (2 citation statements)
References 21 publications
“…Previous work on egocentric computer vision has concerned object recognition [6,26,28], activity recognition [3,10,29,31,34,37,39,50], gaze prediction [21,27,43,51], video summarization [19,35,40], head motion signatures [36], and human pose estimation [23,49]. In particular, many papers have considered problems related to hands in egocentric video, including hand pose estimation [16,30], hand segmentation [4,46], handheld controller tracking [33], and hand gesture recognition [8]. However, to our knowledge, no prior work has considered egocentric hand-based person identification.…”
Section: Related Work (mentioning)
confidence: 99%
“…Hand pose estimation has drawn increasing attention for decades [1,3,9,5,37,32,6,21,25,36,40]. Recent research efforts can be categorized by their input data forms, which primarily include 2D RGB images and 3D RGBD images with depth map information.…”
Section: Vision-based Hand Pose Estimation (mentioning)
confidence: 99%