2022
DOI: 10.1007/978-3-031-19818-2_8
Fine-Grained Egocentric Hand-Object Segmentation: Dataset, Model, and Applications

Cited by 13 publications (3 citation statements)
References 73 publications
“…Although the analysis of hand-object interactions mostly involves bounding box annotations, a few works have focused on studying hand-object relations using semantic segmentation mask annotations (González-Sosa et al., 2021; Zhang et al., 2022a; Darkhalil et al., 2022; Tokmakov et al., 2023). These works focus on semantic segmentation of hands and active objects in egocentric images (González-Sosa et al., 2021; Zhang et al., 2022a) or videos (Darkhalil et al., 2022; Tokmakov et al., 2023). Darkhalil et al. (2022) defined and predicted hand-object relations, including cases where the on-hand glove is in contact with an object in the environment.…”
Section: State-of-the-art Papers
confidence: 99%
“…Due to the massive scale and unconstrained nature of Ego4D, it has proved to be useful for various tasks including action recognition (Liu et al., 2022a; Lange et al., 2023), action detection (Wang et al., 2023a), visual question answering (Bärmann & Waibel, 2022), active speaker detection (Wang et al., 2023d), natural language localisation, natural language queries (Ramakrishnan et al., 2023), gaze estimation (Lai et al., 2022), persuasion modelling for conversational agents (Lai et al., 2023b), audio-visual object localisation (Huang et al., 2023a), hand-object segmentation (Zhang et al., 2022b) and action anticipation (Ragusa et al., 2023a; Pasca et al., 2023; Mascaró et al., 2023). New tasks have also been introduced thanks to the diversity of Ego4D, e.g.…”
Section: General Datasets
confidence: 99%
“…Hand-object grasp reconstruction also employs contact to refine the hand and object pose estimation [5,15,20,52,54]. In addition, some works [36,47,62] detect hands and classify their physical contact state into self-contact, person-person contact, and person-object contact. Although they consider the relationship between hands and other objects in the scene, they detect only a rough bounding box or boundary for the hand, instead of a finer-grained contact area.…”
Section: Related Work
confidence: 99%