2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv51458.2022.00383
Inpaint2Learn: A Self-Supervised Framework for Affordance Learning

Cited by 4 publications (2 citation statements)
References 26 publications

“…Previous research has focused on predicting affordances using computer vision [17], [18]. However, good-quality datasets are sparse, which some groups, such as Zhang et al. [19], try to address; moreover, observational information can only be used for associations, in contrast to the causal learning enabled by interventions [11], and it neglects the central role of embodiment for robots and cognitive systems [20]. In earlier work, we demonstrated the usefulness of interventions for learning causal dependencies between actions in order to make more profound sense of human demonstrations in a shared environment [21].…”
Section: Related Work
confidence: 99%
“…Despite their importance and potential benefits, identifying the locations and moments of physical contacts in 3D environments remains a challenging problem that requires complex contextual data and advanced processing methods, such as wearable sensors and computer vision algorithms. Unsurprisingly, various methods for body-contact analysis have been proposed and evaluated in the fields of human activity recognition (HAR) and human-scene interaction (HSI) [2]-[7]. These methods often consider the physical affordances of a target object and 3D data to identify the interaction between the actor's body part and the object, while utilizing or developing novel sensing and processing approaches such as depth cameras, infrared (IR) cameras, inertial measurement units (IMUs), and light detection and ranging (LiDAR) sensors.…”
Section: Introduction
confidence: 99%