2013
DOI: 10.1177/0278364913478446

Learning human activities and object affordances from RGB-D videos

Abstract: Understanding human activities and object affordances are two very important skills, especially for personal robots which operate in human environments. In this work, we consider the problem of extracting a descriptive labeling of the sequence of sub-activities being performed by a human, and more importantly, of their interactions with the objects in the form of associated affordances. Given a RGB-D video, we jointly model the human activities and object affordances as a Markov random field where the nodes re…
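The abstract describes a joint Markov random field over sub-activity and object-affordance nodes. As a rough sketch only (the symbols and feature maps below are illustrative and not the paper's exact formulation), a pairwise MRF of this kind scores a joint labeling y of all nodes, given observed features x and learned weights w, as:

```latex
% Illustrative pairwise MRF scoring function (a sketch, not the paper's exact model).
% V: sub-activity and object nodes; E: object-object, object-sub-activity,
% and temporal edges; phi, psi: node and edge feature maps.
E(\mathbf{y} \mid \mathbf{x}; \mathbf{w}) =
    \sum_{i \in \mathcal{V}} \mathbf{w}_{y_i}^{\top} \phi_i(\mathbf{x})
  + \sum_{(i,j) \in \mathcal{E}} \mathbf{w}_{y_i y_j}^{\top} \psi_{ij}(\mathbf{x}),
\qquad
\hat{\mathbf{y}} = \arg\max_{\mathbf{y}} E(\mathbf{y} \mid \mathbf{x}; \mathbf{w}).
```

Labeling a video then amounts to the maximization on the right, with the weights learned from annotated videos; the paper describes a max-margin (structural-SVM-style) learning procedure.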

Cited by 602 publications (548 citation statements). References 64 publications.

Citation statements (ordered by relevance):
“…However, they manually define affordances in terms of kinematic and dynamic constraints. Recently, Koppula et al [23,21] show that human-actor based affordances are essential for robots working in human spaces in order for them to interact with objects in a human-desirable way. They applied it to look-ahead reactive planning for robots.…”
Section: Related Work (mentioning)
confidence: 99%
“…We use the OpenNI skeleton tracker, to obtain the skeleton poses in these RGBD videos. We obtained the ground-truth object bounding box labels via labeling and SIFT-based tracking using the code given in [23]. The dataset is publicly available (along with open-source code): http://pr.cs.cornell.edu/humanactivities/data.php.…”
Section: Generating Physically-grounded Affordances (mentioning)
confidence: 99%
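The statement above refers to SIFT-based tracking of hand-labeled object bounding boxes. The sketch below illustrates that general idea with OpenCV (assuming `cv2.SIFT_create` is available); it is an illustration of the technique only, not the code released with [23], and the function name `propagate_box` is hypothetical.

```python
# Minimal sketch of SIFT-based bounding-box propagation between two frames.
# Illustration only: not the authors' released labeling tool.
import cv2
import numpy as np


def propagate_box(prev_gray, next_gray, box):
    """Estimate the labeled box (x, y, w, h) in the next frame from SIFT matches."""
    x, y, w, h = box
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    # Detect and describe keypoints inside the current box and in the next frame.
    mask = np.zeros(prev_gray.shape, dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255
    kp1, des1 = sift.detectAndCompute(prev_gray, mask)
    kp2, des2 = sift.detectAndCompute(next_gray, None)
    if des1 is None or des2 is None:
        return box  # fall back to the previous box if no features were found

    # Lowe's ratio test to keep only distinctive matches.
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:
        return box

    # Shift the box by the median displacement of the matched keypoints.
    shifts = np.array([np.array(kp2[m.trainIdx].pt) - np.array(kp1[m.queryIdx].pt)
                       for m in good])
    dx, dy = np.median(shifts, axis=0)
    return (int(round(x + dx)), int(round(y + dy)), w, h)
```

In practice one would label the box in the first frame, apply `propagate_box` frame by frame, and re-label whenever the tracked box drifts.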