2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops 2009
DOI: 10.1109/iccvw.2009.5457583
Abstract: We introduce the publicly available TUM Kitchen Data Set as a comprehensive collection of activity sequences recorded in a kitchen environment equipped with multiple complementary sensors. The recorded data consists of observations of naturally performed manipulation tasks as encountered in everyday activities of human life. Several instances of a table-setting task were performed by different subjects, involving the manipulation of objects and the environment. We provide the original video sequences, fullbody…

Cited by 156 publications
(153 citation statements)
References 18 publications
“…The TUM Kitchen data set was recorded for video-based activity recognition [17]. It also contains RFID and reed switch data, but it does not include on-body sensors.…”
Section: B. Datasets for Activity Recognition
Confidence: 99%
“…Note that both (15) and (16) can be solved efficiently using a variation of (6). The objective function of (14) is minimized using the Concave-Convex Procedure (CCCP) [14].…”
Section: B. Max-Margin Learning
Confidence: 99%
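The CCCP mentioned in the citation above minimizes a difference of two convex functions by repeatedly linearizing the concave part and solving the resulting convex subproblem. A minimal sketch of that iteration, on a toy 1-D objective chosen here for illustration (the function names and example objective are assumptions, not from the cited paper):

```python
import numpy as np

# CCCP sketch for an objective f(x) = u(x) - v(x) with u, v convex.
# Each step linearizes v at the current point x_t and solves the
# convex surrogate, i.e. the fixed-point update  grad_u(x_{t+1}) = grad_v(x_t).
# Toy example: u(x) = x^4, v(x) = 2x^2, so f(x) = x^4 - 2x^2 (minima at x = ±1).

def cccp(grad_u_inv, grad_v, x0, iters=60):
    """Run CCCP when the convex subproblem has a closed-form solution."""
    x = x0
    for _ in range(iters):
        # grad_u(x_new) = grad_v(x): invert grad_u to get the next iterate.
        x = grad_u_inv(grad_v(x))
    return x

# grad_u(x) = 4x^3, so grad_u_inv(g) = cbrt(g / 4); grad_v(x) = 4x.
x_star = cccp(lambda g: np.cbrt(g / 4.0), lambda x: 4.0 * x, x0=0.5)
print(round(x_star, 4))  # converges to the local minimum at x = 1 from a positive start
```

Each surrogate upper-bounds f, so the objective value is non-increasing across iterations; CCCP converges to a stationary point, not necessarily a global minimum.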
“…The TUM-Kitchen dataset [6] is recorded in a home-care scenario where subjects perform a few daily activities in a kitchen. The kitchen is equipped with a set of ambient sensors (i.e., multiple RFID tag readers and magnetic sensors) and four static overhead cameras.…”
Section: Related Work
Confidence: 99%
“…Ziebart et al. [34] predict people's future locations, and Kitani et al. [12] forecast human actions by considering the physical environment. Other works involving daily activities include daily action classification or summarization from egocentric videos [7,14,17], fall detection [15], and classification of cooking actions [11,21,23,26].…”
Section: Related Work
Confidence: 99%