2013
DOI: 10.1007/978-3-642-37431-9_40

Egocentric Activity Monitoring and Recovery

Abstract: This paper presents a novel approach for real-time egocentric activity recognition in which component atomic events are characterised in terms of binary relationships between parts of the body and manipulated objects. The key contribution is to summarise, within a histogram, the relationships that hold over a fixed time interval. This histogram is then classified into one of a number of atomic events. The relationships encode both the types of body parts and objects involved (e.g. wrist, hammer) together with …
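To make the abstract's pipeline concrete, here is a minimal Python sketch of the core idea: count, over a fixed window of frames, which relations hold between tracked entities, and classify the normalised histogram into an atomic event. The relation vocabulary, event labels, and the linear SVM are illustrative assumptions, not the paper's exact components.

```python
# Minimal sketch: summarise, over a fixed time window, which relations
# hold between tracked entities (body parts and objects), and classify
# the resulting histogram into an atomic event. The vocabulary and the
# toy training data below are assumptions for illustration only.
from collections import Counter
import numpy as np
from sklearn.svm import SVC

# Hypothetical vocabulary of (entity_a, entity_b, relation) triples.
VOCAB = [
    ("wrist", "hammer", "touching"),
    ("wrist", "hammer", "near"),
    ("hammer", "nail", "touching"),
    ("hammer", "nail", "near"),
]
INDEX = {t: i for i, t in enumerate(VOCAB)}

def window_histogram(frames):
    """frames: list of per-frame sets of relation triples.
    Returns a histogram of how often each vocabulary relation
    held during the window, normalised by window length."""
    counts = Counter(t for frame in frames for t in frame if t in INDEX)
    hist = np.zeros(len(VOCAB))
    for t, c in counts.items():
        hist[INDEX[t]] = c
    return hist / max(1, len(frames))

# Toy training data: two atomic events with characteristic histograms.
grasp = [[{("wrist", "hammer", "near")}] * 6 + [{("wrist", "hammer", "touching")}] * 4]
strike = [[{("hammer", "nail", "near")}] * 3 + [{("hammer", "nail", "touching")}] * 7]
X = [window_histogram(w) for w in grasp + strike]
y = ["grasp_hammer", "strike_nail"]

clf = SVC(kernel="linear").fit(X, y)
test = [{("hammer", "nail", "touching")}] * 10
print(clf.predict([window_histogram(test)]))  # -> ['strike_nail']
```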

Cited by 31 publications (35 citation statements). References: 29 publications.
“…However, due to the high degree of discretisation of space and time, it is often not possible to discern similar-looking activities that are performed at a different scale or speed. Quantitative spatial representations, on the other hand, are able to encode such finer motions in an activity, as seen in previous work [35]. In our approach we make use of various quantitative spatial representations to aid our model in the recognition problem.…”
Section: Quantitative Spatial Representations (F3)
mentioning
confidence: 99%
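As a rough illustration of what such quantitative spatial representations can capture, in contrast to coarse qualitative discretisation, the sketch below computes metric features between two tracked trajectories; the feature choices and names are assumptions, not the cited paper's definitions.

```python
# Illustrative quantitative (metric) spatial features between two tracked
# points; these retain the scale and speed information that a coarse
# qualitative relation (e.g. just 'near'/'far') would discard.
import numpy as np

def quantitative_features(traj_a, traj_b, fps=30.0):
    """traj_a, traj_b: (T, 2) arrays of image coordinates per frame.
    Returns per-frame distance, approach/retreat speed, and bearing."""
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    diff = b - a
    dist = np.linalg.norm(diff, axis=1)
    speed = np.gradient(dist) * fps          # rate of approach or retreat
    bearing = np.arctan2(diff[:, 1], diff[:, 0])
    return np.stack([dist, speed, bearing], axis=1)
```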
“…SURF (Bay et al., 2006)) in an image (top-left), 2) filtered keypoints based on their strength, with assigned codewords using k-means clustering (top-right), 3) extraction of pairwise relations between keypoints belonging to the same codeword (middle row), 4) histogram of oriented pairwise relations (HOPR) representation of these extracted relations, which is used for framewise classification of the activity using a classifier. The wrist marker in the images is used for the detection and tracking of the wrist in the existing method (Behera et al., 2012b). Real-world activity recognition systems utilize a bag-of-visual-words paradigm, which uses spatio-temporal features (Schuldt et al., 2004; Dollár et al., 2005; Blank et al., 2005; Ryoo and Aggarwal, 2009). These features are shown to be robust to changes in lighting and invariant to affine transformations.…”
Section: Classifier: Histogram of Oriented Pairwise Relations (HOPR)
mentioning
confidence: 99%
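The four numbered steps quoted above outline the HOPR construction. Below is a hedged Python/OpenCV sketch of that pipeline; ORB stands in for SURF (which requires opencv-contrib and is patent-encumbered), and the keypoint budget, codeword count, and bin counts are assumptions rather than the authors' settings.

```python
# Rough sketch of the HOPR pipeline as described: detect keypoints,
# filter by response strength, assign codewords via k-means, form
# pairwise relations within each codeword, and histogram their
# orientations for framewise classification.
import itertools
import cv2
import numpy as np

def hopr_descriptor(gray, k=8, keep=100, angle_bins=9):
    """gray: a uint8 grayscale frame. Returns a normalised
    orientation histogram over within-codeword keypoint pairs."""
    orb = cv2.ORB_create(nfeatures=500)
    kps, desc = orb.detectAndCompute(gray, None)
    if desc is None or len(kps) < k:
        return np.zeros(angle_bins)
    # 1) keep the strongest keypoints by detector response
    order = np.argsort([-p.response for p in kps])[:keep]
    pts = np.float32([kps[i].pt for i in order])
    desc = np.float32(desc[order])
    # 2) assign codewords by k-means over the descriptors
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(desc, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    # 3) pairwise orientations between keypoints sharing a codeword
    hist = np.zeros(angle_bins)
    for w in range(k):
        members = pts[labels.ravel() == w]
        for p, q in itertools.combinations(members, 2):
            ang = np.arctan2(q[1] - p[1], q[0] - p[0]) % np.pi
            hist[int(ang / np.pi * angle_bins) % angle_bins] += 1
    # 4) normalised histogram, fed framewise to any classifier
    return hist / max(1.0, hist.sum())
```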
“…1. The proposed approach is in contrast to traditional approaches, where interactions between objects and wrists are often used for recognising activities (Fathi et al., 2011a; Gupta and Davis, 2007; Behera et al., 2012b; Behera et al., 2012a). Such approaches use pre-trained object detectors.…”
Section: Classifier: Histogram of Oriented Pairwise Relations (HOPR)
mentioning
confidence: 99%
“…We evaluate our framework using an egocentric paradigm for recognizing complex manipulative tasks of assembling parts of a pump system in an industrial environment¹. We compare our approach with our previous work in [1], which models the wrist-object and object-object interactions using qualitative and functional relationships. The accuracy of the proposed approach is 68.56% (using SIFT and STIP), better than that of the method in [1], which is 52.09%.…”
¹ Dataset and source code are available at www.engineering.leeds.ac.uk/
mentioning
confidence: 99%
“…We compare our approach with our previous work in [1], which models the wrist-object and object-object interactions using qualitative and functional relationships. The accuracy of the proposed approach is 68.56% (using SIFT and STIP), better than that of the method in [1], which is 52.09%. We also evaluated a bag-of-visual-features approach, and its performance is 63.19%.…”
mentioning
confidence: 99%