2010 Canadian Conference on Computer and Robot Vision
DOI: 10.1109/crv.2010.54
Human Action Recognition Using Salient Opponent-Based Motion Features

Cited by 12 publications (13 citation statements). References 20 publications.
“…Some other tasks like object recognition, tracking, and detection can also be refined by incorporating visual attention information [16,25,22]. Most of the state-of-the-art methods have exploited the contrast and the uniqueness properties in either local or global context to extract saliency regions [14,2,6,20,32].…”
Section: Introduction
confidence: 99%
“…In their filtering framework, the motion-energy map is extracted from motion patterns captured under lateral viewing conditions. This approach was further improved by Shabani et al. (2010) for modelling human actions. In this paper, a new template feature, i.e., GSI, is introduced, based on an improved version of Adelson's motion energy model.…”
Section: Introduction
confidence: 99%
“…To obtain a response insensitive to contrast polarity, the energy of the quadrature filter pair, $R = (G_k \ast u_0)^2 + (G_k^h \ast u_0)^2$, is utilized. The local maxima in the motion energy map localize the salient features [5,27] (Fig. 1(a)).…”
Section: Salient Feature Detection
confidence: 99%
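The quoted energy formula can be sketched in a few lines. The snippet below is a minimal 1-D illustration, not the authors' implementation: it assumes a Gabor pair for the even-phase kernel $G_k$ and its odd-phase (Hilbert) partner $G_k^h$, with made-up parameter values, and squares and sums the two filter responses so the result is insensitive to contrast polarity. The helper names (`gabor_pair`, `motion_energy`, `local_maxima`) are hypothetical.

```python
import numpy as np

def gabor_pair(size=21, freq=0.2, sigma=4.0):
    """Even/odd (quadrature) Gabor kernels; parameters are illustrative."""
    x = np.arange(size) - size // 2
    env = np.exp(-x**2 / (2.0 * sigma**2))
    return env * np.cos(2 * np.pi * freq * x), env * np.sin(2 * np.pi * freq * x)

def motion_energy(u0, g_even, g_odd):
    """R = (G_k * u0)^2 + (G_k^h * u0)^2 — phase- and polarity-insensitive."""
    e = np.convolve(u0, g_even, mode="same")
    o = np.convolve(u0, g_odd, mode="same")
    return e**2 + o**2

def local_maxima(R, radius=3):
    """Indices where the energy map peaks locally (above its mean)."""
    return [i for i in range(radius, len(R) - radius)
            if R[i] == R[i - radius:i + radius + 1].max() and R[i] > R.mean()]
```

Because both responses are squared, flipping the sign of the input signal leaves the energy map unchanged, which is exactly the contrast-polarity insensitivity the quoted passage describes.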
“…For action classification, we incorporated the same bag-of-words (BOW) setting used in [5,27,32]. Salient features are clustered, based on their 3DSIFT descriptors [26], into visual words using the K-means algorithm with random seed initialization.…”
Section: Action Classification Test
confidence: 99%
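The BOW pipeline the passage describes — cluster descriptors into visual words, then represent each video as a histogram over those words — can be sketched as follows. This is a generic sketch, not the cited papers' code: it uses a minimal Lloyd's-algorithm K-means with random seed initialization in place of a library call, and toy 8-dimensional vectors standing in for 3DSIFT descriptors.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's K-means with random seed initialization."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each descriptor to its nearest center (visual word).
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):  # leave a center untouched if its cluster is empty
                centers[j] = pts.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantize descriptors to nearest word; return a normalized histogram."""
    labels = np.argmin(((descriptors[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

In the full pipeline, one histogram per video would then be fed to a classifier; the vocabulary size and descriptor dimensionality here are placeholders.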