2008
DOI: 10.1007/978-3-540-88693-8_60

Motion Context: A New Representation for Human Action Recognition

Cited by 110 publications (85 citation statements)
References 17 publications

“…Nevertheless, this approach remains limited to offline applications. Other approaches, such as (Tabia et al., 2012) and (Zhang et al., 2008), use polar-space representations to characterize activities. However, they compute their descriptors over entire sequences and thus do not explicitly provide online recognition capabilities.…”
Section: Evaluation and Results
Confidence: 99%
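
The offline limitation this quote points at follows from the descriptor itself: a polar-space representation is only defined once all of a sequence's motion points are available. Below is a minimal sketch of such a whole-sequence polar descriptor; the function name, the centroid as reference point, and the bin counts are illustrative assumptions, not the published method of either cited paper.

```python
import numpy as np

def polar_sequence_descriptor(points, n_angle_bins=8, n_radius_bins=4):
    """Accumulate 2D motion points from an entire clip into a single
    angle/radius histogram around their centroid (illustrative sketch).

    points: (N, 2) array of (x, y) locations collected over all frames.
    """
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)            # reference point: sequence centroid
    d = points - center
    angles = np.arctan2(d[:, 1], d[:, 0])   # angular coordinate, in [-pi, pi]
    radii = np.hypot(d[:, 0], d[:, 1])      # radial coordinate

    hist, _, _ = np.histogram2d(
        angles, radii,
        bins=[n_angle_bins, n_radius_bins],
        range=[[-np.pi, np.pi], [0.0, radii.max() + 1e-9]],
    )
    return hist.ravel() / max(hist.sum(), 1.0)   # L1-normalized descriptor
```

Because the centroid and radius range depend on every frame, the histogram cannot be emitted until the sequence ends, which is exactly why such whole-sequence descriptors do not directly support online recognition.
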
“…Methods based on spatiotemporal interest points can provide a rich description and expression. In this method [30], spatiotemporal words are composed of spatiotemporal interest points extracted from each segment of the video sequence. In complicated videos, these can be expressed by the response function of a linear filter.…”
Section: Intelligent Behavioral Analysis
Confidence: 99%
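
The "response function of a linear filter" mentioned here is commonly realized as spatial Gaussian smoothing followed by a quadrature pair of 1D temporal Gabor filters, in the spirit of widely used periodic-motion detectors; interest points are the local maxima of the response. The sketch below is an assumed illustration of that construction, and the filter parameters are made up for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve1d

def st_interest_response(video, sigma=2.0, tau=1.5, omega=0.25):
    """Response function built from linear filters: spatial Gaussian
    smoothing plus a quadrature pair of temporal Gabor filters
    (a sketch, not the exact detector of the cited method).

    video: (T, H, W) grayscale array.
    """
    # Spatial smoothing, frame by frame (no smoothing along time).
    smoothed = gaussian_filter(video.astype(float), sigma=(0, sigma, sigma))

    # 1D temporal Gabor quadrature pair.
    t = np.arange(-10, 11, dtype=float)
    envelope = np.exp(-t**2 / tau**2)
    h_even = -np.cos(2 * np.pi * omega * t) * envelope
    h_odd = -np.sin(2 * np.pi * omega * t) * envelope

    even = convolve1d(smoothed, h_even, axis=0)
    odd = convolve1d(smoothed, h_odd, axis=0)
    return even**2 + odd**2   # high values mark spatiotemporal interest points
```

Cuboids cut around the response maxima would then be described and vector-quantized to form the spatiotemporal words the quote refers to.
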
“…Recently, a common way of representing actions is to use quantized local features around interest points. In [9], the action representation is derived from quantized 2D local features of motion images (visual words) together with quantized angles and distances between the visual words and a reference point. Liu and Shah [10] investigated the optimal number of video words using Maximization of Mutual Information clustering.…”
Section: Related Work
Confidence: 99%
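
The angle/distance quantization described for [9] can be pictured as a shape-context-style joint histogram over visual-word occurrences relative to a reference point. The following sketch is a simplified, assumed rendering of that idea; the bin counts, the log-distance scaling, and the function name are all hypothetical rather than taken from the paper.

```python
import numpy as np

def word_context_histogram(word_xy, word_ids, n_words,
                           ref_point, n_angle_bins=12, n_dist_bins=3):
    """For each visual-word occurrence, quantize its angle and log-distance
    to a reference point, yielding a joint (word, angle-bin, distance-bin)
    histogram (illustrative sketch only).
    """
    d = np.asarray(word_xy, dtype=float) - np.asarray(ref_point, dtype=float)
    angles = np.arctan2(d[:, 1], d[:, 0])            # in [-pi, pi]
    dists = np.log1p(np.hypot(d[:, 0], d[:, 1]))     # log-distance scaling

    # Quantize angles into n_angle_bins equal sectors.
    a_bin = np.clip(((angles + np.pi) / (2 * np.pi) * n_angle_bins).astype(int),
                    0, n_angle_bins - 1)
    # Quantize distances into n_dist_bins equal-width rings.
    d_edges = np.linspace(0.0, dists.max() + 1e-9, n_dist_bins + 1)
    d_bin = np.clip(np.digitize(dists, d_edges) - 1, 0, n_dist_bins - 1)

    hist = np.zeros((n_words, n_angle_bins, n_dist_bins))
    for w, a, r in zip(word_ids, a_bin, d_bin):
        hist[w, a, r] += 1
    return hist / max(hist.sum(), 1.0)               # normalized joint histogram
```

Binding each word's identity to its quantized position relative to the reference point is what lets such a representation capture spatial structure that a plain bag-of-words histogram discards.
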