2012
DOI: 10.1007/978-3-642-33783-3_19

Motion Interchange Patterns for Action Recognition in Unconstrained Videos

Abstract: Action Recognition in videos is an active research field that is fueled by an acute need, spanning several application domains. Still, existing systems fall short of the applications' needs in real-world scenarios, where the quality of the video is less than optimal and the viewpoint is uncontrolled and often not static. In this paper, we consider the key elements of motion encoding and focus on capturing local changes in motion directions. In addition, we decouple image edges from motion edges using…
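To make "capturing local changes in motion directions" concrete, the sketch below illustrates the elementary patch comparison behind Motion Interchange Patterns: a patch around a pixel in the current frame is matched, via SSD, against offset patches in the previous and next frames, and each backward/forward comparison is encoded as a trinary value. Patch size, offsets, the threshold, and all names are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np

def patch_ssd(a, b):
    """Sum of squared differences between two equal-sized patches."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float((d * d).sum())

def mip_trinary(prev_f, cur_f, next_f, y, x, off_prev, off_next,
                half=1, thr=1296.0):
    """Encode one (backward offset, forward offset) comparison as a
    trinary value in {-1, 0, +1}.

    The patch around (y, x) in the current frame is matched against a
    patch displaced by off_prev in the previous frame and by off_next
    in the next frame; the sign says which temporal neighbour explains
    the local content better, i.e. how the motion direction changed.
    No bounds checking, for brevity; the threshold is illustrative.
    """
    c = cur_f[y - half:y + half + 1, x - half:x + half + 1]
    py, px = y + off_prev[0], x + off_prev[1]
    ny, nx = y + off_next[0], x + off_next[1]
    p = prev_f[py - half:py + half + 1, px - half:px + half + 1]
    n = next_f[ny - half:ny + half + 1, nx - half:nx + half + 1]
    ssd_p, ssd_n = patch_ssd(c, p), patch_ssd(c, n)
    if ssd_n < ssd_p - thr:
        return +1   # next-frame match dominates
    if ssd_p < ssd_n - thr:
        return -1   # previous-frame match dominates
    return 0        # neither direction dominates

# Eight compass offsets; comparing every backward/forward pair gives
# an 8x8 grid of trinary codes per pixel, which the full descriptor
# then pools into histograms over space and time.
OFFSETS = [(-4, 0), (-4, 4), (0, 4), (4, 4),
           (4, 0), (4, -4), (0, -4), (-4, -4)]
```

The sketch shows only the elementary comparison; the published descriptor aggregates many such codes per pixel before pooling.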

Cited by 147 publications (125 citation statements) · References 32 publications (57 reference statements)
“…In our experiments, the average number is set to 5,000 for a good result. [18] 82.36 | [12] 93.0 | [19] 88.38 | [9] 93.8 | [10] 90.8 | [17] 90.5 | [6] 95.0…”
Section: Methods
confidence: 99%
“…They capture the motion effect on the local structure of self-similarities, considering 3 neighbourhood circles at different instants. Kliper-Gross et al. extended this idea to Motion Interchange Patterns [12], which encode local changes in different motion directions.…”
Section: Introduction
confidence: 99%
“…In particular, on the HMDB51 dataset, the improvement over the best reported result to date is about 3%.
[14] 93.9% | Le et al [14] 75.8% | Sadanand et al [21] 26.9%
Ji et al [8] 90.2% | B. et al [2] 76.5% | Orit et al [10] 29.2%
Wang et al [25] 95% | Wang et al [25] 84.1% | Wang et al [25] 46.6%
Our Method 95.6% | Our Method 86.56% | Our Method 49.22%…”
Section: Spatio-temporal Context Descriptors
confidence: 99%
“…Further, Wang et al [26] extend the dense sampling approach by tracking the interest points using a dense optical flow field. We have observed that trajectories obtained by a Kanade-Lucas-Tomasi (KLT) tracker [8], densely sampled [26], or one of the variants [6,7,9,27] have been consistently performing well on several benchmark action recognition datasets. Early work by Albright et al [2] indicates that it is sufficient to distinguish human actions by tracking the body joint positions.…”
Section: Introduction
confidence: 99%
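As a concrete companion to the KLT-based tracking mentioned in the quote above, here is a minimal sketch using OpenCV's pyramidal Lucas-Kanade tracker. The frame list, parameter values, and the trajectory structure are illustrative choices, not a reconstruction of any cited method.

```python
import cv2
import numpy as np

def track_klt(frames, max_corners=200):
    """Track corner points through a list of grayscale frames with a
    pyramidal Lucas-Kanade (KLT) tracker; returns one trajectory
    (list of (x, y) points) per surviving feature."""
    pts = cv2.goodFeaturesToTrack(frames[0], maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    trajectories = [[tuple(p.ravel())] for p in pts]
    alive = list(range(len(trajectories)))  # indices of features still tracked
    prev = frames[0]
    for cur in frames[1:]:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, cur, pts, None)
        kept_pts, kept_ids = [], []
        for p, ok, idx in zip(nxt, status.ravel(), alive):
            if ok:  # drop features the tracker lost
                trajectories[idx].append(tuple(p.ravel()))
                kept_pts.append(p)
                kept_ids.append(idx)
        if not kept_pts:
            break
        pts = np.array(kept_pts, dtype=np.float32)
        alive = kept_ids
        prev = cur
    return trajectories
```

Dense-trajectory variants replace the sparse corner detector with a dense sampling grid and propagate points through a dense optical flow field instead of per-point Lucas-Kanade matching.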
“…In the first approach, a variant of trajectories [7,9] and/or new local feature descriptors [6] is proposed. In the second approach, feature histograms are computed in different volumes obtained by dividing the video along height, width and time, and then aggregated.…”
Section: Introduction
confidence: 99%
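The second approach in the quote above, histograms aggregated over sub-volumes of the video, can be sketched as follows. The grid shape, the scalar feature values, and the function name are assumptions made for illustration.

```python
import numpy as np

def grid_histograms(features, video_shape, grid=(3, 2, 2), bins=16):
    """Aggregate per-feature histograms over a spatio-temporal grid.

    features: iterable of (t, y, x, value) tuples, where value is a
              scalar descriptor response assumed to lie in [0, 1).
    video_shape: (num_frames, height, width).
    grid: number of cells along (time, height, width).
    Returns the concatenated, L1-normalized histograms of all cells."""
    T, H, W = video_shape
    gt, gy, gx = grid
    hists = np.zeros((gt, gy, gx, bins), dtype=np.float64)
    for t, y, x, value in features:
        # Map the feature location to its spatio-temporal grid cell.
        ct = min(int(t * gt / T), gt - 1)
        cy = min(int(y * gy / H), gy - 1)
        cx = min(int(x * gx / W), gx - 1)
        b = min(int(value * bins), bins - 1)
        hists[ct, cy, cx, b] += 1.0
    # Normalize each cell's histogram so cell populations are comparable.
    sums = hists.sum(axis=-1, keepdims=True)
    hists = np.where(sums > 0, hists / np.maximum(sums, 1e-12), hists)
    return hists.reshape(-1)
```

Concatenating per-cell histograms keeps coarse layout information (where and when in the video a feature fired) that a single global histogram would discard.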