2011 International Conference on Computer Vision
DOI: 10.1109/iccv.2011.6126397
Action recognition in videos acquired by a moving camera using motion decomposition of Lagrangian particle trajectories

Cited by 132 publications (91 citation statements)
References 25 publications
“…Gilbert et al. [34] describe a hierarchical representation of features in a multi-stage approach to capture the most distinctive components of actions. Wu et al. [35] use optical flow to find motion, thereby avoiding the need for object detection and yielding robust trajectories. These motion features are then used in an SVM for action recognition.…”
Section: Related Work (mentioning)
Confidence: 99%
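To make the flow-driven trajectories in the excerpt above concrete, here is a minimal Python/OpenCV sketch: seed points on a regular grid are advected frame to frame by Farneback optical flow, producing trajectory snippets that could then be described and classified with an SVM. The function name, grid step, and snippet length are illustrative assumptions, not details from [35].

```python
import numpy as np
import cv2

def dense_trajectories(frames, step=8, snippet_len=15):
    """Advect a dense grid of seed points with Farneback optical flow,
    collecting fixed-length trajectory snippets (illustrative sketch)."""
    h, w = frames[0].shape[:2]
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    tracks = [[tuple(p)] for p in pts]
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        for i, (x, y) in enumerate(pts):
            dx, dy = flow[int(y), int(x)]           # flow at the particle
            pts[i] = (np.clip(x + dx, 0, w - 1),    # advect, stay in frame
                      np.clip(y + dy, 0, h - 1))
        for t, p in zip(tracks, pts):
            t.append(tuple(p))
        prev = gray
    return [t[:snippet_len] for t in tracks if len(t) >= snippet_len]
```

A typical pipeline would convert each snippet into a descriptor (e.g., concatenated frame-to-frame displacements) before SVM training, as the excerpt indicates.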
“…Since our focus here is on modeling a spatiotemporal graph of video features, not on particular features, we use two recent, easy-to-implement approaches that were thoroughly evaluated in the literature [1,18].…”
Section: Feature Extraction (mentioning)
Confidence: 99%
“…Feature extraction: Given a video, motion features are extracted in the form of trajectory snippets, which can be either KLT tracks of Harris corners [1] or Lagrangian particle trajectories [18]. Prior work has demonstrated these features to be robust against many challenges of real-world videos, including large variations in camera motion, object pose and scale, and illumination.…”
Section: Introduction (mentioning)
Confidence: 99%
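The first of the two feature types, KLT tracks of Harris corners, amounts to detecting corners and tracking them with pyramidal Lucas-Kanade. A minimal sketch with OpenCV follows; the function name and parameter values are hypothetical rather than taken from [1].

```python
import cv2

def klt_snippets(frames, max_corners=500, snippet_len=15):
    """Track Harris corners with pyramidal Lucas-Kanade (KLT) and
    return trajectory snippets for points that survive tracking."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=7,
                                 useHarrisDetector=True)
    tracks = [[tuple(p.ravel())] for p in p0]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, p0, None)
        alive = status.ravel() == 1
        tracks = [t + [tuple(p.ravel())]            # extend surviving tracks
                  for t, p, ok in zip(tracks, p1, alive) if ok]
        p0 = p1[alive]                              # keep only tracked points
        prev = gray
    return [t[:snippet_len] for t in tracks if len(t) >= snippet_len]
```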
“…Sun et al. [4] used trajectories of SIFT points and encoded motion in three levels of context information: point level, intra-trajectory context, and inter-trajectory context. Wu et al. [5] used a dense trajectory field obtained by tracking densely sampled particles driven by optical flow. They decomposed the trajectories into camera-induced and object-induced components using low-rank optimisation.…”
Section: Introduction (mentioning)
Confidence: 99%
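The decomposition mentioned in this excerpt can be illustrated with a simple stand-in: if camera-induced motion confines background trajectories to a low-dimensional subspace, a truncated SVD of the trajectory matrix separates that component from the object-induced residual. The sketch below rests on that plain SVD assumption and is not the paper's actual low-rank optimisation; the function name and the rank are illustrative.

```python
import numpy as np

def decompose_trajectories(T, rank=3):
    """Split a trajectory matrix T (2N x F: stacked x/y coordinates of
    N particles over F frames) into a low-rank, camera-induced part and
    a residual, object-induced part via truncated SVD (illustrative)."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    camera = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
    objects = T - camera                            # motion not explained by camera
    return camera, objects
```

A robust variant would replace the truncated SVD with an optimisation such as robust PCA (nuclear norm plus an l1 penalty), so that large foreground displacements do not contaminate the estimated camera component.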