2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2015.7298827

Articulated motion discovery using pairs of trajectories

Abstract: We propose an unsupervised approach for discovering characteristic motion patterns in videos of highly articulated objects performing natural, unscripted behaviors, such as tigers in the wild. We discover consistent patterns in a bottom-up manner by analyzing the relative displacements of large numbers of ordered trajectory pairs through time, such that each trajectory is attached to a different moving part on the object. The pairs of trajectories descriptor relies entirely on motion and is more discriminative…
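To make the core idea concrete, the sketch below (Python/NumPy) encodes how one tracked point moves relative to another over time. This is an illustration only, not the authors' exact descriptor; the function name and the normalisation choice are assumptions.

```python
import numpy as np

def relative_displacement_descriptor(traj_a, traj_b):
    """Encode how one trajectory moves relative to another.

    traj_a, traj_b: (T, 2) arrays of (x, y) positions tracked over the
    same T frames, each attached to a different moving part.
    Returns a flat vector of frame-to-frame relative displacements,
    crudely normalised by the mean distance between the two points.
    (Illustrative sketch; not the formulation used in the paper.)
    """
    traj_a = np.asarray(traj_a, dtype=float)
    traj_b = np.asarray(traj_b, dtype=float)
    rel = traj_b - traj_a                  # relative position at each frame
    rel_disp = np.diff(rel, axis=0)        # change in relative position over time
    scale = np.linalg.norm(rel, axis=1).mean() + 1e-8
    return (rel_disp / scale).ravel()
```

Two points on the same rigid part yield a near-constant relative position and hence a near-zero descriptor, while points on different articulating parts (e.g. head vs. paw) produce a distinctive relative-motion signature.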

Cited by 22 publications (15 citation statements)
References 38 publications
“…As some sequences lack pauses between different, but related behaviors (e.g., from walking to running), we also partition based on periodic motion. For this we use time-frequency analysis, as periodic motion patterns like walking, running, or licking typically generate peaks in the frequency domain (examples available on our website Del Pero et al 2015b).…”
Section: Temporal Partitioning (mentioning, confidence: 99%)
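As a rough illustration of this kind of time-frequency check (not the paper's actual partitioning procedure), the NumPy sketch below flags a dominant peak in a plausible gait-frequency band; the band limits and the peak threshold are assumed values.

```python
import numpy as np

def has_periodic_motion(signal, fps, min_hz=0.5, max_hz=5.0, peak_ratio=4.0):
    """Flag periodic motion (walking, running, licking) by a dominant
    peak in the frequency domain.

    signal: 1-D motion signal over time (e.g. mean point displacement
    per frame); fps: frames per second. Band limits and peak_ratio are
    illustrative assumptions, not values from the paper.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                               # remove the DC component
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= min_hz) & (freqs <= max_hz)
    if not band.any():
        return False
    return spec[band].max() > peak_ratio * spec[band].mean()
```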
“…7.4). We publicly released this data at Del Pero et al (2015b), where we also provide foreground masks for each shot computed using Papazoglou and Ferrari (2013).…”
Section: Dataset (mentioning, confidence: 99%)
“…The existing action co-localization works have mainly focused on two scenarios, i.e., co-localization in pairs of videos [18,38,97] and weakly supervised action co-localization with video level labels [20,21,92,93,108,134,143]. Few of these works have considered a fully unconstrained scenario like us, i.e., the numbers and types of common actions are unknown in advance and each video may contain zero, one or several common actions.…”
Section: Thematic Action Discovery and Localization in Collections of Videos (mentioning, confidence: 99%)