Proceedings. 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No.98CB36231)
DOI: 10.1109/cvpr.1998.698620

Motion feature detection using steerable flow fields

Abstract: The

Cited by 14 publications (8 citation statements)
References 12 publications (25 reference statements)
“…An example of using basis set methods (as steerable flow fields) is the work of Fleet et al [34]. The results are good, but the use of a gradient descent solution is heavily dependent on initial conditions and parameters governing movement in the coefficient space.…”
Section: Related Work
confidence: 99%
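The fitting step described above can be illustrated with a small sketch: the flow is modeled as a linear combination of basis flow fields and the coefficients are refined by gradient descent on a robust error. The Geman-McClure penalty, array shapes, step size, and initializations below are illustrative assumptions rather than details taken from Fleet et al.; the non-convex robust objective is what makes the result sensitive to the starting point and to the parameters governing movement in the coefficient space.

```python
import numpy as np

def fit_coefficients(basis, flow, a0, sigma=0.5, lr=0.05, iters=300):
    """Fit coefficients `a` so that sum_k a[k] * basis[k] approximates `flow`
    under a robust (Geman-McClure) penalty, using plain gradient descent.

    Because the robust objective is non-convex, the answer can depend on the
    initialization `a0` and the step size `lr`.  All shapes and parameter
    values here are illustrative assumptions.
    """
    a = np.asarray(a0, dtype=float).copy()
    B = basis.reshape(basis.shape[0], -1)          # (K, N) flattened basis flow fields
    f = flow.reshape(-1)                           # (N,) flattened observed flow
    for _ in range(iters):
        r = B.T @ a - f                            # reconstruction residual
        drho = 2.0 * r * sigma**2 / (r**2 + sigma**2)**2   # d/dr of r^2 / (r^2 + sigma^2)
        a -= lr * (B @ drho)                       # gradient step in coefficient space
    return a

# Toy usage: two synthetic basis fields, a flow built from them,
# and two different starting points in coefficient space.
rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 8, 8, 2))
flow = 0.7 * basis[0] - 0.3 * basis[1]
print(fit_coefficients(basis, flow, a0=[0.0, 0.0]))
print(fit_coefficients(basis, flow, a0=[3.0, -3.0]))
```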
“…To handle complex motions with concise models, Black et al [13] proposed linear parameterized models learned from training examples using principal component analysis (PCA). Similarly, Fleet et al [18] modeled motion features, such as dynamic occlusion edges, and moving bars, using linear combinations of steerable basis flow fields. These linear models constrain the interpretation of image motion, and are used in the same way as translational or affine motion models.…”
Section: Context and Previous Work
confidence: 99%
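As a rough sketch of the linear-model idea in the statement above (learning basis flow fields from training examples and describing new motion by a handful of coefficients), the following runs PCA via SVD on synthetic flows; the training data, sizes, and number of components are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Illustrative training set: M example flow fields of size H x W x 2
# (e.g. synthetic flows around a moving edge; not the authors' data).
rng = np.random.default_rng(1)
M, H, W = 50, 16, 16
train_flows = rng.standard_normal((M, H, W, 2))

# PCA on the flattened flow fields: the top-K right singular vectors
# act as learned basis flow fields of a linear parameterized model.
X = train_flows.reshape(M, -1)
X_mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
K = 4
basis = Vt[:K]                                   # (K, H*W*2) basis flow fields

# A new flow is then described by K coefficients: project and reconstruct.
new_flow = rng.standard_normal((H, W, 2)).reshape(-1)
coeffs = basis @ (new_flow - X_mean)             # linear model parameters
reconstruction = (X_mean + coeffs @ basis).reshape(H, W, 2)
print(coeffs)
```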
“…This use of bottom-up information, along with the temporal prediction of Condensation, allows us to effectively sample the most interesting portions of the state-space. To initialize new states and provide a distribution over their parameters from which to sample, we use a method described by Fleet et al [8] for detecting motion discontinuities. This approach uses a robust, gradient-based optical flow method with a linear parameterized motion model.…”
Section: Low-level Motion-edge Detectors
confidence: 99%
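A minimal sketch of the sampling idea described above, assuming a generic particle set and a low-level detector that proposes candidate states: part of the new sample set follows the temporal dynamics, the rest is seeded near the detections. The state dimension, noise levels, and mixing fraction are hypothetical choices for illustration, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(2)

def resample_and_predict(particles, weights, detections, init_frac=0.2,
                         dyn_noise=0.05, det_noise=0.1):
    """One Condensation-style prediction step with bottom-up initialization.

    particles : (N, D) current particle states (e.g. motion-edge parameters)
    weights   : (N,) normalized particle weights
    detections: (M, D) candidate states proposed by a low-level detector
    A fraction `init_frac` of the new set is sampled around the detections;
    the rest is resampled by weight and diffused by the dynamics.
    All parameter values here are illustrative.
    """
    N, D = particles.shape
    n_init = int(init_frac * N)

    # Temporal prediction: resample according to weight, add process noise.
    idx = rng.choice(N, size=N - n_init, p=weights)
    predicted = particles[idx] + dyn_noise * rng.standard_normal((N - n_init, D))

    # Initialization: sample around the detector outputs.
    det_idx = rng.integers(len(detections), size=n_init)
    seeded = detections[det_idx] + det_noise * rng.standard_normal((n_init, D))

    return np.vstack([predicted, seeded])

# Toy usage: 100 particles in a 3-D state space, two detections.
particles = rng.standard_normal((100, 3))
weights = np.full(100, 1.0 / 100)
detections = np.array([[0.5, 0.1, -0.2], [1.0, 0.0, 0.3]])
print(resample_and_predict(particles, weights, detections).shape)
```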
“…The temporal prior is defined in terms of the posterior distribution at the previous time instant and the temporal dynamics of the discontinuity model. The initialization prior incorporates predictions from a low-level motion feature detector [8]. The posterior distribution over the parameter space, conditioned on image measurements, is typically non-Gaussian.…”
Section: Introduction
confidence: 99%
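One common way to write such a combined prior, with the notation and mixing weight chosen here for illustration rather than taken from the paper, is as a mixture of the temporally propagated posterior and the detector-driven initialization prior:

```latex
% Prior over discontinuity parameters x_t given measurements Z_{t-1}:
% a mixture of temporal prediction and an initialization prior from the
% low-level detector.  The mixing weight \lambda is an illustrative choice.
p(\mathbf{x}_t \mid \mathcal{Z}_{t-1})
  = (1-\lambda)\int p(\mathbf{x}_t \mid \mathbf{x}_{t-1})\,
      p(\mathbf{x}_{t-1} \mid \mathcal{Z}_{t-1})\, d\mathbf{x}_{t-1}
  \;+\; \lambda\, p_{\mathrm{init}}(\mathbf{x}_t)
```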