Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR-94), 1994
DOI: 10.1109/cvpr.1994.323865
MDL-based spatiotemporal segmentation from motion in a long image sequence

Cited by 9 publications (4 citation statements). References 13 publications.
“…The algorithm combines aspects of snakes and region growing, and is guaranteed to converge to a local minimum. In related work, [23] presented a method for spatiotemporal segmentation of long sequences of images based on the MDL principle and simultaneously obtained optimal spatial segmentation and motion estimation without extracting the optic-flow field.…”
Section: Minimum Description Length Principle
confidence: 99%
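As a rough sketch of the MDL principle invoked in the statement above (the notation is illustrative and not taken from the cited paper), the two-part code length minimized over candidate segmentations M of the image data D is

L(D, M) = L(M) + L(D | M),

where L(M) encodes the model (region boundaries and motion parameters) and L(D | M) encodes the data given that model; the segmentation and motion estimate with the shortest total description length are selected jointly.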
“…For automatic extraction of partial contours, various methods of edge constitution have also been reported. Some methods approximate and constitute the edge pixels using curves or ellipses [12], some methods minimize the spatial position of edge pixels using curves [13], and some methods perform spatiotemporal segmentation by minimizing the motion of edge pixels in a long image sequence [14]. In these methods, tracking of contours has not been considered, so that correspondence of contours between images becomes a problem.…”
Section: Introduction
confidence: 99%
“…They find the motion parameters of a human body part by calculating the optical flow of several points within this part and compare them to the movements of a model of the part. Gu et al [48] instead track edge features, defined by their length and contrast, in consecutive images using the optical flow constraint. In the work by Bregler [17] each pixel is represented by its optical flow which is grouped into blobs having coherent motion and represented by a mixture of multivariate Gaussians.…”
Section: Temporal Data
confidence: 99%
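For reference, the optical flow constraint mentioned in this statement is the standard brightness constancy equation (stated here generically, not as the exact formulation used in any of the cited works):

I_x u + I_y v + I_t = 0,

where (u, v) is the image motion at a pixel and I_x, I_y, I_t are the spatial and temporal derivatives of the image intensity; edge tracking as in [48] and flow-blob grouping as in [17] both rely on motion estimates obtained under this constraint.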