2011 18th IEEE International Conference on Image Processing
DOI: 10.1109/icip.2011.6115797
Temporal trimap propagation for video matting using inferential statistics

Cited by 6 publications (10 citation statements)
References 12 publications
“…We apply these independently to each view and frame, generating scribbles for each ground truthed frame. The tri-map propagation (TriProp) approach of Sarim et al was also compared against, and was initialised manually at t = 0 as per [5] using only a single tri-map key as input.…”
Section: Results
confidence: 99%
“…[26] Tri.Prop. [5] uses an Eigenmodel built from the mean (µ) and covariance (C) of RGB colour samples of the background (obtained manually in the first frame). Each pixel's colour c is thresholded via (c − µ) C⁻¹ (c − µ)ᵀ < T_RGBeigen.…”
Section: Results
confidence: 99%
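The Eigenmodel thresholding rule quoted above can be sketched as follows. This is a minimal illustration, not the cited implementation: the function names and the threshold value `T` are illustrative choices, and only the quoted rule — fit µ and C to background RGB samples, then label a pixel as background when its Mahalanobis distance falls below T_RGBeigen — comes from the excerpt.

```python
import numpy as np

def fit_background_eigenmodel(bg_samples):
    """Fit the Eigenmodel: mean and inverse covariance of
    background RGB samples (an N x 3 array)."""
    mu = bg_samples.mean(axis=0)
    C = np.cov(bg_samples, rowvar=False)
    return mu, np.linalg.inv(C)

def classify_background(frame, mu, C_inv, T=9.0):
    """Label pixels as background where the quadratic form
    (c - mu) C^-1 (c - mu)^T is below the threshold T."""
    diff = frame.reshape(-1, 3) - mu
    # Per-pixel Mahalanobis distance squared via the quadratic form.
    d2 = np.einsum('ij,jk,ik->i', diff, C_inv, diff)
    return (d2 < T).reshape(frame.shape[:2])
```

In practice the background samples would be scribbled or selected manually in the first frame, as the excerpt notes; T trades off false background hits against foreground bleed.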
“…Segmentation from multiple wide-baseline views has been proposed by exploiting appearance similarity [6,19,38]. These approaches assume static backgrounds and different colour distributions for the foreground and background [27,6], which limits their applicability to general scenes. To overcome these limitations, the proposed approaches initialise the foreground object segmentation from wide-baseline feature correspondence, followed by joint segmentation and reconstruction.…”
Section: Joint Segmentation and Reconstruction
confidence: 99%