Background subtraction model based on color and depth cues
2013 | DOI: 10.1007/s00138-013-0562-5

Cited by 42 publications (22 citation statements)
References 44 publications
“…The disparity information provided as a benchmark dataset 70 is used in [13]. Figure 20.8 shows an example of input with three types of depth images taken from the dataset described in [7].…”
Section: Disparity Estimation
confidence: 99%
“…This benchmark dataset [7] contains a set of real video sequences including depth information for each sequence. The dataset contains four challenging scenes called suitcase, crossing, labdoor, and LCD screen.…”
Section: Disparity Benchmark Dataset (DBD)
confidence: 99%
“…Researchers often have the same idea: F. Sanchez [7] applies the codebook model algorithm to a 4-channel scene in which each pixel value combines color and depth. Dong Tian [8] proposes a depth-weighted group-wise PCA (DG-PCA) approach that is formulated as a weighted ℓ2,1-norm PCA problem with depth-based group sparsity being introduced.…”
Section: Introduction
confidence: 99%
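
Both citing works describe the same underlying move: extending a per-pixel background model with a depth channel. As a rough illustration of what a depth-extended codebook match test can look like, here is a minimal Python sketch; the class name, thresholds, and the relative depth-tolerance rule are illustrative assumptions, not the implementation of [7] or [8].

```python
import numpy as np

# Illustrative thresholds; the cited papers tune their own values.
COLOR_DIST_MAX = 10.0   # max color distortion for a codeword match
DEPTH_TOL = 0.05        # max relative depth difference for a match

class Codeword:
    """One background codeword for a pixel: a mean RGB color plus a mean depth."""

    def __init__(self, rgb, depth):
        self.rgb = np.asarray(rgb, dtype=float)
        self.depth = float(depth)

    def matches(self, rgb, depth):
        # Standard codebook color distortion: distance from the sample to
        # the line through the origin and the codeword color, so pure
        # brightness changes along that line are tolerated.
        rgb = np.asarray(rgb, dtype=float)
        norm_cw = np.linalg.norm(self.rgb)
        if norm_cw == 0.0:
            color_dist = np.linalg.norm(rgb)
        else:
            proj = rgb @ self.rgb / norm_cw
            color_dist = np.sqrt(max(rgb @ rgb - proj * proj, 0.0))
        # Depth cue: the sample must also sit near the stored background depth.
        depth_ok = abs(depth - self.depth) <= DEPTH_TOL * max(self.depth, 1e-6)
        return color_dist <= COLOR_DIST_MAX and depth_ok

def is_background(codebook, rgb, depth):
    """True if the (color, depth) sample matches any codeword for this pixel."""
    return any(cw.matches(rgb, depth) for cw in codebook)
```

Treating depth as an extra matching constraint rather than a fourth color component keeps the usual color-distortion test unchanged and makes it easy to skip the depth check at pixels where the sensor returns no measurement.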
“…In another way, Fernandez-Sanchez et al. [5] propose a depth-extended Codebook model that fuses range and color information, together with a post-processing mask-fusion stage to get the best of each feature. Results are presented on a complete dataset of stereo images.…”
confidence: 99%
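
The mask-fusion stage mentioned in this excerpt can be sketched as combining two independently computed foreground masks. The rule below (union of the two cues where depth is valid, color alone elsewhere) and the `depth_valid` map are assumptions for illustration only, not the actual post-processing of [5].

```python
import numpy as np

def fuse_masks(color_mask, depth_mask, depth_valid):
    """Fuse color- and depth-based foreground masks.

    color_mask, depth_mask: boolean arrays, True = foreground.
    depth_valid: boolean array, True where stereo matching produced a
    usable depth value (disparity maps often contain holes).
    """
    # Where depth is valid, take the union of the cues so depth can
    # rescue camouflaged foreground (similar color, different depth);
    # fall back to the color mask where depth is missing.
    return np.where(depth_valid, color_mask | depth_mask, color_mask)

# Tiny usage example on a 2x2 frame.
color_mask  = np.array([[True, False], [False, False]])
depth_mask  = np.array([[True, True ], [False, True ]])
depth_valid = np.array([[True, True ], [True,  False]])
print(fuse_masks(color_mask, depth_mask, depth_valid))
# [[ True  True]
#  [False False]]
```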