2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2017.226

A Taxonomy and Evaluation of Dense Light Field Depth Estimation Algorithms

Cited by: 66 publications (34 citation statements)
References: 20 publications
“…The disparity range is discretized for the methods [2], [4], [5]. As suggested in the light field depth estimation challenge held in 2017 LF4CV workshop [21], the number of disparity levels is set to 100 for the method [4] and 256 for the method [2]. For the method [5], the disparity step is set to 0.01, which corresponds to the minimal threshold of bad pixel ratios that we use.…”
Section: Results With Densely Sampled Synthetic Light Fields
confidence: 99%
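The quoted setup discretizes the disparity range either into a fixed number of levels (100 for method [4], 256 for method [2]) or with a fixed step (0.01 for method [5]). A minimal sketch of such a uniform discretization, with illustrative range bounds that are not taken from the source:

```python
def disparity_levels(d_min, d_max, n_levels):
    """Uniformly sample n_levels disparity hypotheses in [d_min, d_max]."""
    step = (d_max - d_min) / (n_levels - 1)
    return [d_min + i * step for i in range(n_levels)]

# Hypothetical range [-2, 2] with 100 levels, as in the setup for [4].
levels = disparity_levels(-2.0, 2.0, 100)
```

With a fixed step of 0.01, as used for method [5], the number of levels instead follows from the width of the disparity range.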
“…According to the metrics defined in [20], [21], experimental results show that the proposed approach outperforms state-of-the-art light field disparity estimation methods for both densely and sparsely sampled LFs. In addition, it does not require any prior information on the disparity range, as [2], [4], [5] do, for example.…”
Section: Introduction
confidence: 99%
“…Ground truth disparity on center view is available for HCI synthetic light fields. Using the evaluation metrics defined in [10,11], we compare in Table 1 our scheme against two state-of-the-art methods, namely *LF [2] and SPO [5]. In our experiments, the number of discretized disparity levels is kept the same as in [11], i.e.…”
Section: Performance Assessment
confidence: 99%
“…Unlike [2,5,7,8], the proposed method does not demand discretization of the disparity space, nor prior knowledge about the disparity range. According to the metrics defined in [10,11], the experiments show that our approach achieves competitive performance compared to state-of-the-art methods that make use of the whole set of light field views.…”
Section: Introduction
confidence: 99%
“…3: Visual comparison of the estimated disparity maps on the center view. We use the same evaluation metrics defined in [21,22]. MSE is the mean-square error, which penalizes large disparity errors on object boundaries, whereas BadPix(α) (the percentage of pixels with an error greater than α, α being set to small values) and Q25 (the error value ×100 at the 25th percentile of the disparity estimates) measure sub-pixel accuracy.…”
Section: Dense Light Fields
confidence: 99%
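The three metrics quoted above can be sketched in plain Python. The helper names are hypothetical, and the definitions follow the quoted descriptions rather than any official benchmark code:

```python
def mse(est, gt):
    """Mean-square error over all pixels; penalizes large disparity errors."""
    return sum((e - g) ** 2 for e, g in zip(est, gt)) / len(est)

def bad_pix(est, gt, alpha=0.07):
    """Percentage of pixels whose absolute disparity error exceeds alpha."""
    bad = sum(1 for e, g in zip(est, gt) if abs(e - g) > alpha)
    return 100.0 * bad / len(est)

def q25(est, gt):
    """Error value * 100 at the 25th percentile of absolute errors
    (simple index-based percentile, as a sketch)."""
    errors = sorted(abs(e - g) for e, g in zip(est, gt))
    return 100.0 * errors[len(errors) // 4]
```

BadPix with a small α and Q25 both stress sub-pixel accuracy, whereas MSE is dominated by the few large errors near object boundaries, which is why the quoted works report all three.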