2019
DOI: 10.1109/tcsvt.2018.2832086

Depth Map Estimation Using Defocus and Motion Cues

Cited by 17 publications (3 citation statements)
References 31 publications
“…[30]-[39] adopted different strategies to obtain reliable depth estimation from a single camera by learning to exploit monocular cues such as shadows, occlusions and relative scales between objects. In this field, a particularly appealing practice consists of training end-to-end models in a self- or semi-supervised manner [12], [36], replacing the need for ground-truth depth labels with image reprojection across different viewpoints according to two main strategies: acquiring images with a single, moving camera [36], [38], [40], [41], or using a stereo camera [10], [12], [37], [42]-[44].…”
Section: A. Depth Estimation (mentioning)
confidence: 99%
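The image-reprojection supervision described in the excerpt above can be made concrete with a short sketch. The following is a minimal, illustrative PyTorch implementation of a photometric reprojection loss in the spirit of the self-supervised methods the survey cites, not the method of this paper; all function names, tensor shapes, and parameters here are assumptions for illustration.

import torch
import torch.nn.functional as F

def reprojection_loss(target, source, depth, K, K_inv, T_src_from_tgt):
    """L1 photometric loss between the target image and the source image
    warped into the target view using predicted depth and a relative pose.

    target, source:  (B, 3, H, W) images
    depth:           (B, 1, H, W) predicted depth for the target view
    K, K_inv:        (B, 3, 3) camera intrinsics and their inverse
    T_src_from_tgt:  (B, 4, 4) pose mapping target-camera to source-camera
    """
    B, _, H, W = target.shape

    # Pixel grid in homogeneous coordinates: (B, 3, H*W).
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=target.dtype, device=target.device),
        torch.arange(W, dtype=target.dtype, device=target.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0)
    pix = pix.reshape(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3-D points in the target camera, move to source frame.
    cam = depth.reshape(B, 1, -1) * (K_inv @ pix)
    ones = torch.ones(B, 1, H * W, dtype=target.dtype, device=target.device)
    cam = torch.cat([cam, ones], dim=1)
    proj = K @ (T_src_from_tgt @ cam)[:, :3, :]

    # Project into the source image and normalise to [-1, 1] for grid_sample.
    z = proj[:, 2, :].clamp(min=1e-6)
    u = 2.0 * (proj[:, 0, :] / z) / (W - 1) - 1.0
    v = 2.0 * (proj[:, 1, :] / z) / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)

    warped = F.grid_sample(source, grid, padding_mode="border",
                           align_corners=True)
    return (warped - target).abs().mean()

Minimising this loss over the depth (and, in the monocular setting, pose) networks is what lets these methods dispense with ground-truth depth labels.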
“…Also, the estimated defocus value is only proportional to the original defocus value, owing to effects such as downsampling, as shown in (11), and matting [54] is required to estimate a full defocus map from a sparse defocus map, etc. We use optics to relate the defocus parameter σ to depth d as given by (13), based on the framework discussed in [56]. Here, k1 and k2 are camera parameters that are always positive: σ = k1 − k2/d, which inverts to d = k2/(k1 − σ). Estimation of defocus is relative in nature, i.e.…”
Section: Applications of BPLC (mentioning)
confidence: 99%
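For illustration, the inverted relation quoted above can be applied directly to a defocus map. The sketch below is a minimal numerical example, not code from the cited work; the values of k1 and k2 are placeholder assumptions, since real values come from camera calibration.

import numpy as np

def depth_from_defocus(sigma, k1=2.0, k2=1.5):
    """Map a defocus map sigma to depth via d = k2 / (k1 - sigma).

    k1, k2: positive camera parameters (placeholder values here).
    The mapping is only valid where sigma < k1; other entries are
    returned as NaN.
    """
    sigma = np.asarray(sigma, dtype=float)
    d = np.full_like(sigma, np.nan)
    valid = sigma < k1
    d[valid] = k2 / (k1 - sigma[valid])
    return d

# Example: with these placeholder parameters, larger defocus maps
# to larger depth.
print(depth_from_defocus([0.5, 1.0, 1.5]))  # [1.0, 1.5, 3.0]

Because estimated defocus is only known up to a scale (the "relative" nature the excerpt mentions), the recovered depth inherits the same ambiguity unless k1 and k2 are calibrated.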
“…In contrast to learning-based schemes, traditional depth estimation methods mainly focus on discovering different cues to estimate depth from 2-D images [1]-[4], such as structure, shadows, lighting, occlusion, etc. [5]. However, these cues are limited or only applicable to specific scenarios (e.g., shadows are greatly affected by ambient lighting).…”
Section: Introduction (mentioning)
confidence: 99%