2015 Fifth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)
DOI: 10.1109/ncvpripg.2015.7489946
Abstract: As imaging is a process of 2D projection of a 3D scene, depth information is lost at the time of image capture by a conventional camera. This depth information can be inferred back from a set of visual cues present in the image. In this work, we present a model that combines two monocular depth cues, namely Texture and Defocus. Depth is related to the spatial extent of the defocus blur by assuming that the more an object is blurred, the farther it is from the camera. At first, we estimate the amount of defocus …
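The depth-from-defocus assumption stated in the abstract (more blur implies greater distance, for objects beyond the focal plane) follows from the thin-lens blur-circle model. The following Python sketch only illustrates that relationship; the focal length, f-number and focus distance are assumed values, not parameters from the paper.

```python
# Minimal sketch of the thin-lens blur-circle model: for a camera focused at
# distance d_f, the blur-circle diameter grows as an object moves away from
# the focal plane, which is the cue a defocus-based depth term relies on.
# All camera parameters below are assumed for illustration only.

def blur_circle_diameter(d, f=0.05, N=2.0, d_f=2.0):
    """Blur-circle diameter (m) for an object at distance d (m).

    f   : focal length (m), assumed
    N   : f-number, so aperture diameter A = f / N, assumed
    d_f : in-focus distance (m), assumed
    """
    A = f / N
    return A * f * abs(d - d_f) / (d * (d_f - f))

if __name__ == "__main__":
    # Beyond the focal plane, blur increases monotonically with distance,
    # so "more blurred" can be read as "farther from the camera".
    for d in (2.0, 3.0, 5.0, 10.0, 50.0):
        print(f"d = {d:5.1f} m  ->  blur diameter = {blur_circle_diameter(d)*1e6:7.1f} um")
```

For the assumed parameters the printed blur diameter grows monotonically with distance beyond the 2 m focal plane, which is the monotonic relationship the defocus cue exploits.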

Cited by 12 publications (6 citation statements). References 13 publications (14 reference statements).
“…Zhuo and Sim [9] produce a reliable defocus map in the presence of noise by employing the gradient ratio at edges. Srikakulapu et al. [51] use the reliable local scale from Elder and Zucker's method [43] to obtain the edge points at which defocus is estimated. However, the presence of non-ideal edges, false edges and other types of blur affects the performance of these techniques.…”
Section: Applications of BPLC
confidence: 99%
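The gradient-ratio technique of Zhuo and Sim referred to in the statement above can be sketched as follows: re-blur the image with a Gaussian of known width and recover the unknown edge blur from the ratio of the two gradient magnitudes. This is a minimal illustration of that idea, not the cited implementation; the edge mask is assumed to be supplied by the caller (e.g. from a Canny detector), whereas [51] selects edge points via Elder and Zucker's reliable local scale.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sparse_defocus_map(gray, edge_mask, sigma0=1.0, eps=1e-6):
    """Estimate the defocus blur sigma at edge pixels via the gradient ratio.

    gray      : 2-D float image
    edge_mask : boolean mask of edge pixels (how the edges are found is an
                assumption here, e.g. a Canny detector)
    sigma0    : std. dev. of the known re-blur Gaussian
    """
    reblur = gaussian_filter(gray, sigma0)

    gy, gx = np.gradient(gray)
    ry, rx = np.gradient(reblur)
    g_mag = np.hypot(gx, gy)
    r_mag = np.hypot(rx, ry)

    # For an ideal step edge blurred by sigma, the gradient-magnitude ratio is
    #   R = sqrt((sigma^2 + sigma0^2) / sigma^2)  =>  sigma = sigma0 / sqrt(R^2 - 1)
    R = g_mag / (r_mag + eps)
    sigma = np.zeros_like(gray)
    valid = edge_mask & (R > 1.0)
    sigma[valid] = sigma0 / np.sqrt(R[valid] ** 2 - 1.0)
    return sigma, valid
```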
“…14. The first column shows the test images; the second column shows the defocus map results for Zhuo's method [9]; the third column shows the defocus map obtained using Srikakulapu's method [51] without applying the hole filling; the fourth column shows the defocus map for Karaali's method [52] utilising the edge-aware matting given in [52]; the fifth column shows the defocus map for Karaali's method [52] using Levin's [54] closed-form matting; and the last column shows the defocus map obtained with the proposed correction. The sky region in the third image introduces a large error in the estimated defocus map, as no defocus information is present for such regions.…”
Section: Applications of BPLC
confidence: 99%
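The comparison above turns sparse, edge-wise defocus estimates into dense maps via hole filling or matting. The sketch below uses a nearest-known-value fill as a crude stand-in for that propagation step; it is not the edge-aware or closed-form matting of [52] and [54].

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_sparse_map(sparse, valid):
    """Propagate sparse edge-wise estimates to a dense map.

    Nearest-neighbour fill only: a crude stand-in for the matting-based
    propagation discussed in the comparison above.
    """
    # For every pixel, index of the nearest pixel where an estimate exists.
    idx = distance_transform_edt(~valid, return_distances=False,
                                 return_indices=True)
    return sparse[tuple(idx)]
```

Paired with the previous sketch, `fill_sparse_map(*sparse_defocus_map(gray, edge_mask))` would yield a dense, if crude, defocus map.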
“…Third, the blur-texture ambiguity is still a challenging problem. Srikakulapu et al. [18] proposed a method to correct the depth map by using texture information such as edge sharpness, spot energy, and contrast. However, this approach cannot estimate the metric scale.…”
Section: Related Work
confidence: 99%
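The texture-based correction mentioned in this statement relies on measures such as edge sharpness, spot energy and contrast. The sketch below uses only a windowed contrast (local standard deviation) as a simple proxy for flagging texture-poor regions, such as the sky region noted earlier, where defocus estimates are unreliable; the specific measure and threshold are assumptions, not the method of [18].

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(gray, size=15):
    """Local standard deviation as a simple stand-in for a texture measure.

    The cited correction combines edge sharpness, spot energy and contrast;
    this sketch uses only windowed contrast, which is an assumption.
    """
    mean = uniform_filter(gray, size)
    mean_sq = uniform_filter(gray ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def low_texture_mask(gray, size=15, thresh=0.02):
    # Regions with too little texture give unreliable defocus estimates;
    # the threshold is a hypothetical value chosen for illustration.
    return local_contrast(gray, size) < thresh
```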