2022 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip46576.2022.9897352
2HDED:Net for Joint Depth Estimation and Image Deblurring from a Single Out-of-Focus Image

Abstract: Depth estimation and all-in-focus image restoration from defocused RGB images are related problems, although most existing methods address them separately. The few approaches that solve both use a processing pipeline that derives a depth or defocus map as an intermediate product, which then serves as a support for image deblurring, the latter remaining the primary goal. In this paper, we propose a new Deep Neural Network (DNN) architecture that performs in parallel the tasks of depth estimation and image deblurring…
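Since the abstract emphasizes solving both tasks in parallel rather than in a pipeline, a natural training setup is a single objective that combines a depth term and a deblurring term. The sketch below is only an illustration of such a joint loss, not the paper's actual formulation; `lambda_deblur` and the choice of L1 for both terms are assumptions.

```python
import torch.nn.functional as F

def joint_loss(pred_depth, gt_depth, pred_sharp, gt_sharp, lambda_deblur=1.0):
    """Illustrative joint objective for parallel depth estimation and deblurring.

    NOTE: this is a generic sketch, not the exact loss used by 2HDED:Net.
    """
    depth_loss = F.l1_loss(pred_depth, gt_depth)    # depth head: L1 on the predicted depth map
    deblur_loss = F.l1_loss(pred_sharp, gt_sharp)   # deblurring head: L1 to the all-in-focus image
    return depth_loss + lambda_deblur * deblur_loss
```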

Cited by 5 publications (10 citation statements). References 54 publications.
“…In this section, we evaluate the iDFD dataset by training a multi-task network on the challenging joint problem of predicting depth and deblurring RGB images. The network is called 2HDED:NET [12]. To emphasize the importance of using real data, we retrain the network under the same conditions on the NYU dataset, completed with synthetically defocused images.…”
Section: Results (mentioning)
confidence: 99%
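The quoted passage mentions completing NYU with synthetically defocused images. One common way to synthesize defocus from an RGB-D pair is a layered blur driven by the thin-lens circle of confusion; the sketch below is only a plausible illustration, and the focus distance, focal length, f-number, and layering scheme are all assumptions rather than the citing paper's procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def coc_radius_px(depth_m, focus_m=2.0, focal_mm=25.0, f_number=2.0,
                  sensor_px_per_mm=150.0):
    """Thin-lens circle-of-confusion radius in pixels for a given depth.

    All optical parameters are illustrative assumptions, not values from the paper.
    """
    f = focal_mm / 1000.0                       # focal length in meters
    aperture = f / f_number                     # aperture diameter in meters
    coc_m = aperture * f * np.abs(depth_m - focus_m) / (depth_m * (focus_m - f))
    return coc_m * 1000.0 * sensor_px_per_mm    # meters -> mm -> pixels

def synth_defocus(rgb, depth_m, n_layers=12, **optics):
    """Layered defocus approximation: blur each depth slice with a Gaussian
    whose sigma tracks that slice's CoC, then composite the slices."""
    out = np.zeros_like(rgb, dtype=np.float64)
    weight = np.zeros(depth_m.shape, dtype=np.float64)
    edges = np.linspace(depth_m.min(), depth_m.max(), n_layers + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth_m >= lo) & (depth_m <= hi)
        if not mask.any():
            continue
        sigma = 0.5 * coc_radius_px(0.5 * (lo + hi), **optics)  # Gaussian stand-in for a disc kernel
        blurred = np.stack([gaussian_filter(rgb[..., c], sigma) for c in range(3)], -1)
        out += blurred * mask[..., None]
        weight += mask
    return out / np.clip(weight, 1, None)[..., None]
```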
“…Finally, on iDFD, we test the network trained on NYU to evaluate the performance of a model learned from similar but synthetic data. 2HDED:NET is a recently proposed architecture consisting of one encoder and two decoders [12]. The encoder is fed a defocused RGB image, while one decoder outputs the scene depth map and the other the deblurred image.…”
Section: Results (mentioning)
confidence: 99%
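The quoted description (one shared encoder fed a defocused RGB image, with one decoder producing the depth map and the other the all-in-focus image) can be sketched roughly as follows. Layer widths, the backbone, and the module names are placeholders and do not reproduce the actual 2HDED:NET design.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TwoHeadedEncoderDecoder(nn.Module):
    """Toy one-encoder / two-decoder network in the spirit of the quoted description."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(3, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
            conv_block(64, 128),
        )
        def decoder(out_ch):
            return nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                conv_block(128, 64),
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                conv_block(64, 32),
                nn.Conv2d(32, out_ch, 1),
            )
        self.depth_head = decoder(1)    # decoder 1: scene depth map
        self.deblur_head = decoder(3)   # decoder 2: all-in-focus RGB image

    def forward(self, defocused_rgb):
        feats = self.encoder(defocused_rgb)          # shared features from the defocused input
        return self.depth_head(feats), self.deblur_head(feats)

# usage: a 256x256 defocused image yields a depth map and a deblurred image
net = TwoHeadedEncoderDecoder()
depth, sharp = net(torch.randn(1, 3, 256, 256))
```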
“…Their proposed model estimated depth reliably up to a range of 3.3 m, regardless of whether the image was in focus or out of focus. Nazir et al [20] proposed a deep convolutional neural network for joint depth estimation and image deblurring. Kumar et al [21] presented a technique to generate more accurate depth maps for dynamic scenes by combining defocus and motion cues.…”
Section: Related Work (mentioning)
confidence: 99%