2021
DOI: 10.1016/j.jvcir.2021.103137
TheiaNet: Towards fast and inexpensive CNN design choices for image dehazing

Cited by 15 publications (8 citation statements)
References 38 publications
“…From Table 6, the proposed image-dehazing CNN leads to better results than the prior-based methods and learning-based methods that have been proposed in recent years. Compared to the latest approach, TheiaNet [20], the average SSIM of the indoor dehazing results increased by 5.65%, and the average PSNR increased by 7.73%, whereas the average SSIM of the outdoor dehazing results increased by 2.83%, and the average PSNR increased by 14.09%. [20], the average SSIM of the dehazing results increased by 2.47%, and the average PSNR increased by 8.09%.…”
Section: Performance and Discussion
confidence: 80%
“…When this architecture is applied to image dehazing, simply deepening the network is not enough; a suitable neural network block must be introduced. For example, TheiaNet [20] adds a bottleneck enhancer at the final output of the encoder, which extracts coarse-to-fine feature maps through multi-scale pooling and then concatenates these feature maps together. An aggregation head is attached to the final output of the decoder, where the outputs of the encoder and decoder layers at different depths are upsampled to the same resolution and concatenated.…”
Section: Deep-learning-based Methods
confidence: 99%
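The bottleneck-enhancer idea described in the quote above (pool the encoder output at several scales, bring each pooled map back to full resolution, and concatenate along channels) can be sketched as follows. This is a minimal NumPy illustration of the multi-scale pooling pattern, not TheiaNet's actual implementation; the function names, the choice of average pooling, nearest-neighbor upsampling, and the scale set `(1, 2, 4)` are all assumptions for clarity.

```python
import numpy as np

def avg_pool(x, k):
    # Average-pool a (C, H, W) feature map by factor k
    # (assumes H and W are divisible by k).
    C, H, W = x.shape
    return x.reshape(C, H // k, k, W // k, k).mean(axis=(2, 4))

def upsample(x, k):
    # Nearest-neighbor upsampling by factor k in both spatial dims.
    return x.repeat(k, axis=1).repeat(k, axis=2)

def bottleneck_enhancer(x, scales=(1, 2, 4)):
    # Pool the encoder output at several scales (coarse to fine),
    # restore each pooled map to the input resolution, and
    # concatenate the results along the channel axis.
    maps = [upsample(avg_pool(x, k), k) for k in scales]
    return np.concatenate(maps, axis=0)

feat = np.random.rand(8, 16, 16)   # hypothetical encoder output
out = bottleneck_enhancer(feat)
print(out.shape)                   # (24, 16, 16): 8 channels per scale
```

At scale 1 the pool/upsample pair is the identity, so the original features survive unchanged alongside their coarser summaries; in a real network each branch would typically pass through a convolution before concatenation.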