2022
DOI: 10.1049/ipr2.12550
Lightweight single image deraining algorithm incorporating visual saliency

Abstract: Deep learning (DL) methods have achieved excellent performance on the task of single-image rain removal; however, challenges remain, such as artifact remnants, over-smoothed backgrounds, and increasingly complex, heavy-weight network architectures. Because heavy-weight networks are unsuited to outdoor detection devices and mobile devices, we propose a lightweight single-image deraining algorithm incorporating a visual attention saliency mechanism (LDVS). The network consists of dilation convol…
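The truncated abstract names dilated convolution as a building block of the network. As a rough illustration only (the paper's actual layers are not shown here), a dilated convolution samples its input at spaced-out positions, enlarging the receptive field without adding weights. A minimal 1-D NumPy sketch:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """Valid 1-D dilated convolution (cross-correlation convention):
    the kernel taps are applied `dilation` samples apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    out_len = len(x) - span + 1
    out = np.empty(out_len)
    for i in range(out_len):
        taps = x[i : i + span : dilation]  # sample input at dilated positions
        out[i] = np.dot(taps, kernel)
    return out

x = np.arange(8, dtype=float)
w = np.array([1.0, 1.0, 1.0])
# Same 3 weights, but dilation=2 covers a receptive field of 5 samples.
out_d1 = dilated_conv1d(x, w, dilation=1)
out_d2 = dilated_conv1d(x, w, dilation=2)
```

With dilation 1 each output sums 3 consecutive samples; with dilation 2 it sums samples spaced two apart, widening coverage at no extra parameter cost — the property that makes dilated convolutions attractive for lightweight deraining networks.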

Cited by 6 publications (5 citation statements)
References 53 publications
“…Comprehensive experiments confirm that JSFR outperforms other methods for target detection on synthetic and real images in both foggy and normal conditions. In the future, we will further improve the accuracy of the algorithm and reduce network parameters to meet the needs of practical deployment [44]. Secondly, we will continue to study application scenarios in depth, such as video detection on bus traffic roads and vehicle or pedestrian tracking.…”
Section: Discussion
confidence: 99%
“…Meanwhile, training on Paris StreetView and Places2 takes approximately 30,000 iterations in total (the first stage costs 10,000 iterations). We compare our method against some advanced conventional inpainting methods and state-of-the-art blind inpainting approaches, including CA [7], GC [8], HiFill [6], VCNet [12], and TransCNN-HAE [32]. For a fair comparison, CA, GC, and HiFill are all equipped with our IDN via sequential connections.…”
Section: Methods
confidence: 99%
“…They first simulate multiple degradation patterns (e.g., graffiti and image stitching) and design a robust model that can identify degraded regions from semantic differences between contamination and surroundings before restoration. Zhao et al [32] proposed a one-stage hybrid autoencoder architecture, TransCNN-HAE, which exploits the powerful long-range context modeling capabilities of the transformer to avoid the possible degradation of inpainting performance from mask prediction errors. By contrast, the generalization capability of the model is considerably improved.…”
Section: Blind Image Inpainting
confidence: 99%
“…Image enhancement is required to close the regions and remove noise [19]. Therefore, noise removal [20], mathematical morphology operators [21], and lightweight operators including dilation and closing [22] are applied to enhance the result of the edge detection process. Then a filling function is applied to fill the connected blobs in the enhanced image.…”
Section: Region Enhancement and Filling Blobs
confidence: 99%
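The pipeline quoted above — morphological closing to bridge gaps in an edge map, followed by a filling step for the enclosed blobs — can be sketched with `scipy.ndimage` (a generic illustration with a toy edge map, not the cited papers' exact code):

```python
import numpy as np
from scipy import ndimage

# Toy binary edge map: a square contour with a one-pixel gap on its
# right side, so the interior "leaks" to the background.
edges = np.zeros((7, 7), dtype=bool)
edges[1, 1:6] = True   # top side
edges[5, 1:6] = True   # bottom side
edges[1:6, 1] = True   # left side
edges[1:6, 5] = True   # right side
edges[3, 5] = False    # gap: contour is not closed

# Morphological closing (dilation then erosion) bridges the small gap.
closed = ndimage.binary_closing(edges, structure=np.ones((3, 3)))

# Filling step: with the contour now closed, the enclosed blob
# (the square's interior) is filled in.
filled = ndimage.binary_fill_holes(closed)
```

Without the closing step, `binary_fill_holes` leaves the interior untouched, because the gap 4-connects it to the outside background — which is exactly why the quoted text applies the enhancement operators before the filling function.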