2022
DOI: 10.1049/ipr2.12506
Two‐stage single image dehazing network using swin‐transformer

Abstract: Hazy images often exhibit color distortion, blur, and other visible degradations of visual quality, which hurt the performance of advanced vision tasks. Single image dehazing has therefore always been a challenging and significant problem. Convolutional neural networks have been widely used for image dehazing, but the limitations of the convolution operation constrain further progress on the task. Nowadays, the Transformer offers a holistic approach to CV development and does not grow in location as the network deepe…
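The abstract is cut off, but its core point is architectural: replacing purely convolutional blocks with Swin-Transformer-style blocks, whose self-attention is computed inside local windows rather than over the whole image. As a rough, hypothetical illustration of that building block only (not the paper's actual network; the class name, window size, and channel/head counts below are all assumptions), here is a minimal window-partitioned self-attention module in PyTorch:

```python
# Minimal, hypothetical sketch of window-partitioned self-attention,
# the core idea behind Swin-Transformer blocks. This is NOT the paper's
# architecture; names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    def __init__(self, dim=96, num_heads=3, window=7):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, C), with H and W divisible by the window size
        B, H, W, C = x.shape
        w = self.window
        # partition the feature map into non-overlapping w x w windows
        x = x.view(B, H // w, w, W // w, w, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)  # (B * num_windows, w*w, C)
        # self-attention is computed inside each window only, so the cost grows
        # linearly with image size instead of quadratically
        x, _ = self.attn(x, x, x)
        # reverse the window partition back to (B, H, W, C)
        x = x.view(B, H // w, W // w, w, w, C)
        x = x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        return x

if __name__ == "__main__":
    feat = torch.randn(1, 56, 56, 96)
    print(WindowAttention()(feat).shape)  # torch.Size([1, 56, 56, 96])
```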

Cited by 8 publications (1 citation statement) · References 53 publications (78 reference statements)
“…The initial design operation is to multiply the two features to obtain a matrix, then sum-pool the matrix to obtain the feature vector, and then use this vector to classify, but it usually suffers from a computational complexity as high as $\mathcal{O}(n^2)$. In recent years, an effective attention-based fusion method has been developed by extending the Transformer [22–27]. The self-attention mechanism in the Transformer can be regarded as information fusion on a fully-connected graph, which is more general for modelling the input data.…”
Section: Introduction
confidence: 99%
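The citing statement contrasts two fusion strategies: an outer-product ("bilinear") fusion whose pairwise interaction matrix makes it $\mathcal{O}(n^2)$, and Transformer-style attention fusion. Below is a minimal sketch of the quadratic-cost baseline it describes; the class name, feature sizes, and the flatten-plus-linear classifier are illustrative assumptions, not taken from any of the cited works.

```python
# Hypothetical sketch of the bilinear fusion the statement describes:
# outer products of two branch features, sum-pooled over spatial locations
# into an n x n matrix, flattened and classified. Memory and FLOPs scale
# as O(n^2) in the feature dimension, the bottleneck the citing authors note.
import torch
import torch.nn as nn

class BilinearPoolingFusion(nn.Module):
    def __init__(self, n=512, num_classes=10):
        super().__init__()
        self.fc = nn.Linear(n * n, num_classes)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (B, n, L) features from two branches, L spatial locations
        # outer product at every location, sum-pooled over locations -> (B, n, n)
        m = torch.bmm(feat_a, feat_b.transpose(1, 2)) / feat_a.shape[-1]
        v = m.flatten(1)       # flatten the n x n interaction matrix into a vector
        return self.fc(v)      # classify; the n*n term dominates cost and memory

if __name__ == "__main__":
    a = torch.randn(2, 512, 49)
    b = torch.randn(2, 512, 49)
    print(BilinearPoolingFusion()(a, b).shape)  # torch.Size([2, 10])
```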