2022
DOI: 10.1109/tgrs.2022.3156646

SQAD: Spatial-Spectral Quasi-Attention Recurrent Network for Hyperspectral Image Denoising

Cited by 24 publications (11 citation statements)
References 45 publications
“…Recently, HSI-specific networks have also been developed to take advantage of both the spectral and spatial properties of HSI [11,10,37,38,39,40,41,42,43]. In particular, SS-CAN [11] is an HSI-specific denoising network that combines group convolutions and attention modules to effectively exploit spatial and spectral information in images.…”
Section: Deep Learning Approaches to HSI Denoising
confidence: 99%
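The statement above names the general recipe (grouped convolutions plus a spectral attention gate) without showing it. Below is a minimal, hypothetical PyTorch sketch of that kind of spatial-spectral block; the class name, channel counts, and the squeeze-and-excitation-style gate are illustrative assumptions, not the published SS-CAN or SQAD architecture.

```python
import torch
import torch.nn as nn

class GroupConvAttentionBlock(nn.Module):
    """Illustrative block: band-wise grouped convolution + spectral attention."""
    def __init__(self, bands=31, groups=31, hidden=64):
        super().__init__()
        # grouped 3x3 convolution: each spectral group is filtered separately
        self.group_conv = nn.Conv2d(bands, bands, kernel_size=3, padding=1, groups=groups)
        # squeeze-and-excitation style spectral (channel) attention
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(bands, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, bands, 1), nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (N, bands, H, W) noisy HSI patch
        feat = torch.relu(self.group_conv(x))
        return x + feat * self.attn(feat)      # residual connection with reweighted bands

# usage
block = GroupConvAttentionBlock(bands=31, groups=31)
y = block(torch.randn(2, 31, 64, 64))
```

The grouped convolution captures per-band spatial structure while the attention branch reweights bands globally before the residual addition, which is the kind of spatial-spectral coupling the quoted statement refers to.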
“…More recently, attention-based methods were proposed to capture non-local features [39,40,41,42]. Yuan et.…”
Section: Deep Learning Approaches to HSI Denoising
confidence: 99%
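For context on the non-local features mentioned above, here is a hedged sketch of a generic non-local (self-attention) block that attends over all spatial positions; the layer names and reduced dimension are assumptions for illustration and are not taken from any cited paper.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Illustrative non-local block: attention over all spatial positions."""
    def __init__(self, channels=64, reduced=32):
        super().__init__()
        self.theta = nn.Conv2d(channels, reduced, 1)   # query projection
        self.phi = nn.Conv2d(channels, reduced, 1)     # key projection
        self.g = nn.Conv2d(channels, reduced, 1)       # value projection
        self.out = nn.Conv2d(reduced, channels, 1)

    def forward(self, x):                              # x: (N, C, H, W)
        n, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (N, HW, r)
        k = self.phi(x).flatten(2)                     # (N, r, HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (N, HW, r)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (N, HW, HW)
        y = (attn @ v).transpose(1, 2).reshape(n, -1, h, w)
        return x + self.out(y)                         # residual non-local feature
```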
“…In recent years, deep neural network (DNN) approaches have been widely developed to directly learn a nonlinear mapping function from the space of noisy images to that of clean images [51]. By exploiting their strong learning capacity on large training samples, many DNN-based denoising methods have been proposed [18][19][20][21][52][53][54][55][56][57][58][59][60][61][62][63]. For example, Chen et al. [52] used a loss-based scheme to learn filters from training data and formed a trainable nonlinear reaction diffusion (TNRD) model.…”
Section: Deep Neural Network
confidence: 99%
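The quoted passage describes the core DNN denoising paradigm: learn a mapping from noisy to clean images from training pairs. The sketch below illustrates that paradigm with a small DnCNN-style residual CNN trained under an L2 loss; it is an assumed toy example, not a re-implementation of TNRD or of any method cited above.

```python
import torch
import torch.nn as nn

class SimpleDenoiser(nn.Module):
    """Illustrative residual CNN: predicts the noise and subtracts it."""
    def __init__(self, channels=1, width=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)        # residual learning: estimate the noise

# one training step on synthetic Gaussian noise
net = SimpleDenoiser()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
clean = torch.rand(4, 1, 40, 40)
noisy = clean + 0.1 * torch.randn_like(clean)
opt.zero_grad()
loss = nn.functional.mse_loss(net(noisy), clean)
loss.backward()
opt.step()
```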
“…Zhang et al [56] used a memory-efficient hierarchical neural architecture to search for pleasing solutions. COLA-Net [57] and SQAD [58] introduced an attention mechanism into the DNN for denoising performance improvement. The residual network was further extended by Zhang et al [59] to a residual dense network.…”
Section: Deep Neural Network
confidence: 99%
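Since the statement mentions residual and densely connected designs, here is a hedged sketch of a residual dense block in that spirit: densely connected convolutions, local feature fusion, and a local residual connection. Layer count, growth rate, and width are illustrative assumptions rather than the cited residual dense network's exact configuration.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Illustrative residual dense block with local feature fusion."""
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        # each conv sees the block input plus all previously produced features
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1) for i in range(layers)
        )
        # 1x1 local feature fusion back to the input width
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))   # local residual learning
```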
“…However, these model-driven HSI denoising methods concentrate on denoising only and ignore promoting HSI contrast and visibility. Most recently, the success of deep learning has also stimulated the development of HSI denoising [24], [25], [26], and [27].…”
Section: Introduction
confidence: 99%