2021
DOI: 10.3390/rs13245144

Saliency-Guided Remote Sensing Image Super-Resolution

Abstract: Deep learning has recently attracted extensive attention and advanced significantly in remote sensing image super-resolution. Although remote sensing images are composed of various scenes, most existing methods treat each part equally. These methods ignore the salient objects (e.g., buildings, airplanes, and vehicles) that have more complex structures and require more attention during recovery. This paper proposes a saliency-guided remote sensing image super-resolution (SG-GAN) method to alleviate t…

Cited by 17 publications (15 citation statements); references 75 publications.

“…Many works modify the network structure and design novel loss functions for SRGAN: Xiong et al 28 used the Wasserstein distance to replace the KL and JS divergences and modified part of the network structure; Zhang et al 29 designed an encoder-decoder structure for unsupervised SR and used a robust loss function based on perceptual loss; Xu et al 30 designed residual generators with self-attention mechanisms and weight normalization and combined multiple losses to optimize the training process. Liu et al 31 proposed an SG-GAN network based on saliency guidance, focusing on the more salient parts of complex structures while preserving rich edge details. Dong et al 32 proposed a reference-based RRSGAN, which generalizes better across scenarios by using gradient-assisted feature alignment and incorporating the relevant attention module (RAM).…”
Section: CNN-Based Methods
Mentioning confidence: 99%
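To make the Wasserstein-distance modification mentioned in the statement above concrete, the following is a minimal sketch (not the cited authors' code) of how a WGAN-style critic objective can replace the standard JS-divergence-based adversarial loss when training an SR generator; the `critic`, `sr`, and `hr` tensors are hypothetical placeholders.

```python
# Minimal sketch: Wasserstein-style adversarial losses for SR training.
# Assumes `critic` is any torch.nn.Module mapping an image batch to scores;
# a Lipschitz constraint (weight clipping or gradient penalty) must be
# enforced separately and is omitted here for brevity.
import torch

def wasserstein_losses(critic: torch.nn.Module,
                       hr: torch.Tensor,
                       sr: torch.Tensor):
    d_real = critic(hr).mean()
    d_fake = critic(sr.detach()).mean()
    critic_loss = d_fake - d_real        # critic maximizes E[D(real)] - E[D(fake)]
    gen_adv_loss = -critic(sr).mean()    # generator minimizes -E[D(G(lr))]
    return critic_loss, gen_adv_loss
```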
“…designed residual generators with self-attention mechanisms and weight normalization and combined multiple losses to optimize the training process. Liu et al 31 proposed an SG-GAN network based on saliency guidance, focusing on the more salient parts of complex structures while preserving rich edge details.…”
Section: Related Work
Mentioning confidence: 99%
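As an illustration of the saliency-guidance idea described in these statements, here is a minimal sketch of a saliency-weighted reconstruction loss; it is an assumption about how such guidance could be wired in, not the SG-GAN paper's exact formulation, and `alpha` is a hypothetical weighting parameter.

```python
# Minimal sketch: up-weight reconstruction error on salient pixels so the
# generator pays more attention to structured objects (buildings, vehicles, ...).
# `saliency` is assumed to be a per-pixel map in [0, 1] with shape (N, 1, H, W).
import torch

def saliency_weighted_l1(sr: torch.Tensor,
                         hr: torch.Tensor,
                         saliency: torch.Tensor,
                         alpha: float = 1.0) -> torch.Tensor:
    weight = 1.0 + alpha * saliency          # salient regions count more
    return (weight * (sr - hr).abs()).mean()
```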
“…Lei et al [44] propose coupled adversarial training with a well-designed discriminator to better discriminate between the super-resolved image and the corresponding ground truth. Liu et al [45] design a saliency-guided GAN method to improve visual results with additional saliency priors. Some researchers focus on the SR of remote sensing satellite videos.…”
Section: A Deep Learning Based Remote Sensing Image SR
Mentioning confidence: 99%
“…In this direction, Gong et al [31] proposed the Enlighten-GAN model, which uses a self-supervised hierarchical perceptual loss. Liu et al [32] exploited image saliency maps to learn additional structure priors and to make the model focus more on salient objects. Huan et al [33] proposed a multi-scale residual network with hierarchical feature fusion and multi-scale dilated residual blocks.…”
Section: Related Work
Mentioning confidence: 99%
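To give a concrete picture of the multi-scale dilation residual blocks mentioned in the last statement, below is a rough sketch of a residual block with parallel dilated convolutions; the channel count, dilation rates, and fusion scheme are assumptions for illustration, not the cited architecture.

```python
# Minimal sketch: residual block with parallel dilated 3x3 convolutions whose
# outputs are concatenated, fused by a 1x1 convolution, and added back to the
# input (residual connection). All hyperparameters are illustrative.
import torch
import torch.nn as nn

class MultiScaleDilatedResBlock(nn.Module):
    def __init__(self, channels: int = 64, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.act(branch(x)) for branch in self.branches]
        return x + self.fuse(torch.cat(feats, dim=1))
```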
“…In this paper, the authors exploit the best practices derived from state-of-the-art experimentation to date, suggesting partly keeping the training principles (content and adversarial losses) of the highly successful ESRGAN while modifying the way the perceptual loss is conceived in the context of super-resolution in general and in remote sensing specifically. The key idea of the proposed approach lies in the observation that single-image super-resolution is an ill-posed problem in the sense that for any LR image there exist numerous HR images that could correspond to it [23], [31], [32]. Thus, for any successful model to achieve superior performance, it must derive significant pixel-level knowledge during training.…”
Section: Related Work
Mentioning confidence: 99%
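As a rough illustration of the ESRGAN-style training principles this statement refers to (a pixel-wise content loss combined with perceptual and adversarial terms), here is a minimal sketch; the loss weights, the choice of VGG-19 features from torchvision (version 0.13 or later assumed), and the variable names are assumptions, not the cited paper's configuration.

```python
# Minimal sketch: ESRGAN-style generator objective combining a pixel-wise
# content loss, a VGG-feature perceptual loss, and an adversarial term.
# Weights and the feature layer are illustrative, not taken from the cited work;
# inputs are assumed to be 3-channel images already normalized for VGG.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

_vgg = vgg19(weights=VGG19_Weights.DEFAULT).features[:35].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def generator_loss(sr: torch.Tensor,
                   hr: torch.Tensor,
                   adversarial_term: torch.Tensor,
                   w_content: float = 1e-2,
                   w_percep: float = 1.0,
                   w_adv: float = 5e-3) -> torch.Tensor:
    content = F.l1_loss(sr, hr)                # pixel-level content loss
    percep = F.l1_loss(_vgg(sr), _vgg(hr))     # perceptual (feature-space) loss
    return w_content * content + w_percep * percep + w_adv * adversarial_term
```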