Low-rank models have achieved remarkable performance in remote sensing image denoising. Nonetheless, existing low-rank-based methods treat the residues as noise and simply discard them, so the denoised results lose many important details, especially edges. In this paper, we propose a new denoising method named EPLRR-RSID, which focuses on edge preservation to improve the image quality of fine details. Specifically, we consider the low-rank residues to be a combination of useful edges and noisy components. To better learn the edge information from the low-rank representation (LRR), we designed a multi-level knowledge scheme to further separate the edge part from the noise part of the residues. Furthermore, a manifold learning framework was introduced into our model to better recover the edge information, as it can exploit the structural similarity of the edge part while suppressing the influence of the non-structural noise part. In this way, not only is the low-rank part better learned, but the edge part is also precisely preserved. Extensive experiments on synthetic and several real remote sensing datasets showed that EPLRR-RSID outperforms the compared state-of-the-art (SOTA) approaches, achieving mean edge protection index (MEPI) values of at least 0.9 and the best scores on the no-reference BRISQUE index, indicating that our method improves image quality through edge preservation.
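The core idea above (low-rank part plus a residue split into edge and noise components) can be sketched in a few lines. This is a minimal, illustrative toy: the low-rank part comes from a truncated SVD, and the edge/noise split uses a simple gradient-magnitude threshold as an assumed stand-in for the paper's multi-level learning and manifold framework, which are not reproduced here.

```python
import numpy as np

def lowrank_edge_split(image, rank=5, edge_quantile=0.9):
    """Toy sketch: split an image matrix into a low-rank part plus
    edge-like and noise-like residues. The edge/noise separation via a
    gradient threshold is an illustrative assumption, not the authors'
    algorithm.
    """
    # Low-rank approximation via truncated SVD.
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

    # Residue = everything the low-rank model discards; conventional
    # methods would drop this entirely.
    residue = image - low_rank

    # Crude edge indicator: gradient magnitude of the low-rank part.
    gy, gx = np.gradient(low_rank)
    grad_mag = np.hypot(gx, gy)
    edge_mask = grad_mag > np.quantile(grad_mag, edge_quantile)

    # Keep residue energy near strong edges; treat the rest as noise.
    edge_part = residue * edge_mask
    noise_part = residue * ~edge_mask
    return low_rank, edge_part, noise_part
```

A denoised output in this sketch would be `low_rank + edge_part`, i.e. the smooth structure plus the recovered edge detail, with `noise_part` discarded.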
Robust unsupervised feature learning is a critical yet challenging task for synthetic aperture radar (SAR) automatic target recognition (ATR) with limited labeled data. Contrastive self-supervised learning (CSL), which learns informative representations by solving an instance discrimination task, offers a new approach to learning discriminative features from unlabeled SAR images. However, the instance-level contrastive loss can magnify the differences between samples belonging to the same class in the latent feature space; CSL can thus push apart targets of the same class and harm downstream classification. To address this problem, this paper proposes a novel framework called locality preserving property constrained contrastive learning (LPPCL), which not only learns informative representations of the data but also preserves local similarity in the latent feature space. In LPPCL, the traditional InfoNCE loss of CSL models is reformulated in a cross-entropy form in which the local similarity of the original data is embedded as pseudo labels. Furthermore, the traditional two-branch CSL architecture is extended to a multi-branch structure, improving the robustness of models trained with limited batch sizes and samples. Finally, a self-attentive pooling module replaces the global average pooling layer commonly used in standard encoders, providing an adaptive way to retain information that benefits downstream tasks during pooling and significantly improving model performance. Validation and ablation experiments on the MSTAR dataset show that the proposed framework outperforms classic CSL methods and achieves state-of-the-art (SOTA) results.
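The reformulation described above (InfoNCE written as a cross-entropy over similarity logits, with local similarity of the raw data supplying soft pseudo labels) can be sketched as follows. This is our reading of the idea, not the paper's exact loss: with identity targets the function reduces to standard instance-level InfoNCE, and the Gaussian-kernel pseudo labels are a hypothetical construction of the locality term.

```python
import numpy as np

def log_softmax(x):
    # Numerically stable row-wise log-softmax.
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def contrastive_ce_loss(z1, z2, targets=None, temperature=0.1):
    # z1, z2: (N, D) L2-normalised embeddings of two augmented views.
    # targets: (N, N) row-stochastic pseudo-label matrix. Identity
    # targets recover standard InfoNCE; soft targets from data-space
    # similarity sketch the locality-preserving variant (assumption).
    n = z1.shape[0]
    if targets is None:
        targets = np.eye(n)
    logits = z1 @ z2.T / temperature
    return -(targets * log_softmax(logits)).sum(axis=1).mean()

def locality_pseudo_labels(x, sigma=1.0):
    # Soft neighbourhood labels from a Gaussian kernel on raw inputs:
    # an illustrative stand-in for the paper's local-similarity term.
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)
```

In use, `contrastive_ce_loss(z1, z2, locality_pseudo_labels(x))` would pull together embeddings of samples that are already close in the original data space, rather than treating every other instance as a pure negative.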