2021
DOI: 10.3390/rs13244971

Adaptive Feature Weighted Fusion Nested U-Net with Discrete Wavelet Transform for Change Detection of High-Resolution Remote Sensing Images

Abstract: The wide variety of object scales and the complex texture features of high-resolution remote sensing images have made deep learning-based change detection methods the mainstream approach. However, existing deep learning methods suffer from spatial information loss and insufficient feature representation, resulting in unsatisfactory small-object detection and boundary positioning in high-resolution remote sensing image change detection. To address these problems, a network arc…

Cited by 9 publications (3 citation statements)
References 48 publications
“…However, traditional manual feature extraction methods are still significant in some fields. A deep-learning network structure was proposed by Wang et al. [100]. It is based on two-dimensional discrete wavelet transform and adaptive feature weighted fusion.…”
Section: Based On Deep Learning and Traditional Manual Feature Extrac...mentioning
confidence: 99%
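The adaptive feature weighted fusion mentioned in the statement above can be sketched as a softmax-weighted sum of same-sized feature maps. This is an illustrative stand-in only: in the cited network the weights are learned end-to-end, while here the `scores` argument is a hypothetical placeholder for those learned logits.

```python
import math

def adaptive_weighted_fusion(feature_maps, scores):
    """Fuse equally-sized 2-D feature maps with softmax-normalized weights.

    `scores` is a hypothetical stand-in for learned per-map logits;
    the fused map is the element-wise weighted sum of the inputs.
    """
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    fused = [[sum(wt * fm[i][j] for wt, fm in zip(weights, feature_maps))
              for j in range(w)] for i in range(h)]
    return fused, weights

# two 2x2 feature maps; equal scores give equal weights (element-wise mean)
f_a = [[1.0, 2.0], [3.0, 4.0]]
f_b = [[5.0, 6.0], [7.0, 8.0]]
fused, weights = adaptive_weighted_fusion([f_a, f_b], scores=[0.0, 0.0])
# fused[0][0] == 3.0, weights == [0.5, 0.5]
```

With unequal scores the softmax shifts weight toward the more informative map, which is the basic intuition behind letting the network learn the fusion weights instead of averaging.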
“…Therefore, by utilizing image-fusion algorithms, the feature-map quality can be enhanced. Based on its feature-fusion state, as shown in Figure 7, the image-level-fusion methods can be divided into the following [31,32]: Early fusion: this fusion scheme occurs when spatial scales are retrieved from the same regions and concatenated locally as one input image, prior to encoding. In [33], the authors applied an early-fusion scheme by combining bitemporal remote-sensing images as one input, which was then fed to a modified UNet++ backbone to learn the multiscale semantic levels of the visual feature representations for remote-sensing-based change detection.…”
Section: Multiscale-deep-learning Taxonomymentioning
confidence: 99%
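The early-fusion scheme described above amounts to channel-wise concatenation of two co-registered images before the encoder (e.g. two RGB images become one 6-channel input). A minimal sketch, using nested lists in place of real image tensors:

```python
def early_fuse(img_t1, img_t2):
    """Channel-wise concatenation of two co-registered H x W x C images.

    Illustrative only: real pipelines stack arrays/tensors along the
    channel axis before feeding the encoder.
    """
    assert len(img_t1) == len(img_t2) and len(img_t1[0]) == len(img_t2[0])
    return [[row1[j] + row2[j] for j in range(len(row1))]
            for row1, row2 in zip(img_t1, img_t2)]

# two 1x2 "RGB" images at times t1 and t2 -> one 1x2 six-channel input
a = [[[1, 2, 3], [4, 5, 6]]]
b = [[[7, 8, 9], [10, 11, 12]]]
fused = early_fuse(a, b)
# fused[0][0] == [1, 2, 3, 7, 8, 9]
```

Late fusion, by contrast, would encode each image separately and combine the resulting feature maps; early fusion lets the very first convolution see both epochs at once.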
“…The wavelet transform (WT) differs from the STFT in that its time-frequency window is not uniformly distributed in the phase plane, and its unique multi-scale, multi-resolution characteristics give it a distinct advantage for signal-composition analysis [22–27]. However, once the wavelet basis function is determined, the distribution of the time-frequency window in the time-frequency plane is fixed, which limits adaptability [28–31]. The above algorithms achieve time-frequency decomposition of long time-series data, but none of them have good adaptive properties.…”
Section: Introductionmentioning
confidence: 99%
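The multi-resolution character of the wavelet transform discussed above can be illustrated with one level of the Haar DWT: each level splits a signal into a coarse approximation and a detail band, and the approximation can be decomposed again to form the resolution hierarchy. A minimal sketch, assuming an even-length input and the orthonormal Haar basis:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients. Re-applying the
    transform to the approximation yields the next, coarser level,
    giving the multi-scale decomposition the passage describes.
    Even-length input assumed.
    """
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

cA, cD = haar_dwt([4.0, 4.0, 2.0, 0.0])
# cA == [8/sqrt(2), 2/sqrt(2)]; cD == [0.0, 2/sqrt(2)]
```

The constant pair (4.0, 4.0) produces a zero detail coefficient while the step (2.0, 0.0) does not, showing how the detail band localizes abrupt changes, which is precisely the property exploited in wavelet-based change detection.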