2022
DOI: 10.3390/rs14133057
STF-EGFA: A Remote Sensing Spatiotemporal Fusion Network with Edge-Guided Feature Attention

Abstract: Spatiotemporal fusion in remote sensing plays an important role in Earth science applications by exploiting the complementarity between different remote sensing data sources to improve image quality. However, several problems persist in the extraction of salient features by convolutional neural networks (CNNs), such as blurred edge contours and uneven pixels between the predicted image and the real ground image. We propose a spatiotemporal fusion method with edge-guided feature attention based on remote se…

Cited by 11 publications (6 citation statements)
References 42 publications (62 reference statements)
“…As both the component I and the panchromatic image can reflect the gray change in ground objects, the component I is expressed in the panchromatic image to obtain the component I_new with more detailed features. Finally, the new RGB image is obtained using the IHS inverse transformation, as shown in Equation (4) [25]: where R_new, G_new, and B_new denote the new red, green, and blue bands of the multi-spectral image; v1 and v2 denote the intermediate variables of the RGB conversion to the IHS color space; and I_new denotes the new value of I after the IHS transformation.…”
Section: Methods
confidence: 99%
“…As both the component I and the panchromatic image can reflect the gray change in ground objects, the component I is expressed in the panchromatic image to obtain the component I_new with more detailed features. Finally, the new RGB image is obtained using the IHS inverse transformation, as shown in Equation (4) [25]:…”
Section: Dual-Transformation for Fused Images
confidence: hi
“…The visual interpretation method directly analyzes the similarity between the fused image and the real image and yields a preliminary judgment on the fusion accuracy of each model. The correlation analysis method mainly uses four evaluation metrics: average absolute deviation (AAD), root mean square error (RMSE), correlation coefficient (CC), and structural similarity (SSIM) [26,46,47]. These indices quantitatively evaluate the similarity between the fused image and the real image.…”
Section: Accuracy Evaluation
confidence: 99%
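Three of the four metrics named above (AAD, RMSE, CC) reduce to short NumPy expressions; SSIM is typically computed with `skimage.metrics.structural_similarity` rather than by hand. A minimal sketch, with illustrative function names not taken from the paper:

```python
import numpy as np

def aad(pred, ref):
    """Average absolute deviation between fused and reference images."""
    return float(np.mean(np.abs(pred - ref)))

def rmse(pred, ref):
    """Root mean square error between fused and reference images."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def cc(pred, ref):
    """Pearson correlation coefficient over the flattened images."""
    return float(np.corrcoef(pred.ravel(), ref.ravel())[0, 1])
```

For identical images AAD and RMSE are 0 and CC is 1; a constant bias of b leaves CC unchanged but raises both AAD and RMSE to b, which is why the excerpt pairs correlation with the deviation metrics.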
“…These models contributed to building a common framework for spatiotemporal fusion algorithms that employs two streams and the stepwise modeling of spatial, sensor, and temporal differences. In recent works [8][9][10][11][12][13][14][15], multiscale learning, spatial-channel attention mechanisms, and edge preservation have been introduced into CNNs for the extraction and integration of features.…”
Section: Introduction
confidence: 99%