2019
DOI: 10.3390/rs11222612
Comparison of Five Spatio-Temporal Satellite Image Fusion Models over Landscapes with Various Spatial Heterogeneity and Temporal Variation

Abstract: In recent years, many spatial and temporal satellite image fusion (STIF) methods have been developed to address the trade-off between the spatial and temporal resolution of satellite sensors. This study, for the first time, conducted both scene-level and local-level comparisons of five state-of-the-art STIF methods from four categories over landscapes with various spatial heterogeneity and temporal variation. The five STIF methods include the spatial and temporal adaptive reflectance fusion model (STARFM) and…

Cited by 40 publications (20 citation statements)
References 55 publications
“…Table 2 shows the quantitative comparison results of the five methods. It appears that UBDF has the worst performance among all the methods, which is consistent with the results reported in [20]. For all the bands, SFSDAF generated lower RMSE and MAD and higher CC, SSIM and PSNR values compared with those of FSDAF.…”
Section: Comparison and Evaluation (supporting; confidence: 88%)
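The excerpt above ranks fusion methods by RMSE, MAD, CC, SSIM and PSNR. As a minimal sketch of how such band-wise scores are computed (this is not the cited papers' evaluation code; the function name and the single-window, global SSIM form are illustrative assumptions):

```python
import numpy as np

def fusion_metrics(pred, ref, data_range=255.0):
    """Band-wise accuracy metrics named in the excerpt: RMSE, MAD, CC, SSIM, PSNR."""
    pred = np.asarray(pred, dtype=np.float64).ravel()
    ref = np.asarray(ref, dtype=np.float64).ravel()
    err = pred - ref
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)                        # root-mean-square error
    mad = np.mean(np.abs(err))                 # mean absolute difference
    cc = np.corrcoef(pred, ref)[0, 1]          # Pearson correlation coefficient
    # Global (single-window) SSIM with the standard stabilizing constants;
    # published comparisons typically use a local sliding-window SSIM instead.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_p, mu_r = pred.mean(), ref.mean()
    cov = np.mean((pred - mu_p) * (ref - mu_r))
    ssim = ((2 * mu_p * mu_r + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_r ** 2 + c1) * (pred.var() + ref.var() + c2))
    psnr = 10.0 * np.log10(data_range ** 2 / mse)  # peak signal-to-noise ratio
    return {"RMSE": rmse, "MAD": mad, "CC": cc, "SSIM": ssim, "PSNR": psnr}
```

Lower RMSE/MAD and higher CC/SSIM/PSNR indicate a fused image closer to the reference, which is the sense in which SFSDAF outperforms FSDAF in the quoted comparison.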
“…To evaluate the performance of the adaptive-SFSDAF algorithm quantitatively and visually, four comparison algorithms were selected as benchmarks: FSDAF [11], SFSDAF [39], FIT-FC [9], and UBDF [27]. These four methods were selected for the following reasons: (1) FSDAF is a robust model at various scales [20]; (2) SFSDAF is a recently developed fusion algorithm based on FSDAF that outperformed existing representative fusion methods in all reported experiments by incorporating sub-pixel class fraction change information; (3) FIT-FC is computationally efficient compared with other fusion algorithms in the literature; and (4) UBDF is the most cited unmixing-based model. In all experiments, the number of land cover classes was set to 4 for FSDAF, SFSDAF and UBDF; the number of similar pixels was set to 20 for FSDAF, SFSDAF and FIT-FC; and the size of the sliding window was set to 16.…”
Section: Comparison and Evaluation (mentioning; confidence: 99%)
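The parameter settings quoted above can be captured in a small lookup table (a hypothetical sketch; key names like `n_classes` are illustrative, not identifiers from the cited implementations):

```python
# Benchmark settings as quoted: classes apply to FSDAF/SFSDAF/UBDF,
# similar pixels and window size to FSDAF/SFSDAF/FIT-FC.
BENCHMARK_PARAMS = {
    "FSDAF":  {"n_classes": 4, "n_similar_pixels": 20, "window_size": 16},
    "SFSDAF": {"n_classes": 4, "n_similar_pixels": 20, "window_size": 16},
    "FIT-FC": {"n_similar_pixels": 20, "window_size": 16},
    "UBDF":   {"n_classes": 4},
}
```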
“…Firstly, the large uncertainties are mainly attributed to defects in the prior fine-coarse image pair, which mean that less supplementary information is available. Previous studies support this finding, suggesting that the number of input fine-resolution images has a significant influence on the performance of fusion methods [58]. Notably, the images fused in our study are GF-1 and MODIS products, which are not identical to the products used in the other studies.…”
Section: Comparison With Other Fusion Models (supporting; confidence: 77%)
“…For example, Fit-FC, proposed to retrieve strong temporal changes, performs more effectively than STARFM and FSDAF in capturing fast phenological changes (Q. Wang & Atkinson, 2018), but shows lower accuracy than FSDAF in retaining image structures when blending heterogeneous sites (Liu et al., 2019a). The accuracy of learning-based methods decreases when spatial heterogeneity is high and spectral-scale differences between low- and high-resolution images are large (Zhu et al., 2016).…”
Section: Introduction (mentioning; confidence: 99%)