2021
DOI: 10.3390/rs13163125
Radiometric Normalization for Cross-Sensor Optical Gaofen Images with Change Detection and Chi-Square Test

Abstract: As the number of cross-sensor images increases continuously, the surface reflectance of these images is inconsistent over the same ground objects due to different revisit periods and swaths. The surface reflectance consistency between cross-sensor images determines the accuracy of change detection, classification, and land surface parameter inversion, which are the most widespread applications. We proposed a relative radiometric normalization (RRN) method to improve the surface reflectance consistency based on the…
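The abstract describes extracting pseudo-invariant features (PIFs) with a chi-square test over regions flagged as unchanged. A minimal sketch of that screening step is below; it assumes the pixels arrive as per-band reflectance arrays, and `select_pifs` is a hypothetical helper, not the paper's actual implementation (the change-detection stage that precedes it is omitted).

```python
import numpy as np
from scipy.stats import chi2

def select_pifs(ref, sub, alpha=0.05):
    """Screen candidate pixels for PIFs with a chi-square test.

    ref, sub: (n_pixels, n_bands) reflectance arrays from the reference
    and subject images. Under a no-change hypothesis, the squared
    Mahalanobis distance of a pixel's band-wise difference vector is
    approximately chi-square distributed with n_bands degrees of freedom,
    so pixels below the (1 - alpha) quantile are kept as PIFs.
    """
    d = ref.astype(float) - sub.astype(float)   # per-band differences
    d_c = d - d.mean(axis=0)                    # center the differences
    cov = np.cov(d_c, rowvar=False)             # band covariance of differences
    inv_cov = np.linalg.inv(cov)
    # squared Mahalanobis distance for every pixel
    z = np.einsum('ij,jk,ik->i', d_c, inv_cov, d_c)
    thresh = chi2.ppf(1 - alpha, df=d.shape[1])
    return z < thresh                           # boolean PIF mask
```

With `alpha=0.05`, roughly 95% of genuinely unchanged pixels pass the test; changed pixels produce large distances and are rejected.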

Cited by 4 publications (4 citation statements)
References 64 publications
“…Likewise, Moghimi et al [3] employed a fast level set method (FLSM) and patch-based outlier detection to pick an ideal set of PIFs using a step-by-step unchanged sample selection strategy. With a similar idea, Yan et al [25] employed a chi-square test to automatically extract the PIFs from the unchanged regions detected by an unsupervised autoencoder (AE) method. Although the mentioned methods yielded promising results, they were often computationally demanding in terms of both processing and memory storage.…”
Section: Introduction
confidence: 99%
“…Radiometric normalization algorithms can be divided into two main categories based on the transformation of grayscale values to physical signals: absolute radiometric normalization, which requires accurate sensor calibration parameters and atmospheric properties [11,12], and relative radiometric normalization (RRN), an image-based approach that uses one image as a reference to normalize another through its radiometric characteristics [13][14][15]. Because of parameter differences among sensors and the difficulty of collecting atmospheric parameters, the relative radiometric normalization method, which requires no extra parameters, is generally used for image normalization [16,17]. Hence, in this process, the subject image needs radiometric conditions similar to those of the reference image.…”
Section: Introduction
confidence: 99%
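The image-based RRN approach described in the excerpt above is most commonly realized as a per-band linear mapping fitted over PIFs: regress the reference image on the subject image at the invariant pixels, then apply the fitted gain and offset to the whole subject image. The helper below is a hedged illustration of that generic scheme, not the specific method of any cited paper.

```python
import numpy as np

def rrn_linear(subject, reference, pif_mask):
    """Relative radiometric normalization by per-band linear regression.

    subject, reference: (rows, cols, bands) reflectance arrays on the
    same grid; pif_mask: boolean (rows, cols) mask of pseudo-invariant
    features. For each band, fit reference ~ a * subject + b over the
    PIFs, then apply the mapping to the entire subject band.
    """
    out = np.empty(subject.shape, dtype=float)
    for b in range(subject.shape[2]):
        x = subject[..., b][pif_mask].astype(float)   # subject values at PIFs
        y = reference[..., b][pif_mask].astype(float) # reference values at PIFs
        a, c = np.polyfit(x, y, 1)                    # least-squares gain, offset
        out[..., b] = a * subject[..., b] + c         # normalize the whole band
    return out
```

Ordinary least squares is the simplest choice here; robust variants (e.g., iteratively reweighted fits) are often preferred in practice because residual changed pixels in the PIF set act as outliers.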
“…Accurate PIFs are essential to compare sensor images. Rahman, Hay [7], Razzak, Mateo-García [16], Yan, Yang [17], and Padró, Pons [23] normalized images from different sensor types (e.g., Landsat, Sentinel-2, Gaofen, and super-resolution images) using a PIF-based method and reported coherent image correction across varying time series and sensor types. The biggest difference between these studies is whether PIF selection is performed manually or automatically.…”
Section: Introduction
confidence: 99%
“…The problem of radiometric heterogeneities concerns images acquired from different altitudes and may be caused by various phenomena, such as changes in weather conditions during data acquisition, bidirectional reflectance distribution function (BRDF) effects and sensor defects. RRN is a common, current theme in satellite imagery research (Bai et al., 2018; Santra et al., 2019; Latte and Lejeune, 2020; Yan et al., 2021; Yin et al., 2021). These studies most often concern the normalisation of images of the same area acquired at different times and even using different sensors (Ghanbari et al., 2018; Yin et al., 2021).…”
Section: Introduction
confidence: 99%