Recently, wildfire incidents have increased, causing significant economic, humanitarian, and environmental damage. Wildfires have grown in severity, frequency, and duration because of climate change and rising global temperatures, releasing massive volumes of greenhouse gases, destroying forests and associated habitats, and damaging infrastructure. Identifying burned areas is therefore crucial for monitoring wildfire damage. In this study, we aim to detect forest burned areas in South Korea using optical satellite images. To exploit the advantages of machine learning, the present study employs three representative machine learning methods, Light Gradient Boosting Machine (LightGBM), Random Forest (RF), and U-Net, to detect forest burned areas from combinations of input variables, namely Surface Reflectance (SR), the Normalized Difference Vegetation Index (NDVI), and the Normalized Burn Ratio (NBR). Two study sites covering recent forest fire events in South Korea were selected, and Sentinel-2 satellite images were used given the small scale of the fires. Quantitative and qualitative evaluations across the machine learning methods and input variables were carried out. In the comparison of machine learning models, U-Net achieved the highest accuracy at both sites among the designed variants. Using pre- and post-fire SR, NDVI, and NBR, together with the differences of the indices, as the main inputs produced the best results. By comparing the results of the two sites, we also show that diverse land covers can degrade burned area detection performance.
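The spectral indices named above follow standard band-ratio definitions. As a minimal illustration (not the study's pipeline), the following sketch computes NDVI, NBR, and the pre/post index difference (dNBR) from reflectance arrays; the mapping to Sentinel-2 bands (red ≈ B4, NIR ≈ B8, SWIR ≈ B12) and the band resampling are assumptions made here for illustration.

```python
import numpy as np

def compute_indices(red, nir, swir, eps=1e-10):
    """Compute NDVI and NBR from reflectance bands.

    Assumes Sentinel-2-like inputs (red ~ B4, NIR ~ B8, SWIR ~ B12),
    already co-registered and resampled to a common grid. `eps` guards
    against division by zero over dark pixels.
    """
    ndvi = (nir - red) / (nir + red + eps)
    nbr = (nir - swir) / (nir + swir + eps)
    return ndvi, nbr

def delta_nbr(nbr_pre, nbr_post):
    """dNBR = NBR_pre - NBR_post; higher values typically indicate burns."""
    return nbr_pre - nbr_post
```

Stacking the pre-fire bands, post-fire bands, and such difference layers per pixel yields the kind of multi-channel input the compared models can consume.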
Multitemporal very-high-resolution (VHR) satellite images are core data in remote sensing because they express the topography and features of a region of interest in detail. However, geometric misalignment and radiometric dissimilarity arise when acquiring multitemporal VHR satellite images owing to external environmental factors, and these errors cause various inaccuracies that hinder the effective use of such images. These errors can be minimized by applying preprocessing methods such as image registration and relative radiometric normalization (RRN). However, because image registration and RRN rely on different data, data consistency and computational efficiency suffer, particularly when processing large amounts of data, such as a large volume of multitemporal VHR satellite images. To resolve these issues, we propose an integrated preprocessing method that extracts the pseudo-invariant features (PIFs) used for RRN from the conjugate points (CPs) extracted for image registration. To this end, image registration was performed using CPs extracted with the speeded-up robust features (SURF) algorithm. PIFs were then derived from the CPs by removing vegetation areas and applying a region growing algorithm. Experiments were conducted on two sites acquired under different conditions to confirm the robustness of the proposed method. The experimental results were analyzed visually and quantitatively from geometric and radiometric perspectives. The results demonstrate the successful integration of the image registration and RRN preprocessing steps, achieving reasonable and stable performance.
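Once PIFs are available, RRN is commonly applied as a per-band linear mapping fitted over the PIF pixels. The sketch below is a minimal illustration of that step only, under the assumption that the PIF mask has already been produced (in the paper, via CP-based extraction, vegetation removal, and region growing); it is not the authors' exact implementation.

```python
import numpy as np

def rrn_linear(reference, target, pif_mask):
    """Relative radiometric normalization via a linear fit over PIFs.

    Fits reference ~= gain * target + offset using only pixels where
    pif_mask is True, then applies the model to the whole target band.
    Returns the normalized band and the fitted (gain, offset).
    """
    t = target[pif_mask].ravel()
    r = reference[pif_mask].ravel()
    gain, offset = np.polyfit(t, r, 1)  # least-squares line through PIF pairs
    return gain * target + offset, (gain, offset)
```

Because only radiometrically stable pixels enter the fit, changed areas (e.g., new buildings or cleared vegetation) do not bias the gain and offset, which is the rationale for extracting PIFs in the first place.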