The risk and damage of wildfires have been increasing for various reasons, including climate change, and the Republic of Korea is no exception. Burned area mapping is crucial not only for preventing further damage but also for managing burned areas. Burned area mapping using satellite data, however, has been limited by the spatial and temporal resolution of the data and by classification accuracy. This paper presents a new burned area mapping method in which damaged areas are mapped using semantic segmentation. For this research, PlanetScope imagery, which offers high-resolution images with a very short revisit time, was used, and the proposed method is based on U-Net and requires only a uni-temporal PlanetScope image. The network was trained using 17 satellite images of 12 forest fires and corresponding label images, which were obtained semi-automatically by setting threshold values. Band combination tests were conducted to produce an optimal burned area mapping model. The results demonstrated that the optimal and most stable band combination is the Red, Green, Blue, and NIR bands of PlanetScope. To improve classification accuracy, NDVI (Normalized Difference Vegetation Index), dissimilarity extracted from the GLCM (Grey-Level Co-occurrence Matrix), and land cover maps were used as additional datasets. In addition, topographic normalization was conducted to improve model performance and classification accuracy by reducing shadow effects. The F1 scores and overall accuracies of the final image segmentation models ranged from 0.883 to 0.939 and from 0.990 to 0.997, respectively. These results highlight the potential of detecting burned areas using a deep learning-based approach.
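As a minimal illustration of two of the additional features mentioned above, the sketch below computes NDVI from red and NIR bands and a GLCM dissimilarity statistic for one pixel offset. The function names and the pure-NumPy GLCM implementation are our own assumptions for illustration, not the paper's code.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, guarding against divide-by-zero."""
    red, nir = red.astype(float), nir.astype(float)
    denom = nir + red
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1.0, denom))

def glcm_dissimilarity(img: np.ndarray, levels: int, dx: int = 1, dy: int = 0) -> float:
    """Dissimilarity of the grey-level co-occurrence matrix for one offset:
    sum_ij P(i, j) * |i - j|, with P normalized to sum to 1.
    `img` holds integer grey levels in [0, levels)."""
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):          # accumulate co-occurrence counts
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()               # normalize counts to probabilities
    i, j = np.indices((levels, levels))
    return float((glcm * np.abs(i - j)).sum())
```

In practice such features would be stacked with the spectral bands as extra input channels to the segmentation network.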
The magnitude of net carbon dioxide emissions resulting from global forest carbon change, and hence the contribution of forests to global climate change, is highly uncertain, owing to the lack of direct measurement by Earth observation and ground data collection. This paper uses a new method to evaluate this uncertainty with greater precision than before. Sources of uncertainty are divided into conceptualization and measurement categories and distributed between the spatial, vertical, and temporal dimensions of Earth observation. The method is applied to Forest Reference Emission Level (FREL) reports and National Greenhouse Gas Inventories (NGGIs) submitted to the UN Framework Convention on Climate Change (UNFCCC) by 12 countries containing half of all tropical forest area. The two sets of estimates are typical of those to be submitted to the Reducing Emissions from Deforestation and Degradation (REDD+) mechanism of the UNFCCC and the 2023 Global Stocktake of its Paris Agreement, respectively. Assembling the Uncertainty Fingerprint of each estimate shows that Uncertainty Scores are between 10 and 14 for the NGGIs and between 5 and 10 for the FREL reports, so both exceed the threshold of 2 above which it is advisable to evaluate uncertainty by standard statistical methods. Conceptualization uncertainties account for 60% of all uncertainties in the NGGIs and 47% in the FREL reports; for example, there is incomplete coverage of forest carbon fluxes, and limited disaggregation of fluxes between different ecosystem types and forest carbon pools. Of the measurement uncertainties, all FREL reports base forest area estimates on at least medium-resolution satellite data, compared with only 3 NGGIs; after REDD+ Readiness schemes, the mean interval between area mappings has fallen to 2.3 years in Latin America and 3.0 years in Asia, but is still 8.3 years in Africa; and carbon density estimates are based on national forest inventory data in all FREL reports but only 4 NGGIs.
The effectiveness of the Global Stocktake and REDD+ monitoring will therefore be constrained by considerable uncertainties, and reducing these will require a new phase of REDD+ Readiness to ensure more frequent national forest inventories and forest carbon mapping.
Satellite-based flood monitoring, which provides visual information on targeted areas, is crucial for responding to and recovering from river floods. However, such monitoring for practical purposes has been constrained mainly by the difficulty of obtaining and analyzing satellite data, and of linking and optimizing the required processes. To address this, we present a deep learning-based flood area extraction model for a fully automated flood monitoring system, designed to operate continuously on a cloud-based computing platform, regularly extracting flooded areas from Sentinel-1 data and providing visual information on flood situations with improved image segmentation accuracy. To develop the new flood area extraction model, initial model tests were performed more than 500 times to determine the optimal hyperparameters, waterbody ratio, and band combination. The results showed that at a waterbody ratio of 30%, which yielded higher segmentation accuracy and lower loss, the precision, overall accuracy, IoU, recall, and F1 score for the 'VV, aspect, topographic wetness index, and buffer' input bands were 0.976, 0.956, 0.894, 0.964, and 0.970, respectively, and the average inference time was 744.3941 s, demonstrating improved image segmentation accuracy and reduced processing time. The operation and robustness of the fully automated flood monitoring system were demonstrated by automatically segmenting 12 Sentinel-1 images for the two major flood events in the Republic of Korea during 2020 and 2022, in accordance with the hyperparameters, waterbody ratio, and band combination determined through these intensive tests. Visual inspection of the outputs showed that misclassification of constructed facilities and mountain shadows was greatly reduced.
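Accuracy figures of the kind quoted above come from confusion-matrix counts. The sketch below shows how precision, recall, F1, IoU, and overall accuracy are computed for a binary water mask, together with the waterbody ratio used to select training patches; the function names are illustrative assumptions, not the system's actual code.

```python
import numpy as np

def water_ratio(mask: np.ndarray) -> float:
    """Fraction of pixels labelled water (1) in a training patch."""
    return float(np.count_nonzero(mask)) / mask.size

def segmentation_scores(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Binary segmentation metrics from the confusion matrix (water = 1)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = int(np.sum(pred & truth))    # water correctly detected
    fp = int(np.sum(pred & ~truth))   # false alarms
    fn = int(np.sum(~pred & truth))   # missed water
    tn = int(np.sum(~pred & ~truth))  # background correctly rejected
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "iou": tp / (tp + fp + fn),
        "overall_accuracy": (tp + tn) / pred.size,
    }
```

Under the 30% criterion described above, a patch would be kept for training when `water_ratio(mask) >= 0.30`.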
It is anticipated that the fully automated flood monitoring system and the deep learning-based waterbody extraction model presented in this research could serve as a valuable reference and benchmark for other countries seeking to build cloud-based systems for rapid flood monitoring using deep learning.
This study investigated the mitigation of geometric calibration offsets over the ocean without using ground control points. Real-time AIS information on vessels was preprocessed and then matched against vessels detected in the SAR image. The offset between the AIS positions and the detected vessels was measured iteratively, yielding a SAR image with an improved positioning offset. The proposed geolocation enhancement algorithm demonstrated its potential for application in real-time vessel monitoring from remote sensing.
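The iterative offset-measurement step could look like the following sketch, assuming vessel detections and AIS reports are available as planar (x, y) coordinates in metres. The nearest-neighbour matching, the 500 m gate, and all function names are illustrative assumptions, not the paper's algorithm.

```python
import math

def estimate_offset(ais_pts, detected_pts, max_dist=500.0):
    """Match each detected vessel to its nearest AIS report and return the
    mean (dx, dy) shift in metres; matches beyond max_dist are discarded."""
    dxs, dys = [], []
    for (px, py) in detected_pts:
        ax, ay = min(ais_pts, key=lambda a: math.hypot(a[0] - px, a[1] - py))
        if math.hypot(ax - px, ay - py) <= max_dist:
            dxs.append(ax - px)
            dys.append(ay - py)
    if not dxs:
        return (0.0, 0.0)
    return (sum(dxs) / len(dxs), sum(dys) / len(dys))

def correct_iteratively(ais_pts, detected_pts, tol=1.0, max_iter=10):
    """Repeatedly estimate and apply the offset until it falls below tol,
    returning the corrected detections and the accumulated total shift."""
    pts = list(detected_pts)
    total = (0.0, 0.0)
    for _ in range(max_iter):
        dx, dy = estimate_offset(ais_pts, pts)
        pts = [(x + dx, y + dy) for x, y in pts]
        total = (total[0] + dx, total[1] + dy)
        if math.hypot(dx, dy) < tol:
            break
    return pts, total
```

The accumulated total shift would then be applied to the SAR image's geolocation metadata rather than to individual detections.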