Compared with monoculture planting, crop rotation improves fertilizer efficiency and increases crop yield. Large-scale crop rotation monitoring relies on crop classification using remote sensing technology; however, limited classification accuracy hinders the accurate identification of crop rotation patterns. In this paper, a crop classification and rotation mapping scheme is proposed that combines the random forest (RF) algorithm with new statistical features extracted from time-series Ground Range Detected (GRD) Sentinel-1 images. First, synthetic aperture radar (SAR) time-series stacks are established for the VH, VV, and VH/VV channels. Then, new statistical features, named object-based generalized gamma distribution (OGΓD) features, are introduced and compared with other object-based features for each polarization. The results show that the OGΓD σVH feature achieved an overall accuracy (OA) of 96.66% and a Kappa coefficient of 95.34%, improvements of around 4% and 6%, respectively, over the object-based backscatter in VH polarization. Finally, annual crop-type maps for five consecutive years (2017–2021) are generated using the OGΓD σVH feature and the RF classifier. Analysis of the five-year crop sequences shows that soybean-corn (corn-soybean) is the most representative rotation in the study region, and that soybean-corn-soybean-corn-soybean (together with corn-soybean-corn-soybean-corn) has the highest count, with 100 occurrences (25.20% of the total area). This study offers new insights into crop rotation monitoring and provides basic data for government food-planning decisions.
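The classification stage of a pipeline like this can be sketched with scikit-learn: a random forest is trained on per-object time-series features and evaluated with the same metrics the abstract reports (OA and Kappa). This is a minimal illustration, not the authors' implementation; the feature matrix below is random stand-in data in place of real OGΓD σVH features, and the two-class corn/soybean labels are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for per-object OGGD sigma features from a VH time series
# (e.g., one sigma estimate per Sentinel-1 acquisition date).
n_objects, n_dates = 500, 30
X = rng.normal(size=(n_objects, n_dates))
y = rng.integers(0, 2, size=n_objects)  # 0 = corn, 1 = soybean (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

oa = accuracy_score(y_te, pred)        # overall accuracy (OA)
kappa = cohen_kappa_score(y_te, pred)  # Kappa coefficient
```

With real features, crop sequences (and hence rotation patterns such as soybean-corn) would then be read off by stacking the per-year predictions for each object.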
Inspired by the tremendous success of deep learning (DL) and the increased availability of remote sensing data, DL-based image semantic segmentation has attracted growing interest in the remote sensing community. The ideal scenario for DL application requires a vast amount of annotated data with the same feature distribution as the area of interest. However, obtaining such large training sets that match the data distribution of the target area is highly time-consuming and costly. Consistency-regularization-based semi-supervised learning (SSL) methods have gained growing popularity thanks to their ease of implementation and remarkable performance, yet applications of SSL in remote sensing remain limited. This study comprehensively analyzed several advanced consistency-regularization-based SSL methods from the perspective of data- and model-level perturbation. Then, an end-to-end SSL approach based on a hybrid perturbation paradigm was introduced to improve the DL model's performance with a limited number of labels. The proposed method integrates semantic boundary information to generate more meaningful mixed images when performing data-level perturbation. Additionally, by using implicit pseudo-supervision based on model-level perturbation, it eliminates the need to set extra threshold parameters during training. Furthermore, it can be flexibly paired with the DL model in an end-to-end manner, as opposed to the separate training stages used in traditional pseudo-labeling. Experimental results on five remote sensing benchmark datasets, covering the segmentation of roads, buildings, and land cover, demonstrated the effectiveness and robustness of the proposed approach. It is particularly encouraging that the ratio of the accuracy obtained using the proposed method with 5% of labels to that obtained using purely supervised training with 100% of labels exceeded 89% on all benchmark datasets.
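The core of consistency regularization is simple: the model's predictions on an unlabeled input and on a perturbed view of that input are pulled toward each other by an unsupervised loss. The numpy sketch below illustrates that idea in isolation (a mean-squared-error consistency term over softmax probabilities); it is a generic sketch under that assumption, not the hybrid data/model-perturbation scheme of the paper, and the logits are random stand-ins for network outputs.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_perturbed, logits_clean):
    # MSE between class-probability maps: the prediction on the
    # perturbed view is pulled toward the prediction on the clean view,
    # requiring no labels and no confidence threshold.
    p = softmax(logits_perturbed)
    q = softmax(logits_clean)
    return float(np.mean((p - q) ** 2))

rng = np.random.default_rng(0)
clean = rng.normal(size=(4, 3))                     # logits on an unlabeled batch
perturbed = clean + 0.1 * rng.normal(size=(4, 3))   # data/model perturbation proxy
loss = consistency_loss(perturbed, clean)
```

In a full SSL pipeline this term is added to the supervised loss on the labeled subset, so the unlabeled data shapes the decision boundary without explicit annotations.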
Due to wind-induced waves, dry sand, wet snow, and terrain shadows, lake extraction from synthetic aperture radar (SAR) imagery over the Qinghai-Tibet Plateau (QTP) suffers from false alarms. In this paper, a practical plateau lake extraction algorithm combining novel statistical features with the Kullback-Leibler distance (KLD) on SAR imagery is proposed. First, a mathematical description of the plateau lake surface, called object-based generalized gamma distribution (OGΓD) features, is proposed; it suppresses false alarms by using spatial context information as a large-scale descriptor. Second, a random forest (RF) classifier is trained on a multi-feature set, including conventional texture features and the OGΓD features, and outputs an initial labeling result. Finally, automatic post-processing based on the KLD is applied to suppress the remaining false alarms in the initial lake extraction results. The algorithm is tested in several experiments using Sentinel-1 SAR data and outperforms state-of-the-art algorithms, achieving an overall accuracy (OA) of 99.54% while maintaining a false-alarm rate (FR) of 0.32%.
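The KLD-based post-processing step rests on comparing the intensity distribution of a candidate region against a reference water distribution and rejecting candidates that are too far away. A minimal numpy sketch of that idea, using a symmetric KL distance between normalized backscatter histograms, is shown below; the histograms are made-up stand-ins, and the authors' actual formulation (e.g., a closed-form KLD between fitted OGΓD models) may differ.

```python
import numpy as np

def kl_distance(p, q, eps=1e-12):
    # Symmetric Kullback-Leibler distance between two discrete
    # distributions (e.g., backscatter histograms of a candidate
    # region and a reference lake region).
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Illustrative histograms over backscatter bins (low -> high intensity).
lake_ref   = np.array([5.0, 40.0, 80.0, 30.0, 5.0])   # reference water
candidate  = np.array([6.0, 38.0, 78.0, 32.0, 6.0])   # true lake candidate
shadow     = np.array([60.0, 30.0, 8.0, 1.0, 1.0])    # terrain-shadow false alarm

d_true  = kl_distance(lake_ref, candidate)  # small: keep as lake
d_false = kl_distance(lake_ref, shadow)     # large: reject as false alarm
```

Thresholding this distance then discriminates genuine water bodies from look-alikes such as terrain shadows that a pixel-wise classifier alone confuses.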
In the aftermath of a natural hazard, rapid and accurate building damage assessment from remote sensing imagery is crucial for disaster response and rescue operations. Although recent deep-learning-based studies have considerably improved building damage assessment, most state-of-the-art works focus on pixel-based, multi-stage approaches, which are complicated and suffer from partial damage recognition at the building-instance level. Meanwhile, acquiring sufficient labeled samples for deep learning applications is usually time-consuming, making a conventional supervised learning pipeline with vast annotated data unsuitable in time-critical disaster cases. In this study, we present an end-to-end building damage assessment framework integrating multitask semantic segmentation with semi-supervised learning to tackle these issues. Specifically, a multitask-based Siamese network followed by object-based post-processing is first constructed to solve the semantic inconsistency problem by refining damage classification results with building extraction results. Moreover, to alleviate labeled-data scarcity, a consistency-regularization-based semi-supervised semantic segmentation scheme with iteratively perturbed dual mean teachers is specially designed, which significantly reinforces network perturbation to improve model performance while maintaining high training efficiency. Furthermore, a confidence weighting strategy is embedded into the semi-supervised pipeline to focus on convincing samples and reduce the influence of noisy pseudo-labels. Comprehensive experiments on three benchmark datasets suggest that the proposed method is competitive and effective in building damage assessment when labels are insufficient, offering a potential artificial-intelligence-based solution to the urgent need for timeliness and accuracy in disaster events.
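The confidence weighting idea can be sketched without the full Siamese/mean-teacher machinery: each pseudo-labeled sample is weighted by the teacher's maximum class probability, so uncertain pseudo-labels contribute less to the unsupervised loss without a hard confidence threshold. The sketch below is a minimal, assumed formulation with random stand-in probabilities, not the paper's exact loss.

```python
import numpy as np

def confidence_weights(probs):
    # Per-sample weight = maximum class probability of the teacher,
    # so low-confidence pseudo-labels are softly down-weighted.
    return probs.max(axis=-1)

def weighted_pseudo_ce(probs, pseudo_labels, weights, eps=1e-12):
    # Cross-entropy of the (student) probabilities against the
    # pseudo-labels, averaged with the confidence weights.
    picked = np.take_along_axis(probs, pseudo_labels[..., None], axis=-1)[..., 0]
    ce = -np.log(picked + eps)
    return float((weights * ce).sum() / (weights.sum() + eps))

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=(8,))  # teacher class probabilities
pseudo = probs.argmax(axis=-1)                # hard pseudo-labels
w = confidence_weights(probs)
loss = weighted_pseudo_ce(probs, pseudo, w)
```

In training, this weighted term would be added to the supervised loss on the labeled subset, letting confident pseudo-labels dominate while noisy ones are suppressed.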