High-resolution remote sensing image-based land-use scene classification is a difficult task: given a land-use scene image, the goal is to recognize its semantic category from prior knowledge. Land-use scenes often cover multiple land-cover classes or ground objects, which makes a scene complex and difficult to represent and recognize. To address this problem, this paper applies the well-known bag-of-visual-words (BOVW) model, which has been very successful in natural image scene classification. However, many existing BOVW methods use only scale-invariant feature transform (SIFT) features to construct visual vocabularies, leaving other features and feature combinations uninvestigated, and they are also sensitive to the rotation of image scenes. This paper therefore presents a concentric circle-based, rotation-invariant strategy for representing the spatial information of visual words and proposes a concentric circle-structured multiscale BOVW method using multiple features for land-use scene classification. Experiments on public land-use scene classification datasets demonstrate that the proposed method outperforms many existing BOVW methods and is well suited to the land-use scene classification problem.
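The core BOVW encoding the abstract refers to can be summarized in a few lines: local descriptors (e.g. SIFT) are assigned to their nearest visual word in a pre-built vocabulary, and the image is represented by the normalized histogram of word occurrences. The sketch below is a minimal, generic illustration of that encoding step with toy 2-D "descriptors"; it does not reproduce the paper's concentric-circle spatial representation or multiscale extension.

```python
import numpy as np

def bovw_histogram(descriptors, vocabulary):
    """Encode local descriptors as an L1-normalized bag-of-visual-words histogram."""
    # Euclidean distance from every descriptor to every visual word
    d = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = d.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()  # one normalized bin per visual word

# Toy example: 2-D "descriptors" and a 3-word vocabulary
vocab = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
desc = np.array([[0.1, 0.0], [0.9, 1.1], [5.2, 4.9], [0.0, 0.2]])
print(bovw_histogram(desc, vocab))
```

In practice the vocabulary is learned by clustering (typically k-means) over descriptors from a training set, and the resulting histograms are fed to a classifier such as an SVM.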
Formulated as a pixel-level labeling task, data-driven neural segmentation models for detecting clouds and their shadows have achieved promising results in remote sensing imagery processing. However, the limited ability of these methods to delineate the boundaries of clouds and shadows remains a central issue for precise cloud and shadow detection. In this paper, we focus on rough cloud and shadow localization and fine-grained refinement of cloud boundaries on the Landsat 8 OLI dataset, and propose Refined UNet to achieve this goal. To this end, a data-driven UNet-based coarse prediction and a fully connected conditional random field (Dense CRF) are concatenated: the UNet, trained from scratch with adaptive weights for balancing categories, locates clouds and cloud shadows roughly, while the Dense CRF refines the cloud boundaries. As a result, Refined UNet yields sharper and more precise cloud and shadow proposals. The experiments show that our model can propose cloud and shadow segmentations that are sharper and more precise than the ground truths. Additionally, evaluations on the Landsat 8 OLI imagery dataset with Blue, Green, Red, and NIR bands show that our model can feasibly segment clouds and shadows on four-band imagery.
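The "adaptive weights for balancing categories" mentioned above typically means weighting the cross-entropy loss by inverse class frequency, so that rare classes (e.g. thin clouds or small shadows) are not dominated by the abundant clear-sky class. The sketch below shows one common inverse-frequency scheme as an illustration; it is an assumption, not necessarily the exact weighting used by Refined UNet.

```python
import numpy as np

def balanced_class_weights(labels, n_classes):
    """Inverse-frequency class weights for a weighted cross-entropy loss.

    Rare classes receive larger weights; the weights are normalized so they
    average to 1 across classes. One common balancing scheme, shown here for
    illustration only.
    """
    counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    freq = counts / counts.sum()
    w = 1.0 / np.maximum(freq, 1e-12)  # inverse frequency, guarded against empty classes
    return w / w.sum() * n_classes     # normalize: mean weight == 1

# Toy label map: 0 = clear, 1 = cloud, 2 = shadow
labels = np.array([[0, 0, 0, 1],
                   [0, 0, 1, 1],
                   [0, 2, 0, 0]])
print(balanced_class_weights(labels, 3))
```

The resulting per-class weights would multiply the per-pixel cross-entropy terms during UNet training; the Dense CRF refinement stage (e.g. via the `pydensecrf` library) then sharpens the coarse prediction's boundaries as a separate post-processing step.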
Feature significance-based multibag-of-visual-words model for remote sensing image scene classification

Abstract. To obtain a complete representation of scene information in high-spatial-resolution remote sensing scene images, an increasing number of studies have turned to the bag-of-visual-words model built on multiple low-level feature types (multi-BOVW), for which the two-phase classification-based multi-BOVW method is one of the most popular approaches. However, this method ignores feature significance information among the different feature types in the score-level fusion stage, which limits the classification performance of multi-BOVW methods. To address this limitation, a feature significance-based multi-BOVW scene classification method is proposed that integrates information on each feature type's ability to separate different scene categories into the traditional two-phase classification-based score-level fusion framework, so that different feature channels are treated differently when classifying different scene categories. Experimental results show that the proposed method outperforms traditional score-level fusion-based multi-BOVW methods and effectively exploits feature significance information in multiclass remote sensing image scene classification tasks.

Zhao, Tang, and Huo: Feature significance-based multibag-of-visual-words model. Downloaded From: http://remotesensing.spiedigitallibrary.org/ on 09/11/2016. Terms of Use: http://spiedigitallibrary.org/ss/termsofuse.aspx

Fig. 3 Illustration of the BOVW model.
Fig. 4 Flowchart of the traditional two-phase classification-based multi-BOVW model.
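The score-level fusion the abstract describes can be sketched as a weighted combination of per-channel classifier scores, where the weights encode each feature channel's significance for each scene category. The example below is a minimal illustration under that reading; the significance values are hypothetical, and the paper's actual method for learning them from feature separating capabilities is not reproduced here.

```python
import numpy as np

def fuse_scores(channel_scores, significance):
    """Score-level fusion with per-(channel, class) significance weights.

    channel_scores: (n_channels, n_classes) classifier scores for one image.
    significance:   (n_channels, n_classes) weights reflecting each feature
                    type's per-class separating ability (hypothetical values
                    in the example below, for illustration only).
    Returns the index of the predicted scene class.
    """
    fused = (channel_scores * significance).sum(axis=0)  # weighted sum over channels
    return int(fused.argmax())

# Two feature channels (e.g. SIFT and color histograms), three scene classes
scores = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.5, 0.3]])
sig = np.array([[1.0, 0.5, 1.0],
                [0.5, 1.5, 1.0]])
print(fuse_scores(scores, sig))  # → 1
```

With uniform significance weights the two channels would contribute equally; the per-class weights let a channel that separates a given category well dominate the decision for that category, which is the intuition behind the proposed method.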