2020
DOI: 10.1109/jstars.2020.3028158
A Framework for Land Use Scenes Classification Based on Landscape Photos

Abstract: Space-Earth Integrated Stereoscopic Mapping promotes the progress of Earth observation technologies. Combining remote sensing images, which have a zenith perspective, with ground-level landscape photos, which have slanted viewing angles, improves the efficiency and accuracy of land surveys. Recently, numerous efforts have been devoted to combining deep learning with remote sensing images to classify land use scenes. However, improvement of classification accuracy has been limited because of the lack…
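The abstract describes fusing a zenith-view remote sensing image with a slanted ground-level photo of the same site. As a rough illustration only, and not the authors' actual architecture, a two-stream classifier with late feature fusion might look like the following PyTorch sketch; the backbone choice, class count, and fusion strategy are all assumptions:

```python
# A minimal sketch of a two-stream land use scene classifier, assuming
# late fusion of a zenith-view patch and a ground-level photo. This is an
# illustration, not the framework proposed in the paper.
import torch
import torch.nn as nn
import torchvision.models as models

class TwoStreamLandUseClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # One backbone per viewing angle; weights are not shared because
        # overhead imagery and ground photos have very different statistics.
        # weights=None avoids downloading pretrained weights (torchvision >= 0.13).
        self.zenith_stream = models.resnet18(weights=None)
        self.ground_stream = models.resnet18(weights=None)
        feat_dim = self.zenith_stream.fc.in_features  # 512 for ResNet-18
        self.zenith_stream.fc = nn.Identity()
        self.ground_stream.fc = nn.Identity()
        # Late fusion: concatenate the two feature vectors, then classify.
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, zenith_img: torch.Tensor, ground_img: torch.Tensor) -> torch.Tensor:
        z = self.zenith_stream(zenith_img)
        g = self.ground_stream(ground_img)
        return self.classifier(torch.cat([z, g], dim=1))

# Usage with dummy batches of 224x224 RGB images.
model = TwoStreamLandUseClassifier(num_classes=10)
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```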

Cited by 12 publications (10 citation statements)
References 37 publications
“…Among them, the LIDAR-based geographic mapping method achieves higher accuracy in spatial land use classification planning and is closer to the real environment, but its reconstructions lack texture, reflect only 3D spatial information, and come at a higher cost; the RGBD-camera-based method reconstructs clearer texture, but it is not suitable for large-scale mapping of geographically complex areas. In contrast, the GIS land use spatial classification planning method based on multivision stereo matching can obtain 3D information from 2D images by simulating human binocular vision and applying the principle of stereo vision to adapt to complex geographic environments. It offers automatic, online, noncontact detection, high flexibility, low cost, and clear texture, and can therefore be used to build 3D land use spatial classification planning for geographically large-scale environments [16][17][18].…”
Section: Introduction
confidence: 99%
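To make the stereo-matching idea concrete, here is a minimal sketch using OpenCV's block matcher on a rectified left/right image pair. The file names and calibration values are placeholders, and this is not the pipeline of the cited works:

```python
# A minimal sketch of binocular stereo depth recovery, assuming a rectified
# image pair. Focal length and baseline are hypothetical calibration values.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching estimates per-pixel disparity between the two views,
# mimicking the offset seen by a pair of human eyes.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# Triangulation: depth = focal_length * baseline / disparity,
# valid only where disparity > 0.
focal_px, baseline_m = 700.0, 0.12  # hypothetical values
depth = np.where(disparity > 0, focal_px * baseline_m / disparity, 0.0)
```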
“…Deep filter banks were proposed to combine multicolumn stacked denoising sparse autoencoders (SDSAE) and Fisher vectors (FV) to automatically learn representative and discriminative features in a hierarchical manner for land-use scene classification [29]. Xu et al. proposed a land-use classification framework for photos (LUCFP) and successfully applied it to the automatic verification of land surveys in China [30]. Considering the high level of detail in an ultrahigh-spatial-resolution (UHSR) unmanned aerial vehicle (UAV) dataset, adaptive hierarchical image segmentation optimization, multilevel feature selection, and multiscale supervised machine learning (ML) models were integrated to accurately generate detailed maps of heterogeneous urban areas from the fusion of the UHSR orthomosaic and digital surface model (DSM). This framework exhibited excellent potential for the detailed mapping of heterogeneous urban landscapes [31].…”
Section: Land-use Classification
confidence: 99%
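As a hedged illustration of the multiscale supervised ML idea mentioned above, and not the cited framework itself, the following sketch pools simple per-band statistics over several window sizes and trains a standard classifier. The feature choice (per-band mean/std), the scales, and the random forest are all assumptions:

```python
# A minimal sketch of multiscale supervised classification: features are
# pooled at several window scales around each labeled pixel, then fed to a
# conventional ML classifier. Data here are random stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def multiscale_features(image: np.ndarray, row: int, col: int,
                        scales=(3, 9, 27)) -> np.ndarray:
    """Concatenate per-band mean/std over square windows of several sizes."""
    feats = []
    for s in scales:
        half = s // 2
        r0, r1 = max(0, row - half), min(image.shape[0], row + half + 1)
        c0, c1 = max(0, col - half), min(image.shape[1], col + half + 1)
        window = image[r0:r1, c0:c1]           # (h, w, bands)
        feats.extend(window.mean(axis=(0, 1)))
        feats.extend(window.std(axis=(0, 1)))
    return np.asarray(feats)

# Dummy 5-band stack (e.g., 4-band ortho mosaic + DSM) with point labels.
rng = np.random.default_rng(0)
image = rng.random((100, 100, 5))
samples = [(r, c, rng.integers(0, 3)) for r, c in rng.integers(0, 100, (200, 2))]

X = np.stack([multiscale_features(image, r, c) for r, c, _ in samples])
y = np.array([label for _, _, label in samples])
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```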
“…The synonymous accuracy measures, their formulas, and their producer's/user's accuracy (PA/UA) equivalents group as follows:

Recall [102]; Sensitivity [101]; True Positive Rate (TPR) [101]; Overall Accuracy [103]; Detection Probability [68]; Hit Rate [104] = TP/(TP+FN), i.e., PA for positives
Precision [102]; Positive Predictive Value (PPV) [101] = TP/(TP+FP), i.e., UA for positives
Specificity [105]; True Negative Rate (TNR) [101] = TN/(TN+FP), i.e., PA for negatives
Negative Predictive Value (NPV) [101] = TN/(TN+FN), i.e., UA for negatives
False Positive Rate (FPR) [106]; Probability of False Detection [107]; False Alarm Probability [100] = FP/(TN+FP), i.e., 1 − (PA for negatives)
False Negative Rate (FNR); Missing Detection Probability [100]; Missing Alarm [108]; Misidentification Score [109] = FN/(TP+FN), i.e., 1 − (PA for positives)
False Discovery Rate (FDR); False Alarm Probability [68]; Commission Error [110] = FP/(TP+FP), i.e., 1 − (UA for positives)
Balanced Accuracy [101]
Intersection-over-Union (IoU) [99]; Jaccard Index [115]

Figure 3 above summarizes the frequency at which each accuracy measure is used by papers that focus on binary and multiclass classification types, as well as by scene classification, object detection, semantic segmentation, and instance segmentation applications. A comparison of the graphs indicates that some measures (for example, precision and recall) are used for all types of classification applications, although it is notable that no single measure is used by every single study, even within one category of applications (e.g., multiclass scene identification).…”
Section: Overall Accuracy
confidence: 99%
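The formulas in the quoted table translate directly into code. A minimal sketch computing these measures from raw confusion-matrix counts follows; note that the balanced accuracy and IoU formulas are the conventional definitions, supplied here as assumptions because the extracted table did not preserve their formula cells:

```python
# A minimal sketch of the binary accuracy measures from the table above,
# computed from raw confusion-matrix counts. Example counts are arbitrary.
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    recall = tp / (tp + fn)       # sensitivity, TPR, PA for positives
    precision = tp / (tp + fp)    # PPV, UA for positives
    specificity = tn / (tn + fp)  # TNR, PA for negatives
    npv = tn / (tn + fn)          # UA for negatives
    return {
        "recall/TPR": recall,
        "precision/PPV": precision,
        "specificity/TNR": specificity,
        "NPV": npv,
        "FPR": 1 - specificity,   # probability of false detection
        "FNR": 1 - recall,        # missing detection probability
        "FDR": 1 - precision,     # commission error
        # Conventional definitions, assumed (not recovered from the table):
        "balanced accuracy": (recall + specificity) / 2,
        "IoU/Jaccard": tp / (tp + fp + fn),
    }

print(binary_metrics(tp=40, fp=10, fn=5, tn=45))
```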
“…Similarly, when studies refer to the same metric by different names in different parts of the paper (e.g., the text and tables [105]), or, in some cases, even in the same parts of the paper (e.g., within a single table [68]), communication may also be undermined. The problem is particularly acute when studies compare the ROC and P-R graphs, usually using true positive rate for the ROC, but recall for the same accuracy measure in the P-R graph [106,117,118,152]. Therefore, to the extent that it is possible, it would be preferable for studies to use the most common names in Table 6 (typically, the left column), rather than less common names.…”
Section: Clarity In Terminology
confidence: 99%
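The naming overlap the quote describes is easy to demonstrate: in scikit-learn, the ROC curve's "tpr" array and the precision-recall curve's "recall" array report the same underlying quantity under two names. A minimal sketch with toy scores:

```python
# A minimal sketch showing that ROC "TPR" and P-R "recall" are one measure
# with two names; the labels and scores below are toy data.
import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.55])

fpr, tpr, roc_thresholds = roc_curve(y_true, y_score)
precision, recall, pr_thresholds = precision_recall_curve(y_true, y_score)

# The two functions evaluate different threshold sets, but at any common
# threshold the ROC y-axis (TPR) equals the P-R x-axis (recall).
print(tpr)     # true positive rate across ROC thresholds
print(recall)  # the same measure, reported as "recall" for the P-R curve
```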