Recently, convolutional neural networks (CNNs) have shown significant advantages in image classification tasks; however, they usually require a large number of labeled samples for training. In practice, it is difficult and costly to obtain sufficient labeled samples of polarimetric synthetic aperture radar (PolSAR) images. To address this problem, we propose a novel semi-supervised classification method for PolSAR images based on the co-training of a CNN and a support vector machine (SVM). In our co-training method, an eight-layer CNN with a residual network (ResNet) architecture is designed as the primary classifier, and an SVM is used as the auxiliary classifier; in particular, the SVM enhances the performance of our algorithm when labeled samples are limited. Through a two-stage co-training of the CNN and SVM, more and more pseudo-labeled samples are iteratively generated for training, gradually improving the performance of both classifiers. The trained CNN is employed as the final classifier owing to its strong classification capability once enough samples are available. We carried out experiments on two L-band airborne PolSAR images acquired by the AIRSAR system and a C-band spaceborne PolSAR image acquired by the GaoFen-3 system. The experimental results demonstrate that the proposed method effectively integrates the complementary advantages of the SVM and CNN, achieving overall classification accuracies of more than 97%, 96%, and 93% with limited labeled samples (10 samples per class) on the above three images, respectively, which is superior to state-of-the-art semi-supervised methods for PolSAR image classification.
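The iterative pseudo-labeling loop described in this abstract can be sketched in a generic form. This is not the paper's implementation: `co_train`, `MidpointClf`, and the `fit`/`predict`/`confidence` interface are hypothetical names we introduce for illustration, and a toy 1-D threshold classifier stands in for both the CNN and the SVM.

```python
def co_train(primary, auxiliary, labeled, unlabeled, rounds=5, top_k=2):
    """Sketch of two-classifier co-training: in each round, every classifier
    pseudo-labels its most confident unlabeled samples for the *other* one."""
    labeled_a = list(labeled)   # training pool for the primary (the paper's CNN)
    labeled_b = list(labeled)   # training pool for the auxiliary (the paper's SVM)
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        primary.fit(labeled_a)
        auxiliary.fit(labeled_b)
        for src, dst in ((primary, labeled_b), (auxiliary, labeled_a)):
            # take the top_k samples the source classifier is most sure about
            confident = sorted(pool, key=lambda x: -src.confidence(x))[:top_k]
            for x in confident:
                dst.append((x, src.predict(x)))  # pseudo-labeled sample
                pool.remove(x)
    primary.fit(labeled_a)      # the primary classifier is used as the final one
    return primary

class MidpointClf:
    """Toy 1-D stand-in classifier: threshold at the midpoint of class means."""
    def fit(self, data):        # data: list of (x, label) with label in {0, 1}
        xs0 = [x for x, y in data if y == 0]
        xs1 = [x for x, y in data if y == 1]
        self.t = (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2
    def predict(self, x):
        return 1 if x > self.t else 0
    def confidence(self, x):    # farther from the boundary = more confident
        return abs(x - self.t)

labeled = [(0.0, 0), (10.0, 1)]
model = co_train(MidpointClf(), MidpointClf(), labeled, [1.0, 2.0, 8.0, 9.0])
print(model.predict(7.0))   # → 1
```

The key design point mirrored from the abstract is the asymmetry: both classifiers feed each other pseudo-labels during training, but only the primary one is retained at the end.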
To improve the accuracy and efficiency of airborne LiDAR point cloud classification, a classification algorithm based on LightGBM is proposed and its performance is tested on urban point cloud data. First, a LightGBM-1 classifier roughly classifies the point cloud. Ground points are then extracted and used to normalize the heights of the non-ground points. Next, multi-scale neighborhood features of building and vegetation points are extracted, and a LightGBM-2 classifier finely separates building points from vegetation points. The algorithm is validated on urban point cloud data, and the classification effect is evaluated in terms of accuracy and running time. Experimental results show that, compared with other algorithms, the proposed algorithm effectively improves classification quality and achieves effective classification of point clouds in urban areas.
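The height-normalization step between the two LightGBM stages can be illustrated with a minimal sketch. The abstract does not specify the exact normalization method, so we assume a common approach: subtract the elevation of the nearest ground point from each non-ground point; `normalize_heights` is our name, and the brute-force nearest-neighbor search would be replaced by a grid or k-d tree in a real pipeline.

```python
def normalize_heights(ground_pts, nonground_pts):
    """For each non-ground (x, y, z) point, subtract the elevation z of the
    nearest ground point in the x-y plane, yielding height above ground.
    Brute-force O(n*m) search -- a sketch, not a production implementation."""
    normalized = []
    for x, y, z in nonground_pts:
        nearest = min(ground_pts,
                      key=lambda g: (g[0] - x) ** 2 + (g[1] - y) ** 2)
        normalized.append((x, y, z - nearest[2]))
    return normalized

ground = [(0.0, 0.0, 10.0), (5.0, 0.0, 12.0)]
points = [(0.4, 0.0, 13.0), (5.1, 0.0, 18.0)]
print(normalize_heights(ground, points))  # heights above ground: 3.0 and 6.0
```

Normalized height is what lets the second-stage classifier separate buildings from vegetation on terrain of varying elevation.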
Building extraction from urban LiDAR point clouds has been a research hot spot in recent years, but accurately distinguishing vegetation, buildings, and other man-made objects remains difficult. To address the problem of low classification accuracy, this paper proposes a point cloud classification algorithm based on ICSF ground filtering and a weakly correlated random forest. First, the data are ground-filtered with the ICSF algorithm; decision trees are then constructed, and their correlation is analyzed using the maximal information coefficient. The decision trees with the smallest mutual correlation and the highest accuracy are selected to form the random forest, and their decisions are combined by weighted voting to complete the point cloud classification. The model is validated on the Vaihingen urban dataset, and feature importance is ranked by the mean decrease in accuracy. Compared with the traditional random forest classification algorithm, the classification accuracy is improved by 4.2%, and the model training time is shortened.
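The tree-selection idea can be sketched as follows. This is an illustrative approximation, not the paper's algorithm: simple prediction agreement stands in for the maximal-information-coefficient correlation, and `select_weak_corr`, `weighted_vote`, and the greedy "accuracy minus mean agreement" score are our hypothetical choices.

```python
def agreement(p, q):
    """Fraction of validation samples on which two trees predict the same
    label -- a crude stand-in for the paper's MIC-based correlation."""
    return sum(a == b for a, b in zip(p, q)) / len(p)

def select_weak_corr(preds, truth, k):
    """Greedily pick k trees: start from the most accurate, then repeatedly
    add the tree maximizing (accuracy - mean agreement with chosen trees)."""
    acc = [sum(a == t for a, t in zip(p, truth)) / len(truth) for p in preds]
    chosen = [max(range(len(preds)), key=lambda i: acc[i])]
    while len(chosen) < k:
        rest = [i for i in range(len(preds)) if i not in chosen]
        chosen.append(max(rest, key=lambda i: acc[i] - sum(
            agreement(preds[i], preds[j]) for j in chosen) / len(chosen)))
    return chosen, acc

def weighted_vote(preds, weights, sample):
    """Accuracy-weighted vote of the selected trees for one sample index."""
    tally = {}
    for p, w in zip(preds, weights):
        tally[p[sample]] = tally.get(p[sample], 0.0) + w
    return max(tally, key=tally.get)

# Four trees' predictions on five validation samples, plus the ground truth.
preds = [[0, 1, 1, 1, 1], [0, 1, 1, 0, 0], [0, 0, 1, 1, 1], [0, 1, 1, 1, 0]]
truth = [0, 1, 1, 0, 1]
chosen, acc = select_weak_corr(preds, truth, 2)
print(chosen)  # → [0, 1]: tree 1 is as accurate as 2 and 3 but less correlated with tree 0
```

The intuition matches the abstract: an ensemble of accurate but *disagreeing* trees generalizes better than one built from near-duplicates.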
LiDAR technology has been widely applied in remote sensing and computer vision. To overcome the inefficiency of filter-then-extract methods, an algorithm combining Delaunay TIN models and region growing is proposed for more efficient building extraction. First, a Delaunay TIN is built on the raw LiDAR points to establish connectivity among the discrete points. Based on the geometric properties of the triangles in which edge points lie, protrusion edge points are extracted. These edge points are then used as seed points for region growing, which yields protrusion point sets via the triangle-network connections. Finally, since non-building point sets are usually much smaller than building point sets, the non-building sets can be removed with a size threshold. The algorithm thus extracts building points without a separate filtering operation; simulation results indicate that it improves the efficiency of building extraction while maintaining accuracy across different scenarios.
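The region-growing and size-threshold steps can be sketched generically. This is a hedged illustration, not the paper's code: a plain adjacency dictionary stands in for the Delaunay TIN edge connectivity, and `grow_regions`/`keep_large` are names we introduce.

```python
from collections import deque

def grow_regions(adjacency, seeds):
    """Region growing by flood fill: starting from each seed point, collect
    every point reachable through the connectivity graph (here an adjacency
    dict standing in for the Delaunay TIN's triangle edges)."""
    regions, visited = [], set()
    for s in seeds:
        if s in visited:
            continue
        region, queue = set(), deque([s])
        visited.add(s)
        while queue:
            p = queue.popleft()
            region.add(p)
            for q in adjacency.get(p, ()):
                if q not in visited:
                    visited.add(q)
                    queue.append(q)
        regions.append(region)
    return regions

def keep_large(regions, min_size):
    """Discard small protrusion sets (non-building clutter) by size threshold."""
    return [r for r in regions if len(r) >= min_size]

# One large connected component (a building) and one small one (clutter).
adjacency = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1], 4: [2], 10: [11], 11: [10]}
regions = grow_regions(adjacency, seeds=[0, 10])
print(keep_large(regions, min_size=4))  # → [{0, 1, 2, 3, 4}]
```

The final size threshold is what lets the method skip an explicit ground-filtering pass: small protrusions are simply dropped.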
Differential box-counting (DBC) has proved to be the least complex and most convenient way to calculate the fractal dimension of an image. However, for low-resolution images, the presence of empty boxes degrades the accuracy of the estimated fractal dimension. To reduce this effect, a new approach, actual differential box-counting (ADBC), is proposed in this paper. First, the empty boxes are divided into two categories: truly empty boxes and potentially occupied ones. Then, the probability that an empty box would be occupied at a higher resolution is determined by relating the fractional Brownian surface model to the spatial distribution of the pixels' gray levels. In this way, a more accurate fractal dimension can be obtained even when the image resolution is limited. Experimental tests also indicate that, with essentially the same computational complexity, ADBC effectively improves the accuracy of the fractal dimension.
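For context, the classical DBC baseline that ADBC refines can be sketched as follows. This is a minimal textbook-style implementation (not the paper's ADBC), assuming a square list-of-lists image with 256 gray levels; `dbc_dimension` is our name for it.

```python
import math

def dbc_dimension(img, sizes, gray_levels=256):
    """Classical differential box-counting: at each scale s, partition the
    M x M image into s x s blocks, count the gray-level boxes of height
    s * gray_levels / M spanned between each block's min and max intensity,
    then fit log(N_r) against log(1/r) by least squares."""
    m = len(img)
    xs, ys = [], []
    for s in sizes:
        h = s * gray_levels / m          # box height at this scale
        count = 0
        for i in range(0, m, s):
            for j in range(0, m, s):
                block = [img[a][b] for a in range(i, min(i + s, m))
                                   for b in range(j, min(j + s, m))]
                count += (math.ceil((max(block) + 1) / h)
                          - math.ceil((min(block) + 1) / h) + 1)
        xs.append(math.log(m / s))       # log(1/r)
        ys.append(math.log(count))       # log(N_r)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # least-squares slope = estimated fractal dimension
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

flat = [[100] * 16 for _ in range(16)]   # a flat surface: dimension 2
print(round(dbc_dimension(flat, [2, 4, 8]), 3))  # → 2.0
```

A perfectly flat image gives dimension 2 (a plane) and a maximally rough checkerboard gives 3; the empty-box problem this paper targets shows up between those extremes at low resolution.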