2018
DOI: 10.1109/tgrs.2018.2811748
Joint Margin, Cograph, and Label Constraints for Semisupervised Scene Parsing From Point Clouds

Cited by 8 publications (16 citation statements)
References 56 publications
“…To obtain the optimal input parameters, we use a trial and error strategy to determine the appropriate values. More specifically, in the DBSCAN clustering, we obtain good results when using the parameter Eps in the range [0.7, 1.5] and the parameter MinPts in the range [6, 10]. Similarly, in the over-segmented K-means algorithm and the procedure of topological maintenance using the proposed probability density clustering algorithm, we use the thresholds T in the range [200, 300] and the clustering radius h in the range [0.8, 1.5], i.e., our algorithm is insensitive to changes in the parameter settings over a wide range of values.…”
Section: Methods
confidence: 95%
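As a hedged illustration of the quoted parameter ranges (not the cited authors' implementation), the sketch below sweeps Eps and MinPts over those intervals with scikit-learn's off-the-shelf DBSCAN; the synthetic point cloud and the specific grid values are assumptions for demonstration only.

```python
# Illustrative only: sweep the Eps / MinPts ranges quoted above with
# scikit-learn's DBSCAN on a synthetic point cloud. The data and grid
# values are assumptions; they do not reproduce the cited experiments.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two synthetic "objects" plus sparse background outliers (x, y, z in metres).
cloud = np.vstack([
    rng.normal([0, 0, 0], 0.5, size=(200, 3)),
    rng.normal([8, 8, 2], 0.5, size=(200, 3)),
    rng.uniform(-5, 15, size=(50, 3)),
])

for eps in (0.7, 1.0, 1.5):          # Eps range quoted: [0.7, 1.5]
    for min_pts in (6, 10):          # MinPts range quoted: [6, 10]
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(cloud)
        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
        n_noise = int(np.sum(labels == -1))
        print(f"eps={eps:.1f} min_pts={min_pts}: "
              f"{n_clusters} clusters, {n_noise} noise points")
```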
“…Those classification methods can be generally divided into two categories, namely the single point-based method and the cluster-based method. Generally, the single point-based method consists of neighborhood selection, feature extraction and selection, and classification of each individual point [7][8][9][10]. A series of publications on this topic have demonstrated the effectiveness of the approach.…”
Section: Introduction
confidence: 99%
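To make the three-stage per-point pipeline described in this excerpt concrete (neighborhood selection, feature extraction and selection, classification), here is a minimal sketch built from standard scikit-learn components; the synthetic data, the specific features, and all parameter values are assumptions, not the cited method.

```python
# A minimal sketch of the single point-based pipeline: KNN neighborhood
# selection -> simple per-point features -> feature selection -> classifier.
# All data, feature choices, and parameters here are assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
points = rng.uniform(0, 20, size=(1000, 3))          # synthetic point cloud
labels = (points[:, 2] > 10).astype(int)             # toy per-point labels

# 1) Neighborhood selection: K-nearest neighbors for every point.
k = 10
nbrs = NearestNeighbors(n_neighbors=k).fit(points)
_, idx = nbrs.kneighbors(points)
neigh = points[idx]                                   # shape (N, k, 3)

# 2) Feature extraction: a few simple per-neighborhood statistics.
features = np.column_stack([
    points[:, 2],                                     # absolute height
    neigh[:, :, 2].std(axis=1),                       # height spread in neighborhood
    neigh[:, :, 2].max(axis=1) - neigh[:, :, 2].min(axis=1),   # height range
    np.linalg.norm(neigh - neigh.mean(axis=1, keepdims=True),
                   axis=2).mean(axis=1),              # mean distance to centroid
])

# 3) Feature selection and 4) per-point classification.
clf = make_pipeline(SelectKBest(f_classif, k=2),
                    RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```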
“…Contextual information has played a very important role in the recent classification of point clouds [2, 3, 13, 29]. According to the different discriminative powers of different primitives, we categorize the geometric relationships into two grades.…”
Section: Pairwise Features
confidence: 99%
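The excerpt does not spell out the two-grade categorization itself; as a loose illustration of pairwise contextual features between primitives (here, segment centroids and mean normals), the snippet below computes a few generic geometric relations. The quantities and names are assumptions, not the cited paper's definitions.

```python
# Illustrative pairwise geometric relations between segment "primitives":
# horizontal distance, vertical offset, and angle between mean normals.
# Generic contextual features only, not the cited paper's two grades.
import numpy as np

rng = np.random.default_rng(2)
n_segments = 5
centroids = rng.uniform(0, 30, size=(n_segments, 3))        # assumed segment centroids
normals = rng.normal(size=(n_segments, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)    # assumed mean normals

for i in range(n_segments):
    for j in range(i + 1, n_segments):
        d_xy = np.linalg.norm(centroids[i, :2] - centroids[j, :2])  # horizontal distance
        dz = centroids[i, 2] - centroids[j, 2]                      # vertical offset
        cos_angle = np.clip(np.dot(normals[i], normals[j]), -1.0, 1.0)
        angle = np.degrees(np.arccos(abs(cos_angle)))               # angle between normals
        print(f"pair ({i}, {j}): d_xy={d_xy:.2f} m, dz={dz:.2f} m, "
              f"normal angle={angle:.1f} deg")
```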
“…Those classification methods can be mainly classified into two categories: single point-based methods and point set-based methods. Generally, the single point-based methods consist of neighborhood selection, feature extraction, and classification of each single point [5][6][7][8][9]. Among them, the methods of neighborhood selection mainly use a radius, a cylindrical region, or K-nearest neighbors (KNN) [7, 8] to construct the neighborhood.…”
Section: Introduction
confidence: 99%
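As a sketch of the three neighborhood definitions named in this excerpt (spherical radius, cylindrical region, and KNN), the code below queries each one with SciPy KD-trees; the radii, K, and the synthetic cloud are demonstration values, not those of the cited works.

```python
# Three common neighborhood definitions for a query point: spherical radius,
# (infinite) vertical cylinder, and K-nearest neighbors. Radii and K are
# arbitrary demonstration values.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
cloud = rng.uniform(0, 10, size=(5000, 3))
query = np.array([5.0, 5.0, 5.0])

tree_3d = cKDTree(cloud)            # full 3-D tree for sphere and KNN queries
tree_2d = cKDTree(cloud[:, :2])     # 2-D tree (x, y) for the cylindrical query

# Spherical neighborhood: all points within radius r of the query point.
sphere_idx = tree_3d.query_ball_point(query, r=1.0)

# Cylindrical neighborhood: radius in the x-y plane only, any height.
cylinder_idx = tree_2d.query_ball_point(query[:2], r=1.0)

# K-nearest-neighbor neighborhood: fixed number of neighbors, variable extent.
_, knn_idx = tree_3d.query(query, k=20)

print(f"sphere: {len(sphere_idx)} pts, cylinder: {len(cylinder_idx)} pts, "
      f"KNN: {len(knn_idx)} pts")
```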
“…The low-level features include the normal vector and elevation feature [5, 8], spin image [6, 10], covariance eigenvalue feature [11], viewpoint feature histogram (VFH) [12], and clustered viewpoint feature histogram (CVFH) [13], among others. Higher-level features are mainly extracted by manifold learning [9, 14], low-rank representation [15], sparse representation [6, 16], and so on [17, 18]. The most popular classifiers include linear classifiers [19], random forests [20], AdaBoost [21], and SVM (support vector machine) [22].…”
Section: Introduction
confidence: 99%
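To make the "covariance eigenvalue feature" and the classifier list more concrete, here is a hedged sketch: per-point eigenvalue features (linearity, planarity, scattering, a common formulation in the point-cloud literature, not necessarily the one in [11]) fed to the random forest, AdaBoost, and SVM classifiers named above, all from scikit-learn. The data and hyperparameters are invented.

```python
# Covariance eigenvalue features per point (linearity, planarity, scattering),
# then a quick comparison of three classifiers named in the excerpt.
# Synthetic data and default hyperparameters; illustrative only.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)
# Toy scene: a flat "ground" patch (class 0) and a vertical "pole" (class 1).
ground = np.column_stack([rng.uniform(0, 10, (500, 2)), rng.normal(0, 0.05, 500)])
pole = np.column_stack([rng.normal(5, 0.05, (500, 2)), rng.uniform(0, 5, 500)])
cloud = np.vstack([ground, pole])
labels = np.r_[np.zeros(500, dtype=int), np.ones(500, dtype=int)]

# Local covariance eigenvalues over KNN neighborhoods.
_, idx = NearestNeighbors(n_neighbors=15).fit(cloud).kneighbors(cloud)
neigh = cloud[idx] - cloud[idx].mean(axis=1, keepdims=True)
cov = np.einsum('nki,nkj->nij', neigh, neigh) / idx.shape[1]
eig = np.sort(np.linalg.eigvalsh(cov), axis=1)[:, ::-1]     # l1 >= l2 >= l3
l1, l2, l3 = eig[:, 0], eig[:, 1], eig[:, 2]
features = np.column_stack([(l1 - l2) / l1,                 # linearity
                            (l2 - l3) / l1,                 # planarity
                            l3 / l1])                       # scattering

for name, clf in [("random forest", RandomForestClassifier(random_state=0)),
                  ("AdaBoost", AdaBoostClassifier(random_state=0)),
                  ("SVM", SVC())]:
    clf.fit(features, labels)
    print(f"{name}: training accuracy = {clf.score(features, labels):.3f}")
```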