2019
DOI: 10.1109/tgrs.2018.2890508

PSASL: Pixel-Level and Superpixel-Level Aware Subspace Learning for Hyperspectral Image Classification

Cited by 25 publications (20 citation statements)
References 57 publications
“…Since the original single-point multi-scale features are low-level features, they do not express each point's attributes distinctly. To make point cloud features more prominent and effective, BoW, low-rank representation, manifold learning, and sparse coding are commonly used for feature selection [6,14,15]. Sparse coding, which learns an over-complete set of basis vectors to represent samples more efficiently, has significant advantages in dictionary learning and feature representation.…”
Section: LLC-Based Dictionary Learning and Sparse Coding for Single P… (mentioning)
confidence: 99%
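The statement above hinges on sparse coding over an over-complete dictionary. A minimal sketch of that idea, assuming scikit-learn's DictionaryLearning on synthetic per-point features; the matrix shapes and the sparsity weight alpha are illustrative choices, not settings from the cited papers:

```python
# Sparse coding over an over-complete dictionary: more atoms (64) than
# feature dimensions (32), so each sample is represented by a few atoms.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 32))            # 500 per-point feature vectors, 32-D

dico = DictionaryLearning(n_components=64,    # > 32 features -> over-complete basis
                          alpha=1.0,          # larger alpha -> sparser codes
                          max_iter=200,
                          transform_algorithm="lasso_lars",
                          random_state=0)
codes = dico.fit_transform(X)                 # sparse codes, shape (500, 64)
print(codes.shape, "nonzero fraction:", np.mean(codes != 0))
```

The learned sparse codes would then replace the raw low-level features as input to a downstream classifier.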
“…The low-level features include the normal vector and elevation feature [5,8], the spin image [6,10], covariance eigenvalue features [11], the viewpoint feature histogram (VFH) [12], and the clustered viewpoint feature histogram (CVFH) [13], among others. Higher-level features are mainly extracted by manifold learning [9,14], low-rank representation [15], sparse representation [6,16], and so on [17,18]. The most popular classifiers include linear classifiers [19], random forests [20], AdaBoost [21], and the support vector machine (SVM) [22].…”
Section: Introduction (mentioning)
confidence: 99%
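Among the low-level descriptors listed, the covariance eigenvalue feature is easy to make concrete. A sketch assuming the common eigenvalue-based shape descriptors (linearity, planarity, sphericity); the neighborhood size and feature names follow the general literature, not reference [11] specifically:

```python
# Covariance eigenvalue features for one point's local neighborhood:
# eigenvalues of the 3x3 covariance describe the neighborhood's shape.
import numpy as np

def covariance_eigen_features(neighborhood: np.ndarray) -> np.ndarray:
    """neighborhood: (k, 3) array holding a point's k nearest neighbors."""
    cov = np.cov(neighborhood, rowvar=False)         # 3x3 covariance matrix
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]     # l1 >= l2 >= l3 >= 0
    l1, l2, l3 = lam / lam.sum()                     # scale-invariant
    return np.array([(l1 - l2) / l1,                 # linearity
                     (l2 - l3) / l1,                 # planarity
                     l3 / l1])                       # sphericity

# An elongated synthetic neighborhood scores high on linearity.
pts = np.random.default_rng(1).standard_normal((20, 3)) * [5.0, 1.0, 0.1]
print(covariance_eigen_features(pts))
```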
“…Li et al. [4] proposed a deep-learning network based on multi-level voxel feature fusion for point cloud classification. The feature dimensions used by the above methods are relatively high and are found to carry noise and redundant information [13]. To overcome this drawback, dimensionality reduction and sparse representation are widely used.…”
Section: Introduction (mentioning)
confidence: 99%
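To make the dimensionality-reduction remedy concrete, a minimal sketch assuming PCA with a retained-variance threshold; the 128-D input and the 95% cutoff are illustrative assumptions, not the cited methods' settings:

```python
# PCA-based dimensionality reduction: keep the components that explain
# 95% of the variance and drop the noisy/redundant remainder.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.default_rng(2).standard_normal((1000, 128))  # high-dimensional features
pca = PCA(n_components=0.95)                               # retain 95% of variance
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape)
```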
“…Therefore, if we can successfully classify a large volume of point clouds using only a small percentage of training samples, this has great practical value because the time and labor costs are significantly reduced [5]. To solve this problem, [13,17-19] proposed semi-supervised or supervised classification methods that jointly learn the feature transformation matrix and the classifier. For example, Mei et al. [17] concatenated multiple single-point features to form a high-dimensional feature for each point, and then used the joint constraints of margin, adjacency graph, and labels to train the model on a small portion of samples in a semi-supervised framework.…”
Section: Introduction (mentioning)
confidence: 99%
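The cited works jointly learn a feature transformation and a classifier; as a far simpler stand-in that shows only the training regime they target (few labeled points, many unlabeled ones), here is a sketch using scikit-learn's LabelSpreading. This is an assumed baseline, not the method of [13] or [17-19]:

```python
# Semi-supervised classification with ~5% labeled samples: unlabeled
# points are marked -1 and labels are propagated over a k-NN graph.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading

X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=10, random_state=3)
y_train = y.copy()
hide = np.random.default_rng(3).random(1000) > 0.05   # hide ~95% of labels
y_train[hide] = -1                                    # -1 = unlabeled

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_train)
print("transductive accuracy:", (model.transduction_ == y).mean())
```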
“…Clustering method: this is an unsupervised learning method that heuristically groups points with similar attributes into the same class so as to satisfy a cost function. Representative methods include the classic K-means algorithm [23], the Euclidean distance clustering algorithm [24], mean shift clustering [25-27], hierarchical clustering [28-30], density-based clustering [31,32], and mixed kernel density function clustering [33]. For example, Wu et al. [24] introduced a smooth-threshold constraint into the traditional Euclidean clustering algorithm to prevent over- and/or under-segmentation.…”
Section: Introduction (mentioning)
confidence: 99%
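Two of the clustering approaches named above, K-means and mean shift, can be sketched directly; the synthetic blobs, cluster count, and bandwidth are illustrative assumptions, and the smooth-threshold Euclidean variant of [24] is not reproduced here:

```python
# K-means (fixed k) versus mean shift (k found from a bandwidth) on
# three synthetic 3-D point blobs.
import numpy as np
from sklearn.cluster import KMeans, MeanShift

rng = np.random.default_rng(4)
pts = np.vstack([rng.normal(c, 0.2, size=(100, 3)) for c in (0.0, 2.0, 4.0)])

km = KMeans(n_clusters=3, n_init=10, random_state=4).fit(pts)
ms = MeanShift(bandwidth=1.0).fit(pts)
print("k-means clusters:", np.unique(km.labels_).size)
print("mean-shift clusters:", np.unique(ms.labels_).size)
```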