2015 · DOI: 10.1007/s00521-015-2042-5

Supervised learning of sparse context reconstruction coefficients for data representation and classification

Abstract: The context of a data point, usually defined as the other data points in a data set, has been found to play an important role in data representation and classification. In this paper, we study the problem of using the context of a data point for its classification. Our work is inspired by the observation that only very few data points in the context of a data point are actually critical for its representation and classification. We propose to represent a data point as the sparse linear combination of its…
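
The abstract describes representing each point as a sparse linear combination of its context, i.e., the other points in the data set. As a rough illustration of that idea only, the sketch below reconstructs each point from the others under an L1 (Lasso) sparsity penalty; the function name, the alpha value, and the use of scikit-learn's Lasso solver are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: reconstruct each point from its context with a sparse
# (L1-regularized) linear combination. `alpha` and the Lasso solver are
# illustrative choices, not taken from the paper.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_context_coefficients(X, alpha=0.1):
    """X: (n_samples, n_features) array. Returns an (n, n) matrix S whose
    row i holds the sparse weights of the other points used to reconstruct
    point i (the diagonal entry S[i, i] stays 0)."""
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        context = np.delete(np.arange(n), i)  # all points except point i
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        # Columns of the design matrix are the context points; the target
        # is point i itself: min ||x_i - X_c^T s||^2 + alpha * ||s||_1.
        lasso.fit(X[context].T, X[i])
        S[i, context] = lasso.coef_
    return S
```

A larger alpha drives more coefficients to exactly zero, matching the abstract's observation that only a few context points matter for each reconstruction.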

Cited by 22 publications (19 citation statements)
References 48 publications (48 reference statements)

Citation statements:

“…k-NNs are also most popular for classifying instances based on the context of data points through majority voting (X. Liu, Wang, Yin, Edwards, & Xu, 2017). This method is highly suitable for small datasets.…”
Section: K-Nearest Neighbor (mentioning, confidence: 99%)
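
The statement above refers to k-NN classification by majority voting: a query is labeled with the most common class among its k nearest training points. A minimal sketch of that rule follows; k and the Euclidean metric are illustrative choices, not taken from the cited work.

```python
# Hedged sketch of k-NN majority voting over a point's context.
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=5):
    """X_train, y_train: NumPy arrays of training points and labels.
    Returns the majority label among the k nearest neighbors of x_query."""
    dists = np.linalg.norm(X_train - x_query, axis=1)  # Euclidean distances
    nearest = np.argsort(dists)[:k]                    # indices of k closest
    return Counter(y_train[nearest]).most_common(1)[0][0]
```
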
“…The experiments over some benchmark databases show its advantages over other similarity learning methods. In the future, we will investigate using other similarity functions in place of the linear function, such as Bayesian networks [12]–[14], and will also develop novel algorithms that maximize the top precision measure for other machine learning problems and applications besides similarity learning, such as importance sampling [15]–[17], portfolio choices [18], [19], multimedia technology [20]–[29], computational biology [30]–[34], big data processing [35]–[39], computer vision [40]–[53], and information security [54]–[56]…”
Section: Discussion (mentioning, confidence: 99%)

“…Majority voting [28] is a commonly used combination technique. The ensemble classifier predicts the class of an instance by taking the majority vote of the base classifiers [29, 30]. The classification phase employs only feed-forward computation based on (1) and (2).…”
Section: Algorithm Design (mentioning, confidence: 99%)
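
The combination rule described above (each base classifier votes, the majority class wins) can be sketched as below; the assumption that each base classifier exposes a `predict` method returning a single hashable label is for illustration and is not taken from the cited paper.

```python
# Hedged sketch of ensemble classification by majority voting.
from collections import Counter

def ensemble_predict(base_classifiers, x):
    """Predict the class of instance x by majority vote over base classifiers.
    Each classifier is assumed to return one hashable label from .predict(x)."""
    votes = [clf.predict(x) for clf in base_classifiers]
    return Counter(votes).most_common(1)[0][0]
```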