2014
DOI: 10.1007/978-3-662-45620-0_11
Hubness-Aware Classification, Instance Selection and Feature Construction: Survey and Extensions to Time-Series

Citing publications: 2015–2021
Cited by 34 publications (25 citation statements); references 38 publications.
“…The presentation of NHBNN and self-training is based on [21] and [11], respectively. Subsequently, we describe our proposed semi-supervised approach in Section 2.3, which is followed by the methods used for the experimental evaluation in Section 2.4.…”
Section: Methods
confidence: 99%
“…Even though (k, C)-occurrences are highly correlated, as shown in [21] and [23], NHBNN offers an improvement over the basic kNN. This is in accordance with other results from the literature stating that Naive Bayes can deliver good results even when the independence assumption is strongly violated [16].…”
Section: NHBNN: Naive Hubness Bayesian k-Nearest Neighbor
confidence: 99%
“…Informally, this estimate can be interpreted as follows: we consider m additional pseudo-instances from each class and we assume that x_i appears as one of the k nearest neighbors of the pseudo-instances from class C. We use m = 1 in our experiments. Even though k-occurrences are highly correlated, as shown in [19] and [21], NHBNN offers an improvement over the basic kNN. This is in accordance with other results from the literature stating that Naive Bayes can deliver good results even when the independence assumption is strongly violated [15].…”
Section: NHBNN: Naive Hubness Bayesian k-Nearest Neighbor
confidence: 99%
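
The excerpt above describes the smoothing step of NHBNN (Naive Hubness Bayesian k-Nearest Neighbor). Below is a minimal sketch of the idea as stated, assuming Euclidean distance and NumPy; the function names are hypothetical illustrations rather than the authors' implementation. Class-conditional k-occurrence counts N_{k,C}(x_i) are collected on the training set and then combined in naive-Bayes fashion, with m pseudo-instances per class (m = 1, as in the excerpt) keeping every estimated probability nonzero.

```python
import numpy as np

def nhbnn_fit(X, y, k):
    """Count class-conditional k-occurrences N_{k,C}(x_i): how often
    instance x_i appears among the k nearest neighbors of training
    instances labeled with class C."""
    n = len(X)
    classes = np.unique(y)                      # sorted class labels
    N_kC = np.zeros((n, len(classes)))
    for j in range(n):
        d = np.linalg.norm(X - X[j], axis=1)    # brute-force distances
        d[j] = np.inf                           # exclude x_j itself
        nn = np.argsort(d)[:k]                  # k nearest neighbors of x_j
        c = np.searchsorted(classes, y[j])
        N_kC[nn, c] += 1                        # neighbors occurred for class c
    return classes, N_kC

def nhbnn_predict(x, X, y, classes, N_kC, k, m=1):
    """Naive-Bayes combination of the k-occurrence probabilities of the
    query's k nearest neighbors, smoothed by m pseudo-instances per class."""
    d = np.linalg.norm(X - x, axis=1)
    nn = np.argsort(d)[:k]
    log_post = np.empty(len(classes))
    for ci, c in enumerate(classes):
        n_c = np.sum(y == c)
        log_post[ci] = np.log(n_c / len(y))     # class prior P(C)
        for i in nn:
            # smoothed estimate of P(x_i occurs as a k-NN | C)
            log_post[ci] += np.log((N_kC[i, ci] + m) / (n_c + m))
    return classes[np.argmax(log_post)]
```

A production version would replace the brute-force O(n^2) neighbor search with an index structure; the smoothing itself would be unchanged.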
“…[4], [11], [12], [14], [20], [22], [23]; see [19] for a survey. Hubs were observed in gene expression data [8], [14], and hubness was brought into relation with the performance of the SUCCESS semi-supervised time-series classifier [9]. However, none of the aforementioned works focused on hubness-aware classifiers in semi-supervised mode, i.e., when the classifier is allowed to learn both from labeled and unlabeled instances.…”
Section: Introduction
confidence: 99%
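
The semi-supervised mode described in this excerpt, learning from both labeled and unlabeled instances, is commonly realized through self-training: the classifier labels the unlabeled pool and re-trains on its own most confident predictions. The following is a generic sketch of such a loop, not the specific algorithm of the citing paper; the callables `fit` and `predict_proba` and the confidence threshold are assumptions made for illustration, and labels are assumed to be integer class indices matching the columns of `predict_proba`.

```python
import numpy as np

def self_train(fit, predict_proba, X_lab, y_lab, X_unlab,
               threshold=0.95, max_rounds=10):
    """Generic self-training loop: repeatedly fit on the labeled pool,
    then move confidently classified unlabeled instances into it.
    Assumes y_lab holds integer class indices 0..n_classes-1."""
    X_pool, y_pool = X_lab.copy(), y_lab.copy()
    remaining = X_unlab.copy()
    for _ in range(max_rounds):
        if len(remaining) == 0:
            break
        model = fit(X_pool, y_pool)
        proba = predict_proba(model, remaining)  # shape: (n, n_classes)
        conf = proba.max(axis=1)
        sure = conf >= threshold                 # confident predictions only
        if not sure.any():
            break                                # nothing confident enough
        X_pool = np.vstack([X_pool, remaining[sure]])
        y_pool = np.concatenate([y_pool, proba.argmax(axis=1)[sure]])
        remaining = remaining[~sure]
    return fit(X_pool, y_pool)                   # final model on enlarged pool
```

Pairing such a loop with a hubness-aware base classifier like NHBNN is the combination the excerpt notes had not been explored in the earlier works.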