2020
DOI: 10.1016/j.asoc.2020.106250
A one-class classification decision tree based on kernel density estimation

Abstract: One-class Classification (OCC) is an area of machine learning which addresses prediction based on unbalanced datasets. Basically, OCC algorithms achieve training by means of a single class sample, with potentially some additional counter-examples. The current OCC models give satisfaction in terms of performance, but there is an increasing need for the development of interpretable models. In the present work, we propose a one-class model which addresses concerns of both performance and interpretability. Our hyb…

Cited by 29 publications (11 citation statements)
References 57 publications
“…The problem of pocket retrieval thus appears as an instance of a one-class discrimination problem [28]. One-class discrimination is a learning task that typically arises in outlier (anomaly) detection or, more generally, in binary discrimination data mining problems where obtaining examples of one class can be too expensive or daunting, or where examples of one class are largely underrepresented (data imbalance) [29,30]. Different approaches are used in the literature to solve one-class or data imbalance problems.…”
Section: Discussion
confidence: 99%
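The one-class setting described in this excerpt — train on a single class, then flag points that do not resemble it — can be sketched with kernel density estimation, the approach the cited paper builds on. This is a minimal illustration, not the paper's algorithm: the bandwidth, the Gaussian training data, and the 5th-percentile acceptance threshold are all illustrative choices.

```python
# One-class discrimination via KDE: fit a density to the target class
# only, then reject test points whose estimated density is too low.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # target class only

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X_train)

# Accept a point if its log-density exceeds the 5th percentile of the
# training log-densities (an illustrative rejection rate, not a tuned one).
threshold = np.percentile(kde.score_samples(X_train), 5)

X_test = np.array([[0.1, -0.2],   # near the training cloud -> inlier
                   [6.0, 6.0]])   # far from it -> outlier
is_inlier = kde.score_samples(X_test) > threshold
```

No counter-examples are needed at training time, which is exactly what makes this formulation useful when one class is prohibitively expensive to sample.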
“…In Itani et al. [22], kernel density estimation is combined with a decision tree: the estimated density distribution is used as the splitting criterion of the tree, and the boundary width along each feature is determined as the final decision tree structure is formed, yielding a hyper-rectangular discriminant boundary. This method determines the coefficient multiplied by $N_{total}^{-1/5}$ according to the standard deviation and interquartile range of the sample.…”
Section: Fast Decision Algorithm Design
confidence: 99%
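A coefficient built from the sample's standard deviation and interquartile range, multiplied by $N_{total}^{-1/5}$, matches Silverman's rule-of-thumb bandwidth for KDE. The sketch below shows that common rule as an interpretation of the excerpt, not necessarily the paper's exact formula:

```python
# Silverman's rule of thumb for a per-feature KDE bandwidth:
#   h = 0.9 * min(sigma, IQR / 1.34) * N ** (-1/5)
# The robust scale min(sigma, IQR/1.34) guards against heavy tails.
import numpy as np

def silverman_bandwidth(x):
    """Bandwidth for one feature from Silverman's rule of thumb."""
    x = np.asarray(x, dtype=float)
    n = x.size
    sigma = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(sigma, iqr / 1.34) * n ** (-1 / 5)

rng = np.random.default_rng(0)
sample = rng.normal(size=1000)  # sigma ~ 1, IQR/1.34 ~ 1
h = silverman_bandwidth(sample)  # ~ 0.9 * 1000**(-0.2), about 0.23
```

Because the bandwidth is computed independently per feature, it translates directly into the per-feature boundary widths the excerpt describes.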
“…Different types of classification methods, such as artificial neural networks [35,36], SVMs [37], random forests [17], decision trees [38], and k-nearest neighbors [39], can be used to classify the CTI functioning/failed state from monitoring signals. In this work, SVM classifiers have been adopted, since they offer high classification performance, low computational cost, and the ability to handle imbalanced datasets through cost-sensitive adjustment [26,40].…”
Section: Classifier
confidence: 99%
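The cost-sensitive adjustment mentioned in this excerpt can be sketched with scikit-learn's `SVC`, whose `class_weight` parameter scales the misclassification penalty per class. The data, class labels, and separation below are illustrative, not from the cited work:

```python
# Cost-sensitive SVM for an imbalanced two-class problem: errors on the
# rare class are penalized more via class_weight="balanced".
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 200 "functioning" samples vs. 20 "failed" samples (synthetic).
X_maj = rng.normal(loc=0.0, size=(200, 2))
X_min = rng.normal(loc=3.0, size=(20, 2))
X = np.vstack([X_maj, X_min])
y = np.array([0] * 200 + [1] * 20)

# "balanced" weights each class by n_samples / (n_classes * n_class_i),
# so the effective C is larger for the minority class.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
pred = clf.predict([[3.0, 3.0], [0.0, 0.0]])
```

Without the weighting, an imbalanced fit tends to shrink the minority region; the balanced penalty counteracts that bias without resampling the data.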