2016
DOI: 10.1016/j.patcog.2015.10.014

Online active learning of decision trees with evidential data

Abstract: Learning from uncertain data has been drawing increasing attention in recent years. In this paper, we propose a tree induction approach which can not only handle uncertain data, but also reduce epistemic uncertainty by querying the most valuable uncertain instances within the learning procedure. We extend classical decision trees to the framework of belief functions to deal with a variety of uncertainties in the data. In particular, we use entropy intervals extracted from the evidential likelihood …

Cited by 72 publications (33 citation statements)
References 28 publications
“…Information gain is itself judged by a measure called entropy, which is used to measure the impurity of a set of training objects. For a data set S, the entropy formula is given in Equation 1, formulated by Claude Shannon and also used by [28, 35-38]. Entropy helps the decision tree determine how informative a node is.…”
Section: Algorithm 2: ID3 Working Algorithm of Decision Tree
confidence: 99%
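
For reference, the entropy formula this statement attributes to Claude Shannon is, in its standard form for a set $S$ of training objects over $c$ classes with class proportions $p_i$ (this restates the textbook definition, not the citing paper's exact Equation 1):

```latex
H(S) = -\sum_{i=1}^{c} p_i \log_2 p_i ,
\qquad
\mathrm{Gain}(S, A) = H(S) - \sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|}\, H(S_v)
```

ID3 grows the tree by choosing, at each node, the attribute $A$ that maximizes $\mathrm{Gain}(S, A)$: an entropy of $0$ means a pure node, and the maximum $\log_2 c$ means a maximally uninformative one.
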
“…A second category of applications concerns the problem of learning from uncertain data. For instance, Sutton-Charani et al. [44,45] and Ma et al. [30] applied the E²M algorithm to decision tree inference from data with uncertain attributes, while the problem of clustering data with fuzzy attributes was considered by Quost and Denoeux in [38]. In [48], the authors considered the modeling of lifetime data using mixture models and progressively censored observations.…”
Section: E²M Algorithm
confidence: 99%
“…Such a procedure, called the Evidential EM (E²M) algorithm, was introduced in [12]; it extends the Expectation-Maximization (EM) algorithm [10] by allowing for maximum-likelihood estimation from partially missing data. The E²M algorithm has been applied to a variety of tasks and models, including partially supervised Independent Factor Analysis [5], Hidden Markov Models with partially hidden states [39], fuzzy data clustering using Gaussian mixture models [38], decision trees [44,45,30], and mixture models with progressively censored data [48].…”
Section: Introduction
confidence: 99%
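
As background for the two statements above, here is a minimal sketch of the classical EM baseline that E²M generalizes, for a two-component 1-D Gaussian mixture (the function name `em_gmm_1d` and the synthetic data are illustrative assumptions, not the cited papers' code). E²M replaces the E-step's expectation over fully observed data with one taken with respect to an evidential (belief-function) likelihood; this sketch shows only the crisp-data case.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100, seed=0):
    """Classical EM for a two-component 1-D Gaussian mixture.

    Illustrative sketch only: E2M generalizes the E-step below to
    partially missing / evidential data; here the data are crisp.
    """
    rng = np.random.default_rng(seed)
    # Initialize mixing weight, means, and variances.
    pi = 0.5
    mu = rng.choice(x, size=2, replace=False)
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point.
        # The shared 1/sqrt(2*pi) Gaussian constant cancels in the ratio.
        p1 = pi * np.exp(-0.5 * (x - mu[0]) ** 2 / var[0]) / np.sqrt(var[0])
        p2 = (1 - pi) * np.exp(-0.5 * (x - mu[1]) ** 2 / var[1]) / np.sqrt(var[1])
        r = p1 / (p1 + p2)
        # M-step: maximum-likelihood updates given the responsibilities.
        pi = r.mean()
        mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
        var = np.array([np.average((x - mu[0]) ** 2, weights=r),
                        np.average((x - mu[1]) ** 2, weights=1 - r)])
    return pi, mu, var

# Illustrative synthetic data: two well-separated Gaussian clusters.
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 0.5, 300)])
print(em_gmm_1d(x))
```
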
“…Recent work has highlighted that intrinsic uncertainty related to learning, as well as uncertainty due to imprecise data, may be jointly managed inside the decision tree by defining entropy intervals from the evidential likelihood [14].…”
Section: B. Fusion
confidence: 99%
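
To make the "entropy intervals" idea above concrete, here is a minimal sketch for the simplest case (an illustrative binary-class construction, not the paper's evidential-likelihood derivation): when a node's positive-class proportion is only known to lie in an interval [lo, hi], the concavity of binary entropy yields tight bounds.

```python
import numpy as np

def binary_entropy(p):
    """Shannon entropy of a Bernoulli(p) class proportion, in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def entropy_interval(lo, hi):
    """Tight [min, max] of binary entropy for a proportion in [lo, hi].

    Binary entropy is concave with its maximum at p = 0.5, so the
    maximum over an interval is 1 bit if the interval contains 0.5 and
    is otherwise attained at the endpoint nearest 0.5; the minimum is
    always attained at an endpoint.
    """
    h_lo, h_hi = binary_entropy(lo), binary_entropy(hi)
    h_max = 1.0 if lo <= 0.5 <= hi else max(h_lo, h_hi)
    return min(h_lo, h_hi), h_max

# Example: a node whose positive-class rate is only known to be in [0.6, 0.9].
print(entropy_interval(0.6, 0.9))  # -> (~0.469, ~0.971)
```

In the evidential setting of the paper, the interval on the class proportion would itself be derived from the evidential likelihood of the uncertain instances; here it is simply given as input.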