2009
DOI: 10.1155/2009/134807
A New Information Measure Based on Example-Dependent Misclassification Costs and Its Application in Decision Tree Learning

Abstract: This article describes how misclassification costs given with the individual training objects can be used in the construction of decision trees that make minimal-cost rather than minimal-error class decisions. This is demonstrated by defining modified, cost-dependent probabilities and a new, cost-dependent information measure, and by using a cost-sensitive extension of the CAL5 algorithm for learning decision trees. The cost-dependent information measure ensures the selection of the (loca…
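The abstract only sketches the construction, but its core idea — weighting each training example by its misclassification cost when estimating class probabilities, and then computing an entropy over those cost-weighted probabilities — can be illustrated directly. The sketch below is a minimal Python illustration under the simplest such definition (a class's probability proportional to the summed costs of its examples); the function names and the exact normalization are assumptions for illustration, not the paper's precise definitions.

```python
import math
from collections import defaultdict

def cost_dependent_probabilities(labels, costs):
    """Cost-weighted class probabilities: each training example contributes
    its misclassification cost instead of a unit count. (Assumed form;
    the paper's exact definition may differ.)"""
    cost_per_class = defaultdict(float)
    for label, cost in zip(labels, costs):
        cost_per_class[label] += cost
    total = sum(cost_per_class.values())
    return {label: c / total for label, c in cost_per_class.items()}

def cost_dependent_entropy(labels, costs):
    """Shannon entropy over the cost-weighted probabilities; a tree learner
    minimizing this favours splits that isolate expensive examples."""
    probs = cost_dependent_probabilities(labels, costs)
    return -sum(p * math.log2(p) for p in probs.values() if p > 0)

# Class "b" is rare but expensive to misclassify, so it dominates
# the cost-weighted distribution even though it is a minority class.
labels = ["a", "a", "a", "b"]
costs = [1.0, 1.0, 1.0, 9.0]
print(cost_dependent_probabilities(labels, costs))  # {'a': 0.25, 'b': 0.75}
print(cost_dependent_entropy(labels, costs))        # ~0.811 bits
```

In a CAL5-style tree learner, a split criterion built on such an entropy would favour partitions that separate high-cost examples even when they are rare, which is the behaviour the abstract describes.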

Cited by 2 publications (1 citation statement)
References 12 publications
“…This approximate way of dealing with EDCs suffers the drawbacks of sampling techniques, which can modify the problem by reducing the influence of critical samples and/or emphasizing unimportant instances [20]. Decision trees have also been considered for EDC problems [21], [22], and perceptrons and piecewise linear classifiers were used in [23] with a hybrid learning algorithm that constructs separating hyperplanes for each pair of classes. In general, all these techniques suffer from the limitations of the constrained partition of the input space.…”
Section: Introduction (mentioning)
confidence: 99%