1996
DOI: 10.1007/bf00115299

Using the minimum description length principle to infer reduced ordered decision graphs

Abstract: We propose an algorithm for the inference of decision graphs from a set of labeled instances. In particular, we propose to infer decision graphs where the variables can only be tested in accordance with a given order and no redundant nodes exist. This type of graph, the reduced ordered decision graph, can be used as a canonical representation of Boolean functions and can be manipulated using algorithms developed for that purpose. This work proposes a local optimization algorithm that generates compact de…
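The abstract's two defining properties of a reduced ordered decision graph — a fixed variable order and no redundant nodes — can be illustrated with a minimal sketch. This is not the paper's algorithm; the tuple-based node representation and function names are assumptions for illustration only.

```python
# Hedged sketch: reducing an ordered decision diagram to canonical form.
# A node is either a terminal ('0'/'1') or a tuple (var, low_child, high_child);
# variables are assumed to already respect a fixed test order.

def reduce_odd(node, cache=None):
    """Merge structurally identical subgraphs and remove redundant
    tests whose two branches are equal."""
    if cache is None:
        cache = {}
    if node in ('0', '1'):            # terminal node
        return node
    var, lo, hi = node
    lo = reduce_odd(lo, cache)
    hi = reduce_odd(hi, cache)
    if lo == hi:                      # redundant test: both branches agree
        return lo
    key = (var, lo, hi)
    return cache.setdefault(key, key)  # share equal subgraphs

# Example: x1 OR x2, built with a redundant test on x2 in the high branch.
f = ('x1', ('x2', '0', '1'), ('x2', '1', '1'))
r = reduce_odd(f)
print(r)  # ('x1', ('x2', '0', '1'), '1')
```

Because the reduction is canonical for a fixed variable order, two Boolean functions are equal exactly when their reduced graphs are structurally identical — the property the abstract exploits.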

Cited by 15 publications (21 citation statements)
References 27 publications
“…The minimum description length principle (already investigated in the context of decision diagrams in [17]) might become a valuable tool for increasing the generalization abilities of decision diagrams, while at the same time reducing their size.…”
Section: Discussion
confidence: 99%
“…A method for top-down induction of RODDs is presented by Kohavi in [16]. In [17], Oliveira and Sangiovanni-Vincentelli show an interesting study of a pruning algorithm used for generalization. The suitability of RODDs for visualization of simple decision rules is studied in [18].…”
Section: Decision Diagrams
confidence: 99%
“…Vapnik (1999) provided a formal proof to justify the principle for classification problems. Although much work remains to be done on effective constructive procedures under the principle (Cherkassky and Mulier 1998), it has been applied for machine learning (Yamanishi 1992;Zemel 1993;Pfahringer 1995;Rissanen and Yu 1996), model selection (Hansen and Yu 1998), coding decision trees and graphs (Wallace and Patrick 1993;Oliveira and Sangiovanni-Vincentelli 1995), and even for classification of protein structure (Edgoose, Allison, and Dowe 1998).…”
Section: Introduction
confidence: 99%
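The citing statement above refers to the two-part MDL criterion that the cited works apply to classifiers: the total description length is the bits needed to encode the model plus the bits needed to encode the training labels given the model. A minimal sketch, assuming an illustrative coding scheme (not the specific codes used in any of the cited papers):

```python
import math

def description_length(model_bits, errors, n):
    """Two-part MDL code: model cost in bits, plus log2(C(n, errors))
    bits to identify which of the n training labels the model misclassifies.
    The exception-coding scheme here is an illustrative assumption."""
    data_bits = math.log2(math.comb(n, errors)) if errors else 0.0
    return model_bits + data_bits

# A large error-free model vs. a smaller model with a few errors on
# 1000 training instances: MDL can prefer the smaller, imperfect model.
big = description_length(120, 0, 1000)
small = description_length(40, 5, 1000)
print(big)    # 120.0
print(small)  # about 82.9 bits: smaller total despite 5 errors
```

The design choice this illustrates: MDL trades model complexity against misclassified training instances in a single unit (bits), which is why the citing works use it both for pruning decision trees/graphs and for model selection generally.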
“…Primary focuses of these researches are controlling tree size (Kim and Koehler, 1994; Auer et al, 1995; Kalkanis, 1993; Esposito et al, 1997; Smyth et al, 1995), modifying test space (Fayyad and Irani, 1992; Utgoff and Brodley, 1990; Heath et al, 1993b; Zheng, 1995), modifying test search (Quinlan, 1986; Dietterich et al, 1996; Utgoff and Clouse, 1996; Pazzani et al, 1994), database restrictions (Wirth and Catlett, 1988; John, 1995) and alternative data structures (Oliver, 1993; Oliveira and Sangiovanni-Vincentelli, 1995; Kohavi and Li, 1995). …”
confidence: 99%