1996
DOI: 10.1007/bf00058656

Unifying instance-based and rule-based induction

Abstract: Several well-developed approaches to inductive learning now exist, but each has specific limitations that are hard to overcome. Multi-strategy learning attempts to tackle this problem by combining multiple methods in one algorithm. This article describes a unification of two widely-used empirical approaches: rule induction and instance-based learning. In the new algorithm, instances are treated as maximally specific rules, and classification is performed using a best-match strategy. Rules are learned…
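The abstract's central idea, treating each training instance as a maximally specific rule and classifying by best match, can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's actual RISE algorithm: the rule representation, the attribute-mismatch distance, and the example data are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's RISE implementation): instances as
# maximally specific rules, classification via a best-match strategy.

def make_rule(instance, label):
    """Treat an instance as a maximally specific rule: one condition per attribute."""
    return {"conditions": dict(instance), "label": label}

def distance(rule, example):
    """Number of rule conditions the example fails to satisfy."""
    return sum(1 for attr, value in rule["conditions"].items()
               if example.get(attr) != value)

def classify(rules, example):
    """Best-match strategy: the rule at minimum distance wins (ties: first found)."""
    best = min(rules, key=lambda r: distance(r, example))
    return best["label"]

train = [({"color": "red", "shape": "round"}, "apple"),
         ({"color": "yellow", "shape": "long"}, "banana")]
rules = [make_rule(x, y) for x, y in train]
print(classify(rules, {"color": "red", "shape": "oval"}))  # -> apple
```

With maximally specific rules this reduces to nearest-neighbor classification under an attribute-overlap metric; the paper's contribution is that the same best-match machinery keeps working as rules are generalized away from single instances.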

Cited by 119 publications (131 citation statements)
References 40 publications (67 reference statements)
“…The first 2/3 of the randomized data was reserved for training and/or cross-validation. For each algorithm, we report the average test set performance and sample standard deviation on 10 randomizations in each domain (Bay, 1999;De Groot, 1986;Domingos, 1996;Lim, Loh, & Shih, 2000). The best average test set performance was underlined and denoted in bold face for each domain.…”
Section: Description of the Reference Algorithms
confidence: 99%
“…Considering the Average Accuracy (AA) and Average Ranking (AR) over all domains (Bay, 1999;De Groot, 1986;Domingos, 1996), the RBF SVM gets the best average accuracy and the RBF LS-SVM yields the best average rank. There is no significant difference between the performance of both classifiers.…”
Section: LS-SVM
confidence: 99%
“…However, this algorithm stores the entire data set in memory. Domingos also proposes an integrated technique, the RISE algorithm, combining instance-based learning and rule induction [19]. Under this algorithm, instances are treated as rules and data reduction is achieved using specific rules formed by generalization of instances.…”
Section: Motivation
confidence: 99%
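The citation above highlights data reduction through generalization: a rule can absorb same-class instances by dropping the conditions they violate, so one rule replaces several stored instances. The sketch below is a hypothetical toy in the spirit of that idea, not RISE itself; the dict-based rule representation and the example data are assumptions.

```python
# Hypothetical sketch of generalization-driven data reduction: a rule
# minimally generalizes to cover a same-class instance by keeping only
# the conditions that instance satisfies.

def make_rule(instance, label):
    """Start from a maximally specific rule (one condition per attribute)."""
    return {"conditions": dict(instance), "label": label}

def generalize(rule, instance):
    """Keep only the conditions the new instance satisfies, so the rule covers it."""
    kept = {a: v for a, v in rule["conditions"].items() if instance.get(a) == v}
    return {"conditions": kept, "label": rule["label"]}

rule = make_rule({"color": "red", "shape": "round", "size": "small"}, "apple")
rule = generalize(rule, {"color": "green", "shape": "round", "size": "small"})
print(rule["conditions"])  # {'shape': 'round', 'size': 'small'}
```

After one generalization step the single rule covers both apples, so only one structure needs to be stored instead of two instances, which is the data-reduction effect the citation refers to.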
“…conducts the filtering. Unlike PGF1, which filters on the original instances, PGF2 performs filtering on the prototype set. The prototype set usually contains intermediate prototypes and original instances.…”
Section: The PGF Algorithm
confidence: 99%
“…Originally, the term was most often used to refer to the combination of analytical and inductive methods, but the combination of inductive methods having different biases is consistent with the term "multistrategy." This has been called "empirical multistrategy learning" (Domingos, 1996). We have also used the term "multistrategy learning" to refer to our framework in which individual learners are used as input to a combining method, which treats them as black boxes and attempts to model their behavior in pursuit of improved performance.…”
Section: Related Work
confidence: 99%