1999
DOI: 10.1007/bfb0095128
Learning from inconsistent and noisy data: The AQ18 approach

Abstract: In concept learning or data mining tasks, the learner is typically faced with a choice of many possible hypotheses characterizing the data. If one can assume that the training data are noise-free, then the generated hypothesis should be complete and consistent with regard to the data. In real-world problems, however, data are often noisy, and an insistence on full completeness and consistency is no longer valid. The problem then is to determine a hypothesis that represents the "best" trade-off between complete…

Cited by 36 publications (24 citation statements)
References 6 publications
“…Some decision trees learners are CART [7], ID3 [67], C4.5 [65], T1 [41], and C5.0 [73]. Example rule learners are the AQ family of algorithms [42], [56], INDUCE [30], FOIL [68], REP [24], C4.5 rules [65], IREP [37], RISE [31], RIPPER [25], [27], DiVS [75], BEXA [77], DLG [80], SLIPPER [23], LAD [5], LERILS [14], and IREP++ [29]. Hybrid learners are represented by the CN2 [21], [22], and CLIP family of algorithms [16], [19].…”
Section: State-of-the-art Rule Induction and Decision Trees
confidence: 99%
“…In Machine Learning mode, LEM1 employs an AQ-type learner (specifically, AQ15; Wnek et al., 1995; Kaufman & Michalski, 1999). In Darwinian Evolution mode, LEM1 uses a simple genetic algorithm, GA2 (Section 3.4).…”
Section: An Overview
confidence: 99%
“…At present, we are in the process of completing and experimenting with a new system, LEM2. In Machine Learning mode, LEM2 uses the AQ18 learner (an enhancement of AQ15c; Kaufman & Michalski, 1999), and in Darwinian Evolution mode uses an evolutionary algorithm, EV. The following subsections briefly describe the AQ learning process and the evolutionary algorithms used in the experiments.…”
Section: An Overview
confidence: 99%
“…Discretization is often performed prior to the learning process and has played an important role in data mining and knowledge discovery. For example, many classification algorithms such as AQ [1], CLIP [2], and CN2 [3] are designed only for categorical data; therefore, numerical data are usually first discretized before being processed by these classification algorithms. Assume A is one of the continuous attributes of a dataset; A can be discretized into n intervals as…”
Section: Introduction
confidence: 99%
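The excerpt above describes discretizing a continuous attribute A into n intervals before feeding the data to rule learners such as AQ, CLIP, or CN2. As an illustration only, here is a minimal equal-width binning sketch; the cited algorithms and papers may use other discretization schemes (e.g. entropy-based), and the function name and data are hypothetical:

```python
# Minimal sketch of equal-width discretization (not from the cited paper):
# map each continuous value of an attribute A to one of n_bins interval
# indices, so a symbolic rule learner can treat A as a categorical attribute.

def discretize_equal_width(values, n_bins):
    """Map each continuous value to an interval index in [0, n_bins - 1]."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against hi == lo
    # Clamp to n_bins - 1 so the maximum value lands in the last interval.
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

# Hypothetical attribute values for illustration:
temperatures = [12.5, 18.0, 21.3, 25.9, 30.1]
print(discretize_equal_width(temperatures, 3))  # -> [0, 0, 1, 2, 2]
```

Each interval index can then serve as one of the n discrete values of A in the induced rules.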