1994
DOI: 10.1016/0004-3702(94)90084-1

Learning Boolean concepts in the presence of many irrelevant features

Cited by 423 publications (181 citation statements) · References 10 publications
“…We chose three representative subset evaluation measures in combination with the SF search engine. One, denoted SF_W, uses a target learning algorithm to estimate the worth of gene subsets; the other two are subset search algorithms that exploit sequential forward search and use a correlation measure (a variation of the CFS Correlation-based Feature Selection algorithm (Hall, 2000)) or a consistency measure (a variation of FOCUS (Almuallim and Dietterich, 1994)) to guide the search, denoted CFS_SF and FOCUS_SF, respectively (both used in (Yu and Liu, 2004a)). …”
Section: Results · mentioning · confidence: 99%
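The SF (sequential forward) search engine named in this excerpt greedily grows a subset one feature at a time under whichever evaluation measure it wraps. Below is a minimal Python sketch of that loop, assuming a user-supplied `evaluate(subset)` merit function; the function name and the simple stopping rule are illustrative, not taken from Yu and Liu's implementation:

```python
# Illustrative sketch (not the cited authors' code): sequential forward
# search that greedily grows a feature subset under a pluggable evaluation
# measure, as in the CFS_SF / FOCUS_SF variants described above. `evaluate`
# stands in for a correlation- or consistency-based merit score.

def sequential_forward_search(n_features, evaluate, max_size=None):
    """Greedily add the feature that most improves evaluate(subset)."""
    selected = []
    remaining = set(range(n_features))
    best_score = float("-inf")
    while remaining and (max_size is None or len(selected) < max_size):
        # Score every one-feature extension of the current subset.
        candidate, score = max(
            ((f, evaluate(selected + [f])) for f in remaining),
            key=lambda pair: pair[1],
        )
        if score <= best_score:  # no extension improves the merit: stop
            break
        selected.append(candidate)
        remaining.remove(candidate)
        best_score = score
    return selected, best_score
```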
“…In this section we briefly describe the use of relevance in FSS. In the filter category we look at RELIEFF (Kira & Rendell, 1992), FOCUS (Almuallim & Dietterich, 1991; Almuallim & Dietterich, 1994) and Schlimmer's approach (Schlimmer, 1993), and in the wrapper category we look at the work by John, Kohavi, and Pfleger (1994).…”
Section: The Use of Relevance in FSS · mentioning · confidence: 99%
“…The optimal selection can only be done by testing all possible sets of M features chosen from N. To deal with the problem of feature selection, many methods have been proposed. In general, they can be classified into two categories: (1) the filter approach, which serves as a filter to sieve out the irrelevant and/or redundant features without taking the induction algorithm into account [1][5][10]; and (2) the wrapper approach, which uses the induction algorithm itself as a black box during attribute selection to select a good feature subset that improves the performance, i.e. the accuracy, of the induction algorithm [4][8][11][16].…”
Section: Introduction · mentioning · confidence: 99%
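To make the filter/wrapper contrast in this excerpt concrete, here is a small sketch in Python: the filter branch ranks features with a learner-independent score, while the wrapper branch scores a candidate subset by the cross-validated accuracy of the induction algorithm itself. scikit-learn, the mutual-information filter, and the decision-tree learner are stand-ins chosen for brevity, not the specific methods of references [1]-[16]:

```python
# Sketch of the two categories: a filter scores features without consulting
# the induction algorithm; a wrapper scores subsets by the learner's accuracy.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=20, n_informative=4,
                           random_state=0)

# Filter: rank features by a learner-independent score (here mutual
# information) and keep the top k, ignoring the induction algorithm entirely.
filter_scores = mutual_info_classif(X, y, random_state=0)
filter_top5 = np.argsort(filter_scores)[-5:]

# Wrapper: evaluate a candidate subset by the cross-validated accuracy of the
# induction algorithm itself (a decision tree as a stand-in learner).
def wrapper_score(subset):
    return cross_val_score(DecisionTreeClassifier(random_state=0),
                           X[:, list(subset)], y, cv=5).mean()

print("filter top-5:", sorted(filter_top5),
      "wrapper score:", wrapper_score(filter_top5))
```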
“…Almuallim and Dietterich MIG: In order to improve on the FOCUS algorithm, Almuallim and Dietterich in [1] proposed three heuristics for the MIN-FEATURES bias. The Mutual-Information-Greedy algorithm uses an entropy measure to evaluate a subset as a whole.…”
mentioning · confidence: 99%
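As a rough illustration of the Mutual-Information-Greedy idea for Boolean data, here is a sketch built from the description above, not from the paper's pseudocode: at each step, add the feature whose inclusion most reduces the entropy of the class labels conditioned on the joint values of the features chosen so far. All names are illustrative.

```python
# Minimal sketch of a Mutual-Information-Greedy-style heuristic: greedily
# pick features that minimize the class entropy conditioned on the subset.

import math
from collections import Counter, defaultdict

def conditional_entropy(examples, labels, subset):
    """Entropy of labels given the joint value of the features in subset."""
    groups = defaultdict(list)
    for x, y in zip(examples, labels):
        groups[tuple(x[f] for f in subset)].append(y)
    n = len(labels)
    h = 0.0
    for ys in groups.values():
        p_group = len(ys) / n
        counts = Counter(ys)
        h_group = -sum((c / len(ys)) * math.log2(c / len(ys))
                       for c in counts.values())
        h += p_group * h_group
    return h

def mig(examples, labels, k):
    """Select k features, each time minimizing conditional class entropy."""
    selected, remaining = [], set(range(len(examples[0])))
    for _ in range(k):
        best = min(remaining,
                   key=lambda f: conditional_entropy(examples, labels,
                                                     selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Minimizing the conditional entropy of the labels is equivalent to maximizing the mutual information between the chosen subset and the class, which is what the "entropy measure to evaluate a subset as a whole" in the excerpt refers to.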