2013
DOI: 10.1016/j.camwa.2013.06.031

Using reinforcement learning to find an optimal set of features

Cited by 42 publications (25 citation statements)
References 7 publications
“…This type of learning has a lot of potential for effective feature selection in the subspace of features. Feature selection can be performed through single-agent [40,41] or multi-agent [42] decision processes. In a single-agent process, only one agent decides on the selection or deselection of features, resulting in a large action space and the risk of getting stuck in a locally optimal solution.…”
Section: Reinforcement Learning
Mentioning confidence: 99%
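
The single-agent formulation described in the excerpt above can be illustrated with a minimal sketch: one epsilon-greedy agent toggles a single feature per step and receives held-out classification accuracy as its reward. The dataset, estimator, and hyperparameters below are illustrative assumptions, not the method of the cited paper.

```python
# Minimal single-agent feature-selection sketch (illustrative only).
# One agent toggles features in or out; reward is cross-validated
# accuracy. The action space has N toggle actions, but the underlying
# state space is still all 2^N subsets, hence the local-optimum risk.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]

def reward(mask):
    """Reward of a feature subset: mean 3-fold CV accuracy (0 if empty)."""
    if not mask.any():
        return 0.0
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Q[i] estimates the value of toggling feature i -- a crude bandit-style
# stand-in for a full value function over subsets.
Q = np.zeros(n_features)
counts = np.zeros(n_features)
mask = rng.random(n_features) < 0.5          # random initial subset
best_mask, best_r = mask.copy(), reward(mask)

for step in range(100):
    # epsilon-greedy choice of which feature to toggle
    a = rng.integers(n_features) if rng.random() < 0.2 else int(np.argmax(Q))
    mask[a] = ~mask[a]
    r = reward(mask)
    counts[a] += 1
    Q[a] += (r - Q[a]) / counts[a]           # incremental mean update
    if r > best_r:
        best_mask, best_r = mask.copy(), r

print(f"best accuracy {best_r:.3f} with {int(best_mask.sum())} features")
```
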
“…Some other score functions include statistical scores (variance, t-score, chi-squared score, Gini index (Gini, 1912)), similarity scores (Laplacian score (He et al., 2005), SPEC (Zhao and Liu, 2007), Fisher score (Hart et al., 2000), Trace Ratio (Nie et al., 2008)), and information-theoretical scores (Mutual Information (Battiti, 1994), MRMR (Peng et al., 2005), CI (Lin and Tang, 2006), JMI (Yang and Moody, 1999), CMI (Vidal-Naquet and Ullman, 2003)). Despite their computational efficiency (Fard et al., 2013), the variables selected by filtering methods are non-optimal, since the filtering is done in the preprocessing step and is independent of the main task.…”
Section: Introduction
Mentioning confidence: 99%
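
The filter paradigm this excerpt criticizes can be shown in a few lines: scores are computed once in preprocessing, without consulting the model trained afterwards. The sketch below uses two of the listed score families via scikit-learn; the dataset and the choice of k are illustrative assumptions.

```python
# Filter-style scoring: features are ranked before and independently of
# the downstream task, which is exactly why the selection can be
# non-optimal for that task.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)

# Information-theoretical score: mutual information with the label.
mi = mutual_info_classif(X, y, random_state=0)
print("top 5 by mutual information:", np.argsort(mi)[::-1][:5])

# Statistical score: chi-squared (requires non-negative features).
selector = SelectKBest(chi2, k=5).fit(X, y)
print("top 5 by chi-squared:", np.flatnonzero(selector.get_support()))
```
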
“…Numerous studies have explored ways in which an agent is used to select a subset of data characteristics [43]. Unfortunately, this method needs the agent to decide whether to include or exclude each characteristic [44]. This results in exponential growth of the explorable interval (2^N), similar to evolutionary algorithms [33,35]; it is therefore challenging to discover the globally optimal solution, and the computing cost is significant.…”
Section: Introduction
Mentioning confidence: 99%
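
The 2^N blow-up mentioned in this excerpt is easy to make concrete: with one include/exclude decision per feature, the number of candidate subsets doubles with every added feature, so exhaustive evaluation (or naive exploration by a single agent) becomes infeasible almost immediately. The numbers below are plain arithmetic, not results from the cited works.

```python
# Size of the include/exclude search space for N features: 2^N subsets.
for n in (10, 30, 100):
    print(f"N={n:>3}: {2**n:.3e} candidate feature subsets")
# N= 10: 1.024e+03
# N= 30: 1.074e+09
# N=100: 1.268e+30  -> exhaustive search is hopeless beyond tiny N
```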