Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019
DOI: 10.1145/3292500.3330868

Automating Feature Subspace Exploration via Multi-Agent Reinforcement Learning

Cited by 46 publications (49 citation statements) | References 23 publications
“…A reward is assigned to the agent based on the predictive performance of the current feature subset. Liu et al. [17] propose a method that reformulates feature engineering as a multi-agent reinforcement learning problem. The multi-agent RL formulation reduces the large action space of a single agent, since each agent now has a smaller action space covering the selection of a single feature.…”
Section: Feature Engineering (mentioning)
confidence: 99%
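The per-feature-agent formulation described in the statement above can be sketched with independent two-action agents (keep/drop one feature each) that share a reward from a downstream evaluation. This is a minimal illustrative sketch, not the paper's implementation: the `evaluate` function is a hypothetical stand-in for predictive accuracy, and the agents are simple epsilon-greedy bandits rather than full RL agents.

```python
import random

random.seed(0)

# Hypothetical stand-in for downstream predictive accuracy:
# subsets containing the "informative" features 0 and 2 score highest,
# with a small penalty for carrying uninformative features.
INFORMATIVE = {0, 2}

def evaluate(subset):
    if not subset:
        return 0.0
    hits = len(subset & INFORMATIVE)
    return hits / len(INFORMATIVE) - 0.1 * (len(subset) - hits)

N_FEATURES, EPISODES, EPS, LR = 4, 300, 0.2, 0.5

# One agent per feature; each keeps Q-values for its two actions: 0 = drop, 1 = keep.
q = [[0.0, 0.0] for _ in range(N_FEATURES)]

for _ in range(EPISODES):
    # Each agent acts independently over its small two-action space.
    actions = [
        random.randrange(2) if random.random() < EPS else int(qi[1] > qi[0])
        for qi in q
    ]
    subset = {i for i, a in enumerate(actions) if a == 1}
    reward = evaluate(subset)  # shared reward from the current feature subset
    for i, a in enumerate(actions):
        q[i][a] += LR * (reward - q[i][a])

best = {i for i, qi in enumerate(q) if qi[1] > qi[0]}
print(sorted(best))
```

Each agent's action space has size two regardless of the total number of features, which is the action-space reduction the citing papers highlight.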
“…However, this formulation also brings challenges: interactions between agents, representation of the environment, and selection of samples. Three technical methods are proposed in [17] to tackle these respectively: adding inter-feature information to the reward formulation; using meta-statistics and deep learning to learn a representation of the environment; and using a Gaussian mixture model to select samples independently. Although this formulation reduces the action space, the trade-off is the additional computing resources required to train more agents.…”
Section: Feature Engineering (mentioning)
confidence: 99%
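The first of the three methods mentioned above, adding inter-feature information to the reward, can be illustrated with a shaped reward that penalizes pairwise redundancy among the selected features. Using Pearson correlation as the redundancy measure, and the `alpha` weight, are assumptions for illustration only; the paper's exact inter-feature statistic may differ.

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def shaped_reward(accuracy, features, selected, alpha=0.1):
    """Downstream accuracy minus an average pairwise-redundancy penalty
    over the selected features (hypothetical shaping, for illustration)."""
    pairs = list(combinations(sorted(selected), 2))
    if not pairs:
        return accuracy
    redundancy = sum(
        abs(pearson(features[i], features[j])) for i, j in pairs
    ) / len(pairs)
    return accuracy - alpha * redundancy

# Toy feature columns: feature 1 duplicates feature 0 up to scale.
features = {0: [1, 2, 3, 4], 1: [2, 4, 6, 8], 2: [4, 1, 3, 2]}
print(shaped_reward(0.9, features, {0, 1}))  # penalized: 0 and 1 are perfectly correlated
print(shaped_reward(0.9, features, {0, 2}))  # smaller penalty: 0 and 2 are weakly correlated
```

The shaping steers agents away from subsets whose members carry duplicated information, even when raw accuracy is unchanged.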
“…A reward is assigned to the agent based on the predictive performance of the current feature subset. Liu et al. [52] propose a method that reformulates feature engineering as a multi-agent reinforcement learning problem. The multi-agent RL formulation reduces the large action space of a single agent, since each agent now has a smaller action space covering the selection of a single feature.…”
Section: Feature Engineering (mentioning)
confidence: 99%
“…[19] Liao et al. proposed Multi-objective Optimization by Reinforcement Learning (MORL) to solve the optimal power system dispatch and voltage stability problem; the search proceeds along individual dimensions of a high-dimensional space via a path selected by an estimated path value, which represents the potential of finding a better solution [9]. Liu et al. reformulated the feature selection problem as a multi-agent reinforcement learning framework in which the selection of each feature is controlled by its corresponding feature agent [5,11]. Yang et al. developed deep reinforcement learning algorithms that can handle large numbers of agents with an effective communication protocol [16,24].…”
Section: Reinforcement Learning (mentioning)
confidence: 99%