Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation
DOI: 10.1145/2739480.2754730
Genetically-regulated Neuromodulation Facilitates Multi-Task Reinforcement Learning

Abstract: In this paper, we use a gene regulatory network (GRN) to regulate a reinforcement learning controller, the State-Action-Reward-State-Action (SARSA) algorithm. The GRN serves as a neuromodulator of SARSA's learning parameters: learning rate, discount factor, and memory depth. We have optimized GRNs with an evolutionary algorithm to regulate these parameters on specific problems but with no knowledge of problem structure. We show that genetically-regulated neuromodulation (GRNM) performs comparably or better than …
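The core idea of the abstract — an external regulator feeding SARSA its learning parameters at each step — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy chain MDP, the `grn_modulate` schedule (a hand-written decay standing in for the evolved GRN's outputs), and all function names are assumptions; the memory-depth parameter is omitted for brevity.

```python
import random

random.seed(0)

# Toy chain MDP: states 0..4; reaching state 4 yields reward 1 and ends the episode.
N_STATES = 5
ACTIONS = (1, -1)  # move right / move left

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def grn_modulate(t):
    # Stand-in for the evolved GRN: emits SARSA's learning parameters each step.
    # Here a fixed schedule (decaying learning rate, constant discount factor);
    # in the paper these values come from an evolved gene regulatory network.
    alpha = 0.5 / (1.0 + 0.01 * t)
    gamma = 0.9
    return alpha, gamma

def epsilon_greedy(Q, s, epsilon):
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

def sarsa(episodes=200, epsilon=0.1):
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    t = 0
    for _ in range(episodes):
        s = 0
        a = epsilon_greedy(Q, s, epsilon)
        done, steps = False, 0
        while not done and steps < 100:
            s2, r, done = step(s, a)
            a2 = epsilon_greedy(Q, s2, epsilon)
            alpha, gamma = grn_modulate(t)  # parameters supplied by the regulator
            target = r + (0.0 if done else gamma * Q[(s2, a2)])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, a = s2, a2
            t += 1
            steps += 1
    return Q
```

The design point is the separation of concerns: SARSA itself is unchanged, and the regulator only decides *how* it learns (alpha, gamma) at each timestep, which is what lets the same evolved network be reused across problems with no knowledge of their structure.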

Cited by 7 publications (5 citation statements)
References 30 publications
“…This mechanism is not yet sufficiently employed in current mutation operators of genetic algorithms. While crossover operators have recently been improved [30], mutation remains crucial in artificial gene regulatory network optimization, since most approaches use very high mutation rates (75%), if not mutation exclusively. Just as the NEAT algorithm has strongly shaped the evolution of neural networks [132], improving the evolutionary algorithm is a central question for finding the best possible network for a given problem.…”
Section: Results
confidence: 99%
“…ray of different problems with both discrete and continuous state spaces, as well as one-shot and continuous rewards [30]. Agents were required to learn to solve a series of problems, while the same AGRN was used to regulate the learning parameters for each problem.…”
Section: Neuromodulation
confidence: 99%
“…The concept of 'neuromodulation' captures the idea that the parameters of adaptation or learning may themselves adapt on a slower timescale. This has been used to learn control schemes for the learning rate on reinforcement learning tasks (Cussat-Blanc and Harrington, 2015). Beyond just tuning the parameters of a fixed learning algorithm, the entirety of the learning algorithm may be made subject to adaptation.…”
Section: Shifts In Individuality
confidence: 99%
“…It is capable of developing modular robot morphologies [8], controlling cells designed to optimize a wind farm layout [25], and regulating reinforcement learning parameters [7]. This model was designed for computational purposes only, not to simulate a biological network.…”
Section: Our Model
confidence: 99%