2008 IEEE Symposium on Computational Intelligence and Games
DOI: 10.1109/cig.2008.5035643
Transfer of evolved pattern-based heuristics in games

Abstract: Learning is key to achieving human-level intelligence. Transferring knowledge learned on one task to another speeds up learning in the target task by exploiting relevant prior knowledge. As a test case, this study introduces a method to transfer local pattern-based heuristics from a simple board game to a more complex one. The patterns are generated by compositional pattern producing networks (CPPNs), which are evolved with the NEAT neuroevolution method. Results show that transfer improves bot…
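The transfer mechanism rests on the fact that a CPPN is queried with normalized coordinates, so the same evolved network can define a local pattern heuristic on boards of different sizes. The sketch below illustrates that idea only, not the authors' implementation: the evolved CPPN is stood in for by a hypothetical fixed function cppn_weight, and pattern_heuristic, the k x k window size, and the board encoding are all assumptions.

```python
# Minimal sketch (not the paper's code): a CPPN-style pattern heuristic that can
# be queried on boards of different sizes. In the paper the pattern networks are
# evolved with NEAT; here a tiny fixed function stands in for an evolved CPPN.
import numpy as np

def cppn_weight(x, y, params):
    """Hypothetical CPPN: maps normalized pattern coordinates to a weight."""
    w1, w2, b = params
    return np.tanh(w1 * np.sin(np.pi * x) + w2 * np.cos(np.pi * y) + b)

def pattern_heuristic(board, params, k=3):
    """Score a board by sliding a k x k pattern whose weights come from the CPPN.
    Because the CPPN is queried with normalized coordinates in [-1, 1], the same
    genome works on a small source board and a larger target board."""
    n = board.shape[0]
    coords = np.linspace(-1.0, 1.0, k)
    pattern = np.array([[cppn_weight(x, y, params) for x in coords] for y in coords])
    score = 0.0
    for r in range(n - k + 1):
        for c in range(n - k + 1):
            score += np.sum(pattern * board[r:r + k, c:c + k])
    return score

# The same (evolved) parameters transfer directly to a bigger board.
params = (0.8, -0.5, 0.1)
small = np.random.choice([-1, 0, 1], size=(5, 5))   # simple source game board
large = np.random.choice([-1, 0, 1], size=(9, 9))   # more complex target game
print(pattern_heuristic(small, params), pattern_heuristic(large, params))
```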

Cited by 8 publications (7 citation statements) · References 20 publications
“…The efficacy of this policy transfer method was supported by improved task performance on target tasks given further behavior evolution. This method built on prior work (Bahceci and Miikkulainen, 2008) that evolved behaviors of computer board-game-playing agents, where indirectly encoded representations of evolved behaviors facilitated effective transfer of agent behavior between games of increasing complexity (board size).…”
Section: Evolutionary Policy Transfer
confidence: 95%
“…HyperNEAT was selected as this study's indirect encoding neuroevolution method since previous research indicated that transferring the connectivity patterns (Gauci and Stanley, 2008) of evolved behaviors is an effective way to facilitate transfer learning in multiagent tasks (Bahceci and Miikkulainen, 2008; Verbancsics and Stanley, 2010). That is, HyperNEAT-evolved multiagent policies can be effectively transferred to increasingly complex tasks (Stone et al., 2006a) without further adaptation (Verbancsics and Stanley, 2010), and transferred behaviors often yield task performance comparable to specially designed learning algorithms (Stone et al., 2006b).…”
Section: HyperNEAT: Hypercube-based NEAT
confidence: 99%
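As a rough illustration of why connectivity-pattern transfer can work without further adaptation, the sketch below outlines HyperNEAT-style substrate scaling: a single connectivity CPPN is queried with normalized neuron coordinates, so the same genome yields a weight matrix for a substrate of any resolution. The function connectivity_cppn and the grid substrate are hypothetical stand-ins, not code from the cited papers.

```python
# Minimal sketch (assumed, not from the cited papers): HyperNEAT-style substrate
# scaling. One connectivity CPPN is queried for every pair of neuron positions,
# so larger substrates simply mean more queries of the same genome.
import numpy as np

def connectivity_cppn(x1, y1, x2, y2):
    """Hypothetical connectivity CPPN: weight between neurons at (x1,y1), (x2,y2)."""
    return np.sin(3 * (x1 - x2)) * np.exp(-((y1 - y2) ** 2))

def build_substrate_weights(n):
    """Query the CPPN over an n x n grid of normalized neuron positions."""
    coords = [(x, y) for x in np.linspace(-1, 1, n) for y in np.linspace(-1, 1, n)]
    return np.array([[connectivity_cppn(x1, y1, x2, y2) for (x2, y2) in coords]
                     for (x1, y1) in coords])

# The same CPPN produces a 25x25 weight matrix for a 5x5 substrate and an
# 81x81 matrix for a 9x9 substrate: task size grows, the genome does not.
print(build_substrate_weights(5).shape, build_substrate_weights(9).shape)
```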
“…The transfer of neural networks has been studied in [15] and [1]. More specifically, Taylor et al. [15] proposed a method, named Transfer via inter-task mappings for Policy Search Reinforcement Learning (TVITM-PS), that initializes the weights of the target task with the learned weights of the source task by utilizing mapping functions.…”
Section: Related Work
confidence: 99%
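A minimal sketch of the weight-initialization idea behind that kind of transfer, under the assumption of simple dictionary-based inter-task mappings; transfer_weights, state_map, and action_map are illustrative names, not Taylor et al.'s implementation.

```python
# Minimal sketch (assumption, not TVITM-PS itself): target-task weights are
# initialized by copying source-task weights for every target state/action that
# the inter-task mappings relate back to a source state/action.
import numpy as np

def transfer_weights(w_source, state_map, action_map, n_target_states, n_target_actions):
    """w_source: (n_source_states, n_source_actions) policy weight matrix.
    state_map / action_map: target index -> source index (inter-task mappings)."""
    w_target = np.zeros((n_target_states, n_target_actions))
    for s_t, s_s in state_map.items():
        for a_t, a_s in action_map.items():
            w_target[s_t, a_t] = w_source[s_s, a_s]
    return w_target

w_src = np.random.randn(4, 2)                       # learned on the source task
state_map = {0: 0, 1: 1, 2: 1, 3: 2, 4: 3, 5: 3}    # hypothetical mappings
action_map = {0: 0, 1: 1, 2: 1}
w_tgt = transfer_weights(w_src, state_map, action_map, 6, 3)
print(w_tgt.shape)   # (6, 3) target network, seeded from the 4x2 source network
```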
“…Additionally, we have tested a method that tries to take advantage of the fully random connectivity of ESNs and does not require the use of mappings, with promising results. Finally, in [1] Bahceci and Miikkulainen introduce a method that transfers pattern-based heuristics in games: a population of evolved neural networks (which represent the patterns) is transferred to a target task as the starting population.…”
Section: Related Work
confidence: 99%
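A minimal sketch of that population-level transfer scheme, assuming a toy real-valued evolutionary loop rather than NEAT-evolved networks: the population evolved on the source task simply becomes the initial population on the target task, which further evolution then refines. The functions evolve, source_fitness, and target_fitness are hypothetical.

```python
# Minimal sketch (an assumption about the general scheme, not the authors' code):
# genomes evolved on the source game seed the initial population for the target game.
import random

def evolve(population, fitness_fn, generations=50, mutation_rate=0.1):
    """Toy (mu + mu) evolutionary loop over real-valued genomes; returns the
    final population so it can be reused as a starting population elsewhere."""
    for _ in range(generations):
        scored = sorted(population, key=fitness_fn, reverse=True)
        parents = scored[:len(population) // 2]
        children = [[g + random.gauss(0, mutation_rate) for g in p] for p in parents]
        population = parents + children
    return population

def source_fitness(genome):   # hypothetical simple-game objective
    return -sum((g - 1.0) ** 2 for g in genome)

def target_fitness(genome):   # hypothetical related, harder objective
    return -sum((g - 1.2) ** 2 for g in genome) - abs(genome[0] - genome[-1])

random.seed(0)
source_pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
evolved_on_source = evolve(source_pop, source_fitness)
# Transfer: the source-evolved population becomes the target task's initial population.
evolved_on_target = evolve(evolved_on_source, target_fitness)
print(max(evolved_on_target, key=target_fitness))
```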