1996
DOI: 10.1007/3-540-61723-x_1019
Solving MasterMind using GAs and simulated annealing: A case of dynamic constraint optimization

Cited by 21 publications (12 citation statements)
References 7 publications
“…The variation operators were also adapted to these objects: a permutation and a creep operator, which substituted a number (color) by the next, and the last by the first. A huge improvement was obtained; the algorithm explored only 25% of the space that was explored before [5], that is, around 2% of the total search space, and thus obtained solutions much faster. The game can be played online at http://geneura.ugr.es/~jmerelo/GenMM; the code can be downloaded from the same site.…”
Section: Applications
confidence: 95%
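The creep operator quoted above substitutes a color by the next one, with the last color wrapping around to the first. A minimal sketch of that idea, representing a code as a list of integer colors 0..N-1 (the function name and representation are illustrative assumptions, not the cited implementation):

```python
import random

def creep(code, n_colors=6):
    """Creep mutation sketch: replace the color at one random position
    with the next color; the last color (n_colors - 1) wraps to the first (0)."""
    code = list(code)                    # copy so the parent is untouched
    i = random.randrange(len(code))      # pick a random position to mutate
    code[i] = (code[i] + 1) % n_colors   # next color, with wrap-around
    return code
```

With `n_colors=6`, for example, color 5 creeps to color 0, so the operator never leaves the allowed alphabet.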
“…When played with P = 4 and N = 6, the algorithm needs an average of 4.64 trials and evaluates only 41.2 combinations on average, which is not even four percent of all 1296 possible combinations. Bernier et al (1996), Bento et al (1999) and Kalisker & Camens (2003) all propose a GA that uses a fitness value that reflects the eligibility. Bernier et al (1996) need an average of 5.62 guesses over 693 games when played with N = P = 6.…”
Section: Rules Of Thumb
confidence: 99%
“…Bento et al (1999) reach an average of 6.866 guesses when played with P = 5 and N = 8.…”
Section: Rules Of Thumb
confidence: 99%
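The "fitness value that reflects the eligibility" mentioned above rests on a simple notion: a candidate code is eligible if it would have produced the same black/white-peg feedback as the secret for every guess made so far. A minimal sketch of feedback scoring and the eligibility check (function names are assumptions, not the cited papers' code; note that P = 4 positions and N = 6 colors give 6^4 = 1296 possible codes):

```python
from collections import Counter

def feedback(secret, guess):
    """Mastermind feedback: (black, white) pegs.
    Black = right color in the right position; white = right color, wrong position."""
    black = sum(s == g for s, g in zip(secret, guess))
    # Total color matches regardless of position, via multiset intersection.
    common = sum((Counter(secret) & Counter(guess)).values())
    return black, common - black

def is_eligible(candidate, history):
    """A candidate is eligible (consistent) if, were it the secret,
    it would reproduce the feedback observed for every past guess."""
    return all(feedback(candidate, guess) == fb for guess, fb in history)
```

A GA fitness can then reward candidates by how closely they reproduce the recorded feedback, so fully eligible candidates score best.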
“…the score, or length of time survived). As with all reinforcement learning problems, different methods can be used to solve the problem (find a good policy) [15], including TD-learning [16], evolutionary computation [11], competitive coevolution [17], [18], [19], [20], simulated annealing [21], other optimisation algorithms, and a large number of combinations between such algorithms [22]. In recent years a large number of papers that describe the application of various learning methods to different types of video games have appeared in the literature (including several overviews [23], [11], [24], [25]).…”
Section: NPC Behavior Learning
confidence: 99%