2013 IEEE Conference on Computational Intelligence in Games (CIG)
DOI: 10.1109/cig.2013.6633630

Monte-Carlo Tree Search and minimax hybrids

Abstract: Monte-Carlo Tree Search (MCTS) is a sampling-based search algorithm that has been successfully applied to a variety of games. Monte-Carlo rollouts allow it to take distant consequences of moves into account, giving it a strategic advantage in many domains over traditional depth-limited minimax search with alpha-beta pruning. However, MCTS builds a highly selective tree and can therefore miss crucial moves and fall into traps in tactical situations. Full-width minimax search does not suffer from this weakness…
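
The "traditional depth-limited minimax search with alpha-beta pruning" that the abstract contrasts with MCTS can be sketched as a standard negamax routine. The game interface used below (legal_moves, apply, is_terminal, evaluate) is a hypothetical stand-in for illustration, not taken from the paper.

def alpha_beta(state, depth, alpha=float("-inf"), beta=float("inf")):
    """Depth-limited negamax with alpha-beta pruning.

    Returns a value from the perspective of the player to move.
    Unlike MCTS, every legal move at every visited node is examined
    (full width), which is why shallow traps within the search
    horizon cannot be overlooked.
    """
    if depth == 0 or state.is_terminal():
        return state.evaluate()          # heuristic or exact terminal value
    value = float("-inf")
    for move in state.legal_moves():     # full-width expansion
        value = max(value,
                    -alpha_beta(state.apply(move), depth - 1, -beta, -alpha))
        alpha = max(alpha, value)
        if alpha >= beta:                # beta cutoff: prune remaining moves
            break
    return value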

Cited by 28 publications (17 citation statements). References 22 publications.
“…Baier and Winands [11] proposed different modifications of MCTS with minimax-like enhancements during the selection, playout and backpropagation phases to combine minimax short-term evaluation with MCTS long-term evaluation. Experiments on Connect-4 and Breakthrough show improvements, without adding game-specific expert knowledge, depending on the search depth and the chosen enhanced phase.…”
Section: Related Work (mentioning)
confidence: 99%
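
As a rough illustration of the playout-phase variant mentioned in the excerpt above, the sketch below replaces purely random rollout moves with moves chosen by a shallow alpha-beta search, reusing the hypothetical alpha_beta routine and game interface sketched after the abstract; the fixed search depth is an assumption for illustration, not the paper's implementation.

import random

def minimax_informed_rollout(state, minimax_depth=2):
    """Playout policy that asks a shallow alpha-beta search to rank each
    move instead of sampling uniformly at random; ties break randomly."""
    while not state.is_terminal():
        # Score each candidate move from the current player's perspective.
        scored = [(-alpha_beta(state.apply(m), minimax_depth - 1), m)
                  for m in state.legal_moves()]
        best = max(score for score, _ in scored)
        candidates = [m for score, m in scored if score == best]
        state = state.apply(random.choice(candidates))
    return state.evaluate()   # terminal result handed to backpropagation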
“…in Algorithm 2): The total reward found is then used to update the reward value stored at each of the predecessor nodes. Given a sufficient number of iterations, MCTS with UCB is guaranteed to converge to the optimal policy [22]. However, this may still require building an exponentially sized tree.…”
Section: B. Monte-Carlo Tree Search (mentioning)
confidence: 99%
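
The excerpt describes the UCB selection rule and the reward update along the path of predecessor nodes; a minimal sketch of both follows. The node layout and the exploration constant c = sqrt(2) are generic textbook choices rather than values taken from the cited works, and the sign flip in backpropagation assumes a two-player zero-sum setting.

import math
from dataclasses import dataclass, field

@dataclass
class Node:                      # assumed node layout, for illustration only
    parent: "Node | None" = None
    children: list = field(default_factory=list)
    visits: int = 0
    total_reward: float = 0.0

def ucb1_select(node, c=math.sqrt(2)):
    """Pick the child maximising mean reward plus the UCB exploration bonus.
    Assumes every child has at least one visit (unvisited children are
    expanded first in standard MCTS)."""
    return max(node.children,
               key=lambda ch: ch.total_reward / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def backpropagate(node, reward):
    """Add the rollout reward to the node and each of its predecessors, as
    described in the excerpt; the sign alternates between levels."""
    while node is not None:
        node.visits += 1
        node.total_reward += reward
        reward = -reward
        node = node.parent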
“…Score-bounded MCTS extends this idea to games with multiple outcomes, leading to αβ-style pruning in the tree [5]. One can use shallow-depth minimax searches in the tree to initialize nodes during expansion, to enhance the playout, or to help MCTS-Solver in backpropagation [2].…”
Section: Related Work (mentioning)
confidence: 99%
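
One of the three uses listed in the excerpt, initializing nodes during expansion, can be sketched as seeding a new node with virtual visits whose mean reflects a shallow alpha-beta evaluation. The prior visit count and the logistic squashing of the score into [0, 1] are illustrative assumptions, and the alpha_beta and Node sketches from the earlier blocks are reused.

import math

def squash(score):
    # Map a heuristic score to [0, 1] via a logistic curve; this mapping is
    # an arbitrary illustrative choice, not taken from the paper.
    score = max(-50.0, min(50.0, score))   # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-score))

def expand_with_minimax_prior(parent, parent_state, move,
                              minimax_depth=2, prior_visits=10):
    """Create a child node and seed its statistics so that positions rated
    well by a shallow alpha-beta search start with a higher empirical mean."""
    child_state = parent_state.apply(move)
    child = Node(parent=parent)
    child.visits = prior_visits
    child.total_reward = prior_visits * squash(-alpha_beta(child_state,
                                                           minimax_depth))
    parent.children.append(child)
    return child, child_state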
“…Finally, recent work has attempted to explain and identify some of the shortcomings that arise from estimates in MCTS, specifically compared to situations where classic minimax search has historically performed well [25,24]. Attempts have been made to overcome the problem of traps or optimistic moves (moves that initially seem promising but later prove to be bad), for example with sufficiency thresholds [14] and shallow minimax searches [2].…”
Section: Related Work (mentioning)
confidence: 99%