2021
DOI: 10.1109/tg.2021.3095264
Which Heroes to Pick? Learning to Draft in MOBA Games With Neural Networks and Tree Search

Cited by 12 publications (3 citation statements)
References 18 publications
“…XLand [20] also focuses on the generalization capability of agents and supports multi-agent scenarios, but it is not open-source. Existing Interest: this environment has been used as a testbed for RL in research competitions, and many researchers have conducted experiments in the Honor of Kings environment [3,4,11,24,25,26,28,29,27]. Though some of them verified the feasibility of reinforcement learning in tackling the game [11,26,28,29], they focus more on methodological novelty in planning, tree search, etc. Unlike those papers, this paper focuses on making the environment openly accessible and providing benchmarking results, which could serve as a reference and foundation for future research.…”
Section: Motivations and Related Work
confidence: 99%
“…So, in this paper, those works are summarized into four categories: item recommendation, draft recommendation, outcome prediction, and strategy making, covered in Sections 2.1 through 2.4 respectively; Section 6 is the conclusion. In the research of Sheng Chen et al. [2], the method leverages neural networks and Monte-Carlo tree search (henceforth MCTS): MCTS is combined with a value network and a policy network, where the value network estimates the value of the current game state while the policy network samples actions for the next draft. They tested their model on two datasets, one based on matches played by AI and one on matches played by humans.…”
Section: Introduction
confidence: 99%
“…Draft recommendation is, literally, helping players select the most suitable champions in the current game. As shown in Figure 1, the champions in the upper-left and upper-right corners are those banned by the allied team and the enemy team, while the left and right sides show the champions picked. In the research of Sheng Chen et al. [2], the method leverages neural networks and Monte-Carlo tree search (henceforth MCTS): MCTS is combined with a value network and a policy network, where the value network estimates the value of the current game state while the policy network samples actions for the next draft. They tested their model on two datasets, one based on matches played by AI and one on matches played by humans.…”
confidence: 99%
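The drafting approach the excerpts describe — MCTS guided by a policy network (prior over the next pick) and a value network (evaluation of the current draft state) — can be sketched roughly as below. This is an illustrative toy, not the authors' implementation: the hero pool, the uniform prior standing in for the policy network, the heuristic standing in for the value network, and the PUCT constant are all assumptions made for the sake of a runnable example.

```python
import math

HERO_POOL = list(range(20))   # hypothetical hero IDs, not the real roster
PICKS_PER_TEAM = 5

def policy_prior(state, legal):
    # Stand-in for the policy network: a uniform prior over legal heroes.
    p = 1.0 / len(legal)
    return {h: p for h in legal}

def value_estimate(state):
    # Stand-in for the value network: a toy heuristic scoring team 0's
    # drafted hero IDs against team 1's (value from team 0's perspective).
    t0, t1 = state
    return math.tanh((sum(t0) - sum(t1)) / 10.0)

class Node:
    def __init__(self, state, to_move, prior=1.0):
        self.state = state        # (team-0 picks, team-1 picks) as tuples
        self.to_move = to_move    # which team picks next (0 or 1)
        self.prior = prior        # policy-network prior for reaching this node
        self.children = {}        # hero ID -> Node
        self.N = 0                # visit count
        self.W = 0.0              # accumulated value (team 0's view)

def legal_heroes(state):
    taken = set(state[0]) | set(state[1])
    return [h for h in HERO_POOL if h not in taken]

def is_terminal(state):
    return len(state[0]) == PICKS_PER_TEAM and len(state[1]) == PICKS_PER_TEAM

def expand(node):
    for h, p in policy_prior(node.state, legal_heroes(node.state)).items():
        t0, t1 = node.state
        child_state = (t0 + (h,), t1) if node.to_move == 0 else (t0, t1 + (h,))
        node.children[h] = Node(child_state, 1 - node.to_move, prior=p)

def select_child(node, c_puct=1.5):
    # PUCT rule: trade off the prior (exploration) against observed value.
    best, best_score = None, -float("inf")
    for child in node.children.values():
        q = child.W / child.N if child.N else 0.0
        if node.to_move == 1:     # team 1 minimises team 0's value
            q = -q
        u = c_puct * child.prior * math.sqrt(node.N + 1) / (1 + child.N)
        if q + u > best_score:
            best, best_score = child, q + u
    return best

def mcts_pick(state, to_move, simulations=200):
    root = Node(state, to_move)
    for _ in range(simulations):
        node, path = root, [root]
        while node.children:
            node = select_child(node)
            path.append(node)
        if not is_terminal(node.state):
            expand(node)
        v = value_estimate(node.state)   # value network replaces rollouts
        for n in path:
            n.N += 1
            n.W += v
    # Recommend the most-visited hero, as in AlphaZero-style MCTS.
    return max(root.children.items(), key=lambda kv: kv[1].N)[0]
```

Replacing `policy_prior` and `value_estimate` with trained networks, and alternating `to_move` through a real ban/pick order, would recover the general shape of the cited method.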