2020
DOI: 10.1016/j.knosys.2019.105032

Optimizing Hearthstone agents using an evolutionary algorithm

Cited by 25 publications (11 citation statements)
References 13 publications
“…Hearthstone [18] is a two-player, turn-taking, adversarial online collectible card game that is an increasingly popular domain for evaluating both classical AI techniques and modern deep reinforcement learning approaches, owing to the many unique challenges it poses (e.g., large branching factor, partial observability, stochastic actions, and difficulty with planning under uncertainty) [33]. Rather than manipulating the reward function of individual agents in a QD system [2,43] (like the QD approach in AlphaStar [53]), generating the best gameplay strategy or deck [5,25,50], or building decks as in the work of Fontaine et al. [22], the experiments in this paper search for a diversity of gameplay policies.…”
Section: Hearthstone (mentioning)
confidence: 99%
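For readers unfamiliar with quality-diversity (QD) search, the sketch below shows a minimal MAP-Elites-style loop of the general kind the quoted statement alludes to: instead of optimizing a single best policy, it fills an archive with policies that are both high-fitness and behaviorally diverse. It is an illustrative toy only; evaluate, mutate, and the 2-D behavior descriptor are hypothetical placeholders, not the cited paper's actual implementation.

```python
import random

# Minimal MAP-Elites-style quality-diversity loop (illustrative sketch only;
# evaluate(), mutate(), and the behavior descriptor are hypothetical
# placeholders, not the cited paper's actual implementation).

GRID = {}  # maps a discretized behavior descriptor -> (policy, fitness)

def random_policy():
    # A "policy" here is just a small parameter vector; real agents are richer.
    return [random.uniform(-1, 1) for _ in range(8)]

def mutate(policy):
    return [p + random.gauss(0, 0.1) for p in policy]

def evaluate(policy):
    # Stand-in for playing games with the policy: returns a scalar fitness
    # and a low-dimensional behavior descriptor.
    fitness = -sum(p * p for p in policy)                     # toy objective
    descriptor = (round(policy[0], 1), round(policy[1], 1))   # toy 2-D behavior
    return fitness, descriptor

for _ in range(10_000):
    # Pick a parent from the archive (or sample randomly while it is empty).
    parent = random.choice(list(GRID.values()))[0] if GRID else random_policy()
    child = mutate(parent)
    fit, desc = evaluate(child)
    # Keep the child only if its cell is empty or it beats the incumbent,
    # so the archive accumulates diverse, high-quality policies.
    if desc not in GRID or fit > GRID[desc][1]:
        GRID[desc] = (child, fit)

print(f"archive holds {len(GRID)} distinct behaviors")
```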
“…Owing to their strong robustness, wide applicability, and rapid search capability, EAs have shown tremendous potential for global multiagent optimization problems in practical applications [21]. For example, in [6], task deadlocks caused by agent decisions are avoided through dynamic and variable pheromone placement.…”
Section: Related Work (mentioning)
confidence: 99%
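As context for the quoted statement (and for the cited paper's title), below is a minimal (mu + lambda)-style generational loop illustrating the basic evolutionary-algorithm pattern such works build on. The genome and fitness function are toy placeholders, not any cited paper's actual objective.

```python
import random

# Toy (mu + lambda)-style evolutionary loop (illustrative only; the genome
# and fitness function are placeholders, not a cited paper's objective).

POP_SIZE, GENERATIONS, MUT_STD = 20, 100, 0.1

def fitness(genome):
    # Stand-in objective: maximize closeness to the all-ones vector.
    return -sum((g - 1.0) ** 2 for g in genome)

population = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Mutate every parent, then keep the best POP_SIZE of parents + offspring.
    offspring = [[g + random.gauss(0, MUT_STD) for g in p] for p in population]
    population = sorted(population + offspring, key=fitness, reverse=True)[:POP_SIZE]

best = population[0]
print(f"best fitness after {GENERATIONS} generations: {fitness(best):.4f}")
```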
“…In addition to deckbuilding, Hearthstone presents a unique agent-based challenge due to the stochasticity of its initial state and actions, the large branching factor, the partial observability of the game state, and the large variety of possible opponents. As a result, many works train AI agents to play Hearthstone [16][17][18][19][20][21][22][23][24][25]. Other works predict the result of a game from a partial game log [26,27] or predict the archetype of a deck from the first round of the game [28].…”
Section: Background 2.1 Hearthstone and Automated Deckbuilding (mentioning)
confidence: 99%