2019 6th Swiss Conference on Data Science (SDS)
DOI: 10.1109/sds.2019.00-12

Survey of Artificial Intelligence for Card Games and Its Application to the Swiss Game Jass

Abstract: In the last decades we have witnessed the success of applications of Artificial Intelligence to playing games. In this work we address the challenging field of games with hidden information and card games in particular. Jass is a very popular card game in Switzerland and is closely connected with Swiss culture. To the best of our knowledge, performances of Artificial Intelligence agents in the game of Jass do not outperform top players yet. Our contribution to the community is two-fold. First, we provide an ov…

Cited by 9 publications (3 citation statements) · References 33 publications (47 reference statements)
“…Generally speaking, a reinforcement learning problem can be cast as a Markov Decision Process (MDP) and, in that sense, solved with standard algorithms. Card games, however, are characterized by imperfect information [14], which demands more complex reasoning than similarly sized perfect-information games. Consequently, applying common algorithms such as Temporal Difference Learning (TDL) [4,5], Policy Gradient (PG) [6,7], Full-Width Extensive-Form Fictitious Play (XFP) [19], and Counterfactual Regret Minimization (CFR) [20] directly to card games often yields unsatisfactory results.…”
Section: Related Work (mentioning)
confidence: 99%
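As a point of reference for the algorithms the excerpt contrasts with the imperfect-information setting, the following is a minimal sketch of tabular Q-learning (a temporal-difference method) on a toy MDP. The states, transitions, and rewards are illustrative assumptions only; they are not taken from the surveyed Jass paper or the citing works.

```python
import random
from collections import defaultdict

# Hypothetical 3-state MDP: (state, action) -> (next_state, reward).
# Reaching the terminal state s2 yields the only reward.
TRANSITIONS = {
    ("s0", "a"): ("s1", 0.0),
    ("s0", "b"): ("s0", 0.0),
    ("s1", "a"): ("s2", 1.0),
    ("s1", "b"): ("s0", 0.0),
}
TERMINAL = "s2"
ACTIONS = ["a", "b"]

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = defaultdict(float)  # Q[(state, action)] -> estimated return
    for _ in range(episodes):
        state = "s0"
        while state != TERMINAL:
            # epsilon-greedy action selection
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward = TRANSITIONS[(state, action)]
            # TD(0) update: immediate reward plus discounted best next value
            best_next = 0.0 if next_state == TERMINAL else max(
                q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return dict(q)

if __name__ == "__main__":
    print(q_learning())
```

In a card game the acting player does not observe the full state (the other hands are hidden), which is why such straightforward TD updates are reported to work poorly without additional machinery such as CFR or belief sampling.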
“…Mixed learners refer to centralized training with decentralized execution [12]. Although the control methods and algorithms may differ, they can all be solved with neural networks, and so various approximation algorithms have appeared [13][14][15][16].…”
Section: Introduction (mentioning)
confidence: 99%
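The statement above notes that, whatever the control scheme, the policy or value function is typically approximated with a neural network. Below is a minimal sketch of such an approximator: a small PyTorch MLP acting as a per-player Q-network over a hand-encoded card-game observation. The encoding and size (36 cards, loosely modeled on Jass) are assumptions for illustration, not details from the cited works.

```python
import torch
import torch.nn as nn

class CardQNetwork(nn.Module):
    """Sketch of a per-player Q-value approximator for a 36-card game."""

    def __init__(self, num_cards: int = 36, hidden: int = 128):
        super().__init__()
        # Input: binary vector marking which cards the player holds.
        # Output: one Q-value per card (illegal moves masked elsewhere).
        self.net = nn.Sequential(
            nn.Linear(num_cards, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_cards),
        )

    def forward(self, hand: torch.Tensor) -> torch.Tensor:
        return self.net(hand)

if __name__ == "__main__":
    net = CardQNetwork()
    hand = torch.zeros(1, 36)
    hand[0, :9] = 1.0           # hypothetical 9-card hand
    print(net(hand).shape)      # torch.Size([1, 36])
```

In a centralized-training, decentralized-execution setup, networks like this would be trained jointly (e.g., against a centralized critic) while each player queries only its own copy with its local observation at play time.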
“…According to [Niklaus et al. 2019], hidden information, combined with stochastic elements, allows card games to simulate the challenges of decision-making with only a partial view of the situation. According to [Rubin and Watson 2012], the random distribution of cards and the partial view of the game state make it difficult to construct game trees, e.g., for Minimax, an AI approach well known for solving many games such as Chess.…”
Section: Introduction (mentioning)
confidence: 99%
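For contrast with the imperfect-information setting described above, here is a minimal sketch of plain Minimax over a small, fully visible game tree. The tree and its payoffs are an illustrative assumption; in a card game like Jass the hidden hands mean such a tree cannot be constructed directly.

```python
# Internal nodes map to child names; leaves map to payoffs for the maximizer.
GAME_TREE = {
    "root": ["L", "R"],
    "L": ["LL", "LR"],
    "R": ["RL", "RR"],
    "LL": 3, "LR": -2,
    "RL": 5, "RR": 0,
}

def minimax(node: str, maximizing: bool) -> int:
    value = GAME_TREE[node]
    if isinstance(value, int):   # leaf: return its payoff directly
        return value
    child_values = [minimax(child, not maximizing) for child in value]
    return max(child_values) if maximizing else min(child_values)

if __name__ == "__main__":
    # Maximizer picks R (worth 0 under best minimizer play) over L (worth -2).
    print(minimax("root", True))  # 0
```

With hidden cards, the "true" successor states are unknown to the acting player, which is why card-game agents resort to determinization, belief sampling, or regret-minimization methods instead of expanding a single tree like this.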