Proceedings of the 7th International Conference on Pattern Recognition Applications and Methods 2018
DOI: 10.5220/0006535502760283
Learning to Evaluate Chess Positions with Deep Neural Networks and Limited Lookahead

Cited by 7 publications (6 citation statements) · References 8 publications
“…The optimal combination of hidden layer sizes was found using the validation subset, by performing an exhaustive trial-and-error procedure over the ranges [5, 20] and [5, 10] for the first and the second layer, respectively. The other rival method concerns a CNN model of two 2D convolutional layers with 20 5 × 5 and 50 3 × 3 filters in the first and the second layer, respectively, trained with the stochastic gradient descent (SGD) algorithm, as proposed in [4]. Due to their stochastic nature, the MLP and CNN models produce a different result for each run, and thus 30 different runs were performed in order to test the consistency of the results.…”
Section: Case Study
confidence: 99%
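The statement above describes the two baselines compared against: an MLP whose two hidden-layer sizes are selected by exhaustive search over [5, 20] × [5, 10], and a CNN with 20 5 × 5 filters followed by 50 3 × 3 filters trained with SGD. The sketch below illustrates that setup; the 12-plane board encoding, the placeholder validation data, and the untrained candidate models in the search loop are assumptions made for illustration and are not taken from the quoted text.

```python
# Sketch of the two rival models described in the quoted statement: an MLP whose
# two hidden-layer sizes are chosen by exhaustive search over [5, 20] x [5, 10],
# and a CNN with 20 5x5 filters followed by 50 3x3 filters trained with SGD.
# The 8x8x12 piece-plane input shape and the validation data are assumptions.
import itertools
import torch
import torch.nn as nn

N_PLANES, BOARD = 12, 8                      # assumed board encoding
N_FEATURES = N_PLANES * BOARD * BOARD


def make_mlp(h1: int, h2: int) -> nn.Module:
    """Two-hidden-layer MLP position evaluator with h1 and h2 units."""
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(N_FEATURES, h1), nn.ReLU(),
        nn.Linear(h1, h2), nn.ReLU(),
        nn.Linear(h2, 1),                    # scalar evaluation of the position
    )


class SmallCNN(nn.Module):
    """CNN with 20 5x5 and 50 3x3 filters, as in the quoted description."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(N_PLANES, 20, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(20, 50, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(50 * BOARD * BOARD, 1))

    def forward(self, x):
        return self.head(self.features(x))


def validation_loss(model: nn.Module, x_val, y_val) -> float:
    """MSE of the model on a held-out validation set."""
    with torch.no_grad():
        return nn.functional.mse_loss(model(x_val), y_val).item()


# Exhaustive trial-and-error over the hidden-layer ranges from the quote.
# x_val / y_val are placeholders; real code would train each candidate first.
x_val = torch.randn(64, N_PLANES, BOARD, BOARD)
y_val = torch.randn(64, 1)
best = min(
    itertools.product(range(5, 21), range(5, 11)),
    key=lambda hs: validation_loss(make_mlp(*hs), x_val, y_val),
)
print("best hidden sizes:", best)

cnn = SmallCNN()
optimizer = torch.optim.SGD(cnn.parameters(), lr=0.01)   # plain SGD, as stated
```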
“…Nowadays, even top-level grandmasters use chess engines for their tournament preparation and overall training. Advancements in the fields of artificial intelligence and machine learning have led to new ideas about engines, such as chess-playing agents with no chess rules coded into them whatsoever, and enhancements to position evaluation algorithms with the aid of neural networks, replacing, for example, the heuristic function used in a tree search algorithm [2][3][4].…”
Section: Introduction
confidence: 99%
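The statement above refers to replacing the handcrafted heuristic in a tree-search algorithm with a neural-network evaluator. A minimal sketch of that idea, using a depth-limited negamax search over python-chess boards, is given below; the `evaluate_nn` placeholder and the chosen search depth are hypothetical and not prescribed by the cited works.

```python
# Fixed-depth (limited lookahead) negamax search in which the leaf evaluation
# comes from a neural network instead of a handcrafted heuristic.
import chess  # python-chess


def evaluate_nn(board: chess.Board) -> float:
    """Placeholder for a learned evaluator returning a score from White's view."""
    # A real implementation would encode `board` and run a trained network here.
    return 0.0


def negamax(board: chess.Board, depth: int) -> float:
    """Depth-limited negamax with the neural network as leaf evaluator."""
    if depth == 0 or board.is_game_over():
        score = evaluate_nn(board)
        # negamax convention: return the score from the side to move
        return score if board.turn == chess.WHITE else -score
    best = float("-inf")
    for move in board.legal_moves:
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best


def best_move(board: chess.Board, depth: int = 2) -> chess.Move:
    """Pick the move whose subtree receives the highest negamax value."""
    best_val, best = float("-inf"), None
    for move in board.legal_moves:
        board.push(move)
        val = -negamax(board, depth - 1)
        board.pop()
        if val > best_val:
            best_val, best = val, move
    return best


print(best_move(chess.Board(), depth=2))
```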
“…While computer chess has been explored extensively, there are very few papers that specifically focus on playing the game with limited lookahead. The most notable work that addresses this is by Sabatelli et al. [2018], which is the primary influence of this research. In this work, a supervised learning approach was used to train an artificial neural network that served as a chess board evaluator.…”
Section: Prior Work
confidence: 99%
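The quoted description of Sabatelli et al. [2018] is a supervised-learning setup: positions are encoded as fixed-size inputs and a neural network is regressed onto evaluation scores. The sketch below is only a generic illustration of such a pipeline, assuming a 12-plane piece encoding and a small fully connected network; it is not a reproduction of the original architecture or data.

```python
# Generic supervised setup: encode a chess position as a fixed-size tensor and
# regress a learned evaluator onto engine scores. The 12-plane encoding and the
# tiny training step are illustrative assumptions.
import chess
import numpy as np
import torch
import torch.nn as nn

PIECES = [chess.PAWN, chess.KNIGHT, chess.BISHOP, chess.ROOK, chess.QUEEN, chess.KING]


def encode(board: chess.Board) -> np.ndarray:
    """One binary 8x8 plane per (colour, piece type): 12 planes in total."""
    planes = np.zeros((12, 8, 8), dtype=np.float32)
    for square, piece in board.piece_map().items():
        plane = PIECES.index(piece.piece_type) + (0 if piece.color == chess.WHITE else 6)
        planes[plane, square // 8, square % 8] = 1.0
    return planes


def train_step(model: nn.Module, opt, boards, scores) -> float:
    """One supervised regression step on (position, score) pairs."""
    x = torch.from_numpy(np.stack([encode(b) for b in boards]))
    y = torch.tensor(scores, dtype=torch.float32).unsqueeze(1)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


model = nn.Sequential(nn.Flatten(), nn.Linear(12 * 8 * 8, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
print(train_step(model, opt, [chess.Board()], [0.2]))   # dummy labelled example
```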
“…Similar to the dataset proposed by Sabatelli et al. [2018], we extract unique board configurations from public chess databases. The board configurations are labelled using Stockfish 11, where each label represents the centipawn advantage relative to White.…”
Section: The Dataset
confidence: 99%
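A minimal sketch of the labelling step described above, using python-chess's UCI engine interface to obtain a centipawn score relative to White for each position. The engine path, search depth, and mate-score mapping are assumptions; the quoted text only names Stockfish 11 as the labelling engine.

```python
# Label board configurations with an engine score, expressed as the centipawn
# advantage relative to White, as described in the quoted statement.
import chess
import chess.engine


def label_positions(fens, engine_path="stockfish", depth=15):
    """Return (fen, centipawn score from White's point of view) pairs."""
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    labelled = []
    for fen in fens:
        board = chess.Board(fen)
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        # PovScore -> centipawns relative to White; mates mapped to a large value
        cp = info["score"].white().score(mate_score=100_000)
        labelled.append((fen, cp))
    engine.quit()
    return labelled


print(label_positions([chess.Board().fen()]))
```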