2017
DOI: 10.14569/ijacsa.2017.080364
Generation of Sokoban Stages using Recurrent Neural Networks

Abstract: Puzzles and board games represent several important classes of AI problems, but also belong to difficult complexity classes. In this paper, we propose a deep-learning-based alternative: we train a neural network model to find solution states of the popular puzzle game Sokoban. The network trains against a classical solver that uses theorem proving as the oracle of valid and invalid game states, in a setup similar to the popular adversarial training framework. Using our approach, we have been a…
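The abstract describes a classical solver acting as an oracle that labels generated game states as valid or invalid. As an illustrative stand-in for such an oracle (the paper's solver uses theorem proving; the `solvable` function and the breadth-first search below are our own simplification), a minimal solvability check over the standard Sokoban text encoding might look like:

```python
from collections import deque

# Standard Sokoban text encoding: '#' wall, '@' player, '$' box,
# '.' goal, '*' box on goal, '+' player on goal, ' ' floor.
def parse(level):
    walls, boxes, goals, player = set(), set(), set(), None
    for r, row in enumerate(level):
        for c, ch in enumerate(row):
            if ch == '#':
                walls.add((r, c))
            if ch in '$*':
                boxes.add((r, c))
            if ch in '.*+':
                goals.add((r, c))
            if ch in '@+':
                player = (r, c)
    return walls, frozenset(boxes), goals, player

def solvable(level, max_states=100_000):
    """Breadth-first search over (player, boxes) states; returns True
    if some push sequence moves every box onto a goal square."""
    walls, boxes, goals, player = parse(level)
    start = (player, boxes)
    seen = {start}
    queue = deque([start])
    while queue and len(seen) < max_states:
        player, boxes = queue.popleft()
        if boxes <= goals:          # all boxes on goals: solved
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (player[0] + dr, player[1] + dc)
            if nxt in walls:
                continue
            nb = boxes
            if nxt in boxes:        # pushing a box
                push = (nxt[0] + dr, nxt[1] + dc)
                if push in walls or push in boxes:
                    continue        # blocked push
                nb = (boxes - {nxt}) | {push}
            state = (nxt, frozenset(nb))
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return False

# A one-push level: the player pushes the box right, onto the goal.
print(solvable(["#####",
                "#@$.#",
                "#####"]))
```

In the adversarial-style loop the abstract sketches, such an oracle would score each level the generator emits, rewarding solvable outputs; the `max_states` cap is a practical guard, since Sokoban solvability is PSPACE-complete in general.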

Cited by 4 publications (4 citation statements). References 1 publication (1 reference statement).
“…There is a variety of generative models that rely on deep learning, such as Variational Autoencoders (VAEs) [4], Generative Adversarial Networks (GANs) [5], Auto-Regressive Models [14], and more. Many of these models have already been applied to PCG, such as using GANs to generate Zelda levels [9], using VAEs to generate and blend levels from multiple games [15], and using LSTMs to generate Sokoban levels [16]. All of the aforementioned methods rely on training the model to capture the distribution of the training data.…”
Section: Related Work
Confidence: 99%
“…In this paper, we created an experimental setup to test some of the most recent deep-learning level-generation methods, but there are some techniques that we did not include, such as LSTMs [16] and Adversarial Reinforcement Learning [31]. We are interested in adapting these methods to our experimental setting.…”
Section: Future Work
Confidence: 99%