2019 18th Brazilian Symposium on Computer Games and Digital Entertainment (SBGames)
DOI: 10.1109/sbgames.2019.00014
A Minimal Training Strategy to Play Flappy Bird Indefinitely with NEAT

Cited by 5 publications (5 citation statements)
References 12 publications
“…This research [5] proposes a minimal training strategy that uses the NEAT neuro-evolutionary method to generate autonomous virtual players for the Flappy Bird game. NEAT was used to find the simplest neural network architecture capable of playing the game flawlessly.…”
Section: Literature Survey (mentioning)
confidence: 99%
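As a rough illustration of the kind of NEAT setup this citation describes, the sketch below uses the neat-python library to evolve a controller for a toy Flappy-like episode. The physics constants, the two-input state encoding (vertical offset to the pipe gap and bird velocity), and the config file name "neat-flappy.cfg" are assumptions made for this sketch, not details taken from the cited paper.

```python
import random
import neat


def play_flappy(net, max_frames=2000):
    # Toy Flappy-like episode (assumed physics, not the cited paper's simulator).
    # The network sees two inputs: vertical offset to the pipe-gap centre and
    # the bird's vertical velocity. Fitness is the number of frames survived.
    bird_y, velocity = 0.5, 0.0
    gap_y, pipe_x = random.uniform(0.3, 0.7), 1.0
    for frame in range(max_frames):
        flap = net.activate((gap_y - bird_y, velocity))[0] > 0.5
        velocity = 0.03 if flap else velocity - 0.004   # flap impulse vs. gravity
        bird_y += velocity
        pipe_x -= 0.015
        if pipe_x < 0.0:                                # pipe cleared, spawn the next one
            gap_y, pipe_x = random.uniform(0.3, 0.7), 1.0
        out_of_bounds = bird_y < 0.0 or bird_y > 1.0
        hit_pipe = pipe_x < 0.05 and abs(bird_y - gap_y) > 0.12
        if out_of_bounds or hit_pipe:
            return frame
    return max_frames


def eval_genomes(genomes, config):
    # NEAT assigns a single fitness value per genome.
    for _, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        genome.fitness = float(play_flappy(net))


if __name__ == "__main__":
    # "neat-flappy.cfg" is an assumed config file declaring 2 inputs and 1 output.
    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         "neat-flappy.cfg")
    winner = neat.Population(config).run(eval_genomes, 30)
    print("Best genome:\n", winner)
```

Because NEAT starts from minimal topologies and only adds nodes and connections when they pay off, a setup like this tends to converge on very small networks, which matches the minimal-architecture result the citation highlights.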
“…The fitness function is a weighted average that combines scenario-specific components across multiple scenarios. The neural network achieved near-perfect behaviour in the game, reaching this level in around 20 generations [21]. But in 2020, a new Statistical Forward Planning (SFP) method was presented.…”
Section: Related Work (mentioning)
confidence: 99%
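The weighted-average, multi-scenario fitness mentioned in this citation could be organized roughly as in the sketch below. The scenario names, weights, and the run_scenario callable are illustrative assumptions; the cited paper's actual scenario components and weighting are not reproduced here.

```python
def weighted_fitness(net, scenarios, run_scenario):
    # Combine scenario-specific scores into one fitness value.
    # `scenarios` maps a scenario identifier to its weight; `run_scenario`
    # plays one episode under that scenario and returns a raw score.
    total_weight = sum(scenarios.values())
    weighted_sum = sum(weight * run_scenario(net, scenario)
                       for scenario, weight in scenarios.items())
    return weighted_sum / total_weight


# Illustrative usage with made-up scenarios and weights:
# genome.fitness = weighted_fitness(net,
#                                   {"default": 1.0, "narrow_gaps": 2.0, "fast_pipes": 1.0},
#                                   run_scenario)
```

Weighting scenarios this way rewards genomes that handle the harder configurations rather than overfitting to the easiest one.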
“…Neuroevolution and reinforcement learning algorithms are among those used to create AI bots or artificial agents. [1], [7] and [8] implement a neuroevolution configuration of an ANN. The algorithm does not depend on the individual actions taken by the agents, only on their overall performance.…”
Section: Literature Survey (mentioning)
confidence: 99%
“…The authors in [3], [4], [6] and [7] use a DNN to extract features from the game frames, which form the input to the agent. In contrast, [1], [5] and [8] use the game state itself, placing the agent so that it perceives its surroundings directly. Several combinations of reinforcement learning algorithms are possible, such as Deep Neural Networks (DNN), Long Short-Term Memory (LSTM), Deep Q-Networks (DQN) and the like.…”
Section: Literature Survey (mentioning)
confidence: 99%
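To make the distinction concrete, the sketch below shows the game-state style of input attributed to [1], [5] and [8]: the agent's input vector is built directly from the simulator's variables rather than extracted from rendered frames by a DNN. The particular fields (bird height, velocity, distance to the next pipe and its gap) are an assumed encoding, not one taken from the cited papers.

```python
from dataclasses import dataclass


@dataclass
class FlappyState:
    # Assumed snapshot of the game exposed directly to the agent.
    bird_y: float         # bird's vertical position (normalized 0..1)
    bird_velocity: float  # current vertical velocity
    pipe_dx: float        # horizontal distance to the next pipe
    gap_y: float          # vertical centre of the next pipe's gap


def state_to_inputs(state: FlappyState) -> tuple:
    # Build the agent's input vector from the game state itself,
    # instead of extracting features from rendered frames with a DNN.
    return (state.gap_y - state.bird_y,  # vertical offset to the gap
            state.pipe_dx,               # how close the next pipe is
            state.bird_velocity)
```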