2020
DOI: 10.48550/arxiv.2009.14456
Preprint

Strategy and Benchmark for Converting Deep Q-Networks to Event-Driven Spiking Neural Networks

Cited by 3 publications (4 citation statements)
References 28 publications
“…Due to the limitation of resources, we choose 17 top-performing Atari games selected by [19] to test the method. The network architecture and hyper-parameters are kept identical across all 17 games.…”
Section: Experimental Settings
confidence: 99%
“…The DSQN is compared with DQN and the converted SNN (ANN-SNN) [19], where DQN and ANN-SNN are re-run under the same experimental setting for a fair comparison. DSQN performs well on most games, as shown in Figure 6 (see the supplementary for the learning curves of all Atari games).…”
Section: The Performance of DSQNs on Atari Games
confidence: 99%
“…These approaches are typically based on reward-modulated local plasticity rules that perform well in simple control tasks but commonly fail in complex robotic control tasks due to limited optimization capability. Some methods directly convert Deep Q-Networks (DQNs) [6] to SNNs and achieve competitive scores on Atari games with discrete action spaces [34,35].…”
Section: Related Work
confidence: 99%
“…Because of the difficulty of optimization and the learning latency, SNN-based agents are challenging to train on reinforcement learning tasks. The ANN-to-SNN conversion method (Rueckauer et al., 2017) has been used to implement DQN with a spiking neural network (Patel et al., 2019; Tan et al., 2020). They first trained an ANN-based DQN policy and then transferred the network weights to an SNN, using the SNN to play Atari games, as shown in Figure 1.…”
Section: Introduction
confidence: 99%
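The workflow quoted above (train an ANN-based DQN, copy its weights into a spiking network, then act from accumulated spikes) can be sketched roughly as follows. This is a minimal illustration only, not the method of the cited papers: it assumes a small fully connected DQN with ReLU activations, simple integrate-and-fire neurons with soft reset, constant-current input coding, and rate coding over a fixed number of timesteps. The layer sizes, threshold, and simulation length are placeholder values.

```python
import torch
import torch.nn as nn

# A small ANN-based DQN (placeholder architecture, not the one from the cited papers).
class DQN(nn.Module):
    def __init__(self, obs_dim=8, n_actions=4):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim, 64)
        self.fc2 = nn.Linear(64, 64)
        self.out = nn.Linear(64, n_actions)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.out(x)

# Rate-coded integrate-and-fire SNN that reuses the trained DQN's weights.
class SpikingDQN(nn.Module):
    def __init__(self, dqn: DQN, threshold=1.0, timesteps=100):
        super().__init__()
        # Weight transfer: the ANN's layers are reused; ReLU units become IF neurons.
        self.fc1, self.fc2, self.out = dqn.fc1, dqn.fc2, dqn.out
        self.threshold = threshold
        self.timesteps = timesteps

    def forward(self, x):
        # Membrane potentials for the two hidden layers and an accumulator for Q-values.
        v1 = torch.zeros(x.shape[0], self.fc1.out_features)
        v2 = torch.zeros(x.shape[0], self.fc2.out_features)
        q_acc = torch.zeros(x.shape[0], self.out.out_features)
        for _ in range(self.timesteps):
            # Constant-current input coding: the observation drives layer 1 at every step.
            v1 = v1 + self.fc1(x)
            s1 = (v1 >= self.threshold).float()
            v1 = v1 - s1 * self.threshold          # soft reset by subtraction

            v2 = v2 + self.fc2(s1)
            s2 = (v2 >= self.threshold).float()
            v2 = v2 - s2 * self.threshold

            # The output layer is left non-spiking; its responses are averaged over time.
            q_acc = q_acc + self.out(s2)
        return q_acc / self.timesteps

# Usage sketch: convert a trained (here untrained, for brevity) DQN and act greedily
# on the spike-rate-estimated Q-values.
ann = DQN()
snn = SpikingDQN(ann, threshold=1.0, timesteps=100)
obs = torch.rand(1, 8)
action = snn(obs).argmax(dim=1)
```

With more simulation timesteps the spike-rate estimate of each Q-value approaches the ANN's ReLU activations, which is the usual accuracy-versus-latency trade-off in rate-coded ANN-to-SNN conversion.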