2017 Chinese Automation Congress (CAC)
DOI: 10.1109/cac.2017.8243989

Double DQN method for object detection

Cited by 14 publications (2 citation statements)
References 5 publications
“…For a specific power transformer, we should consider the different conditions and adapt the weights to be more inclined to predict the main fault of that one. For this reason, we employ the idea of experience replay, which is used in the reinforcement learning model DQN [29].…”
Section: Experience Replay
confidence: 99%
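
The experience-replay idea referenced in the statement above can be illustrated with a short sketch. The following is a minimal Python example of a transition buffer; the class and method names are illustrative assumptions, not drawn from the cited paper or the citing work.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer of (state, action, reward, next_state, done) transitions."""

    def __init__(self, capacity=10000):
        # deque with maxlen evicts the oldest transition once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation between
        # consecutive transitions, which is what stabilizes DQN-style training.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Training then alternates between storing new transitions with push and drawing decorrelated mini-batches with sample once the buffer holds at least batch_size entries.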
“…The Double DQN is an extension of the DQN in which action selection and Q-value estimation are decoupled by using separate neural networks, solving the inherent problem of overestimating Q-values. In essence, the Double DQN improves the stability and convergence rate of the RL agent, as presented by [70,277].…”
Section: Training Algorithms
confidence: 99%
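
The decoupling described in this statement can be sketched in a few lines. Below is a minimal Python/PyTorch example of the Double DQN target computation, assuming online_net and target_net map a batch of states to per-action Q-values; all names are illustrative assumptions, not the implementation from the cited paper.

```python
import torch

def double_dqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    """Double DQN target: the online network selects the greedy action,
    the target network evaluates it, reducing Q-value overestimation."""
    with torch.no_grad():
        # Action selection with the online network
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        # Action evaluation with the (periodically synced) target network
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        # Bootstrapped target; (1 - done) zeroes the future term at episode end
        return rewards + gamma * (1.0 - dones) * next_q
```

Vanilla DQN would instead take the maximum of target_net(next_states) for both selection and evaluation; that coupling is what produces the overestimation bias the statement refers to.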