2020 IEEE International Symposium on Electromagnetic Compatibility & Signal/Power Integrity (EMCSI)
DOI: 10.1109/emcsi38923.2020.9191512
An Enhanced Deep Reinforcement Learning Algorithm for Decoupling Capacitor Selection in Power Distribution Network Design

Cited by 18 publications (6 citation statements) | References 14 publications
“…According to physical understanding, decap locations closer to the IC are more effective as they provide current return paths with smaller loop inductances. Therefore, the method proposed in [12], [16] is used to quantitatively evaluate the priority of each decap location.…”
Section: A. Port Prioritization Based on Physical Inductance (mentioning)
Confidence: 99%
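The quoted prioritization can be made concrete by ranking candidate ports by the loop inductance each forms with the IC port, so that a smaller loop means a higher priority. Below is a minimal Python sketch of that idea; the port inductance matrix `L`, the function name, and the two-port metric L_ii + L_jj - 2*L_ij are illustrative assumptions, not the exact formulation of [12], [16].

```python
import numpy as np

def prioritize_ports(L, ic_port=0):
    """Rank candidate decap ports by loop inductance seen from the IC port.

    L       : (N, N) symmetric port inductance matrix (self terms on the
              diagonal, mutual terms off-diagonal) -- hypothetical input,
              e.g. extracted from a PDN model.
    ic_port : index of the IC observation port.

    Priority metric: loop inductance L_ii + L_jj - 2*L_ij between the IC
    port and each candidate; smaller loop => higher priority.
    """
    n = L.shape[0]
    candidates = [p for p in range(n) if p != ic_port]
    loop_l = {p: L[ic_port, ic_port] + L[p, p] - 2.0 * L[ic_port, p]
              for p in candidates}
    return sorted(candidates, key=loop_l.get)

# Illustrative 3-port example (values in nH, made up):
L = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.2, 0.1],
              [0.2, 0.1, 0.9]])
print(prioritize_ports(L))  # [1, 2]: a 1.0 nH loop outranks a 1.5 nH loop
```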
“…To solve this problem, a concept of sub-priority for each decap type is proposed. Namely, for each decap type, the sub-priorities of the locations that allow this decap type are calculated using the approach proposed in [12]. When all the decap types are allowed at each location, the sub-priority lists for different decap types will be identical to the global priority without considering package size constraints.…”
Section: B. Initial Solution Determination (mentioning)
Confidence: 99%
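The sub-priority construction described above amounts to filtering one global priority list per decap type. A small sketch under assumed data structures (the `allowed_types` mapping and the function name are hypothetical):

```python
def sub_priorities(global_priority, allowed_types):
    """Build one priority list per decap type from a global port ranking.

    global_priority : ports ordered from highest to lowest priority.
    allowed_types   : dict mapping port -> set of decap types that fit
                      there (e.g. after package-size screening).
    """
    all_types = set().union(*allowed_types.values())
    # Keep the global ordering, but drop ports that exclude the type.
    return {t: [p for p in global_priority if t in allowed_types[p]]
            for t in all_types}
```

If every port allows every decap type, each sub-list collapses to the global priority list, matching the special case noted in the quoted statement.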
“…Deep Reinforcement Learning-based Methods. Several DRL methods for solving the DPP have been proposed since the work of [20], which used Q-learning; approaches with a convolutional neural network (CNN)-based Q-approximator were later proposed in [21, 22]. However, these methods were sample-inefficient and their trained policies were not reusable; if the DPP task changes, they must be re-trained.…”
Section: Related Work (mentioning)
Confidence: 99%
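For orientation, the Q-learning family mentioned in the quote follows the standard tabular update, with the CNN-based approaches of [21, 22] replacing the table by a learned approximator. The sketch below is a generic Q-learning skeleton, not the specific method of [20]; the `env` interface (`reset`, `legal_actions`, `step`) is a hypothetical gym-style wrapper around a DPP simulator whose reward reflects target-impedance satisfaction.

```python
import random
from collections import defaultdict

def q_learning_dpp(env, episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning skeleton for a decap placement problem (DPP).

    Assumed (hypothetical) env interface:
      reset() -> state                      (state must be hashable)
      legal_actions(state) -> non-empty list of decap placements
      step(action) -> (next_state, reward, done)
    """
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.legal_actions(state)
            if random.random() < eps:                  # explore
                action = random.choice(actions)
            else:                                      # exploit
                action = max(actions, key=lambda a: Q[(state, a)])
            nxt, reward, done = env.step(action)
            best_next = 0.0 if done else max(
                Q[(nxt, a)] for a in env.legal_actions(nxt))
            # Standard one-step Q-learning update.
            Q[(state, action)] += alpha * (
                reward + gamma * best_next - Q[(state, action)])
            state = nxt
    return Q
```

The sample inefficiency and non-reusability criticized in the quote are visible here: the table (or its CNN replacement) is tied to one fixed task, so a changed board or decap library requires retraining from scratch.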