2017
DOI: 10.1007/s40565-017-0323-y
Distributed reinforcement learning to coordinate current sharing and voltage restoration for islanded DC microgrid

Abstract: A novel distributed reinforcement learning (DRL) strategy is proposed in this study to coordinate current sharing and voltage restoration in an islanded DC microgrid. Firstly, a reward function considering both equal proportional current sharing and cooperative voltage restoration is defined for each local agent. The global reward of the whole DC microgrid, which is the sum of the local rewards, is regarded as the optimization objective for DRL. Secondly, by using the distributed consensus method, the predefined…
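As a rough illustration of the reward structure described in the abstract, the sketch below encodes a per-agent reward that penalizes deviation from equal proportional (per-unit) current sharing and deviation of the bus voltage from its nominal value, with the global objective taken as the sum of the local rewards. All function names, weights, and signal names are illustrative assumptions; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def local_reward(i_k, i_rating_k, i_all, i_ratings, v_bus, v_ref,
                 w_current=1.0, w_voltage=1.0):
    """Hypothetical reward for agent k: penalize (i) deviation of its
    per-unit output current from the group's average per-unit current
    (equal proportional sharing) and (ii) deviation of the bus voltage
    from the nominal reference (voltage restoration)."""
    pu = i_k / i_rating_k
    pu_avg = np.mean(np.asarray(i_all) / np.asarray(i_ratings))
    sharing_error = (pu - pu_avg) ** 2
    voltage_error = (v_bus - v_ref) ** 2
    # Higher reward for smaller errors.
    return -(w_current * sharing_error + w_voltage * voltage_error)

def global_reward(local_rewards):
    # The DRL optimization objective is the sum of the local rewards;
    # in a distributed setting each agent would estimate this sum via
    # a consensus protocol rather than a central aggregator.
    return sum(local_rewards)

# Example: three converters rated 10 A, 20 A, 10 A sharing a 20 A load
# on a nominal 48 V bus that has sagged to 47.8 V.
currents = [5.0, 10.0, 5.0]
ratings = [10.0, 20.0, 10.0]
rewards = [local_reward(i, r, currents, ratings, v_bus=47.8, v_ref=48.0)
           for i, r in zip(currents, ratings)]
print(global_reward(rewards))  # only the voltage-restoration term is penalized here
```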

Cited by 22 publications (11 citation statements)
References 31 publications
“…A policy gradient method was applied to make decisions with limited market information. For current and voltage control, [58] integrates the consensus method and deep reinforcement learning to coordinate distributed generators in an islanded microgrid. The distributed reactive power optimization was solved by collaborative equilibrium Q-learning to minimize operating cost and carbon emission [59].…”
Section: Category 3 Surrogate Model
confidence: 99%
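As context for the consensus method mentioned in the excerpt above, a minimal average-consensus sketch follows; the update rule, step size, and communication graph are generic textbook choices, not the specific scheme used in [58].

```python
import numpy as np

def consensus_step(x, adjacency, epsilon=0.2):
    """One synchronous consensus iteration: each agent nudges its local
    estimate toward its neighbors' estimates,
    x_i <- x_i + epsilon * sum_j a_ij * (x_j - x_i)."""
    x = np.asarray(x, dtype=float)
    A = np.asarray(adjacency, dtype=float)
    return x + epsilon * (A @ x - A.sum(axis=1) * x)

# Example: three agents on a line graph converge toward the average
# of their initial local measurements.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
x = np.array([1.0, 2.0, 6.0])
for _ in range(50):
    x = consensus_step(x, A)
print(x)  # every entry approaches 3.0, the initial average
```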
“…In the transient periods of turning a semiconductor off and on, switching energy loss occurs; it is the sum of the turn-on and turn-off switching energy losses, as given in (9): E_sw(n) = E_sw,on(n) + E_sw,off(n), where E_sw,on is the switch-on energy loss and E_sw,off is the switch-off energy loss. The switch-on and switch-off energy losses are also related to the semiconductor current and the junction temperature T_j, which is explained in (10).…”
Section: Semiconductor Power Loss Comparison of Selected ICs
confidence: 99%
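For clarity, a minimal numeric sketch of Eq. (9) from the excerpt above; the helper name and the example energy values are illustrative assumptions.

```python
def switching_energy_loss(e_sw_on, e_sw_off):
    """Total switching energy loss per switching event, per Eq. (9):
    E_sw(n) = E_sw,on(n) + E_sw,off(n).
    Both terms are in joules; in practice they also depend on the device
    current and the junction temperature T_j, as noted for Eq. (10)."""
    return e_sw_on + e_sw_off

# Example: a 1.2 mJ turn-on loss and a 0.9 mJ turn-off loss
# give 2.1 mJ of switching energy loss per event.
print(switching_energy_loss(1.2e-3, 0.9e-3))  # 0.0021 J
```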
“…In [6], the RL method is used for reactive power control. In [7], voltage restoration for an islanded microgrid is achieved via a distributed RL method. In [8], an application for disturbance classification is proposed based on image embedding and convolutional neural network (CNN).…”
Section: Introduction
confidence: 99%