2021
DOI: 10.3390/e23101311
Improved Deep Q-Network for User-Side Battery Energy Storage Charging and Discharging Strategy in Industrial Parks

Abstract: Battery energy storage technology is an important part of industrial parks' efforts to ensure a stable power supply, but its rough charging and discharging mode struggles to meet application requirements for energy saving, emission reduction, cost reduction, and efficiency improvement. As a classic deep reinforcement learning method, the deep Q-network is widely used to solve the problem of user-side battery energy storage charging and discharging. In some scenarios, its performance has reached the level of…
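The truncated abstract describes a deep Q-network that schedules user-side battery charging and discharging. As a loose illustration of the underlying idea only (plain tabular Q-learning, not the paper's improved DQN), the sketch below has an agent learn to charge when electricity is cheap and discharge when it is expensive; all prices, capacities, hyperparameters, and the reward shaping are assumptions for demonstration.

```python
import numpy as np

# Hypothetical 4-slot time-of-use tariff and a small battery; every number
# here is an assumption chosen for illustration, not taken from the paper.
PRICES = [0.2, 0.2, 0.8, 0.8]   # price per unit of energy in each time slot
CAPACITY = 3                     # state of charge (SoC) in {0, ..., 3}
ACTIONS = [-1, 0, 1]             # discharge one unit, idle, charge one unit

rng = np.random.default_rng(0)
# Q-table indexed by (time slot, SoC, action)
Q = np.zeros((len(PRICES), CAPACITY + 1, len(ACTIONS)))

def step(t, soc, a_idx):
    """Apply one action; reward = discharge revenue minus charging cost."""
    new_soc = min(max(soc + ACTIONS[a_idx], 0), CAPACITY)
    energy = new_soc - soc          # energy actually moved (clamped at limits)
    return new_soc, -energy * PRICES[t]

alpha, gamma, eps = 0.1, 0.95, 0.2
for _ in range(5000):               # epsilon-greedy Q-learning episodes
    soc = 0
    for t in range(len(PRICES)):
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[t, soc]))
        new_soc, r = step(t, soc, a)
        future = 0.0 if t == len(PRICES) - 1 else np.max(Q[t + 1, new_soc])
        Q[t, soc, a] += alpha * (r + gamma * future - Q[t, soc, a])
        soc = new_soc

# Greedy rollout: the learned policy charges while cheap, discharges while dear.
soc, total = 0, 0.0
for t in range(len(PRICES)):
    soc, r = step(t, soc, int(np.argmax(Q[t, soc])))
    total += r
print(total)   # net profit of the learned schedule
```

A real DQN replaces the table with a neural network over continuous SoC and price features, but the price-arbitrage objective sketched here is the same.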

Cited by 10 publications (12 citation statements)
References 28 publications
“…In order to reduce dependence on the model, data-driven methods have been proposed extensively to solve the optimized dispatching problem of microgrids and to improve economic objectives as far as possible under various constraints. Chen et al. proposed a modified deep Q-network to solve the charging and discharging problem of user-side battery energy storage, reducing the cost and energy consumption of charging and discharging in the industrial park [12]. Paudyal et al. transformed the mixed-integer nonlinear programming problem into a nonlinear programming problem based on actual needs and established a three-phase distribution optimal power flow model, thereby reducing the computational burden and facilitating hierarchical implementation and application [13].…”
Section: Related Work
confidence: 99%
“…The voltage V m of the microgrid is generally 220 V. At the same time, the line loss of the h-th transmission line at time t can be calculated based on Formulas (12) and (13).…”
Section: Operation Conditions Constraints
confidence: 99%
“…It currently performs well on many decision-based problems, with numerous applications in games and other fields [21,22,23]. At the same time, we believe that applying reinforcement learning to NLP still has great potential.…”
Section: Related Work
confidence: 99%
“…For example, we proposed a multistep adaptive dynamic programming algorithm, a type of reinforcement learning, for cooperative target tracking in energy-harvesting WSNs to schedule sensors over an infinite horizon (Liu et al., 2020). We also proposed an improved deep Q-network approach for a user-side battery energy storage charging and discharging strategy to reduce the cost and energy consumption of charging and discharging actions (Chen et al., 2021). In Chen et al. (2022), we proposed an approach that combined reinforcement learning and traditional optimization methods to address insufficient performance in terms of economic operation and efficient dispatching.…”
Section: Introduction
confidence: 99%
“…In our previous work (Liu et al., 2020; Chen et al., 2021, 2022; Jiang et al., 2022), reinforcement learning methods have been successfully applied to the charging scheduling problems of microgrids, energy-harvesting WSNs, and WRSNs. For example, we proposed a multistep adaptive dynamic programming algorithm, a type of reinforcement learning, for cooperative target tracking in energy-harvesting WSNs to schedule sensors over an infinite horizon (Liu et al., 2020).…”
Section: Introduction
confidence: 99%