2022
DOI: 10.1016/j.renene.2021.11.052
Optimization of the Operation and Maintenance of renewable energy systems by Deep Reinforcement Learning

Cited by 43 publications (9 citation statements)
References 66 publications
“…Thirdly, immersive learning experience is not enough and multidirectional. The multidirectional and integrated teaching context calls for a deeper physical experience of blended teaching [27]. That is the development trend of hybrid teaching in China.…”
Section: Status Of Blended Learning
Confidence: 99%
“…Although the agent could in principle find the optimal O&M policy through direct interaction with the real-world system, this turns out to be unfeasible for CPES for economic, safety, and time reasons: the trial-and-error nature of the learning process requires performing the actions suggested by the algorithm many times to explore the solution space, leading to economically inconvenient and unsafe system management in the early stage of learning (when the actions are not yet optimal); thus, the learning agent is typically trained using a white-box environment model of the system of interest [4].…”
Section: The Environment Model
Confidence: 99%
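The statement above describes the standard pattern of training an RL agent against a simulated environment model rather than the real plant, so that the costly and unsafe trial-and-error phase happens offline. A minimal sketch of this idea follows; the `ComponentModel` dynamics, rewards, and the tabular Q-learning loop are illustrative assumptions, not the cited paper's actual CPES model or algorithm.

```python
import random

class ComponentModel:
    """Toy white-box model of a degrading component (assumed dynamics,
    not the cited paper's CPES model)."""
    def __init__(self):
        self.wear = 0  # 0 = new, 3 = failed

    def step(self, action):
        # action 0: keep running, action 1: preventive maintenance
        if action == 1:
            self.wear = 0
            return -5.0  # maintenance cost
        self.wear = min(self.wear + 1, 3)
        # large penalty on failure, production reward otherwise
        return -50.0 if self.wear == 3 else 10.0

def train(episodes=500, eps=0.1, alpha=0.5, gamma=0.9):
    """Tabular Q-learning against the simulated model: all the unsafe
    exploratory actions are taken on the model, never on the real system."""
    q = {(w, a): 0.0 for w in range(4) for a in (0, 1)}
    for _ in range(episodes):
        env = ComponentModel()
        for _ in range(20):
            s = env.wear
            # epsilon-greedy exploration (safe here: it is only a model)
            if random.random() < eps:
                a = random.choice((0, 1))
            else:
                a = max((0, 1), key=lambda x: q[(s, x)])
            r = env.step(a)
            s2 = env.wear
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
    return q
```

After training, the learned Q-values favour preventive maintenance once wear is high, which is the kind of O&M policy the agent is meant to discover in simulation before any real-world deployment.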
“…The main idea is to avoid overly large policy updates, which can increase the probability of accidental performance collapses. PPO is considered relatively easy to implement and tune, and despite its simplicity it has been shown to outperform many state-of-the-art approaches on discrete and continuous benchmarks [35] and in several applications across research fields, such as supply chains [47], autonomous vehicles [48] and power production plants [4,[30][31][32].…”
Section: Reinforcement Learning Algorithms
Confidence: 99%
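The quoted passage summarizes PPO's core mechanism: the surrogate objective is clipped so that a single update cannot move the policy too far from the old one. A minimal NumPy sketch of the clipped surrogate (as defined in the PPO paper; the function name and scalar interface are illustrative):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective (to be maximized).

    ratio:     pi_new(a|s) / pi_old(a|s) probability ratios
    advantage: advantage estimates A(s, a)
    eps:       clip range, limiting how far one update can move the policy
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Taking the elementwise minimum makes the objective pessimistic:
    # ratios outside [1-eps, 1+eps] earn no extra gain, so large,
    # potentially destructive policy updates are discouraged.
    return np.minimum(unclipped, clipped).mean()
```

For example, with a positive advantage a ratio of 1.5 is clipped down to 1.2 (for `eps=0.2`), which is exactly the "avoid too large policy updates" behaviour the passage describes.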