2020
DOI: 10.1016/j.ifacol.2020.12.767
Model-Free Optimization Scheme for Efficiency Improvement of Wind Farm Using Decentralized Reinforcement Learning

Cited by 11 publications (7 citation statements)
References 17 publications
“…Many wind farm control methods based on reinforcement learning (RL) use the Q-learning algorithm [64]. Additionally, distributed Q-learning has been developed for optimizing farm-level power production [65], with strategies to avoid abrupt changes in control variables. Further research has proposed distributed RL algorithms for increasing power generation through yaw angle control [49], and concepts like gradient approximation and incremental comparison in RL for optimal control actions [66].…”
Section: Ref Yearmentioning
confidence: 99%
“…Many RL-based wind farm control methods are based on the Q-learning algorithm [131]. A decentralized Q-learning algorithm was developed in [132] for optimizing farm-level power production; this scheme avoids sharp changes in control variables.…”
Section: Power Generation Maximization Via Rlmentioning
confidence: 99%
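The decentralized Q-learning scheme described in the excerpt above can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation: the class name `TurbineAgent`, the hyperparameters, and the reward signal are all assumptions. The key idea it demonstrates is restricting each agent's actions to small setpoint steps, which is one way to avoid sharp changes in the control variable.

```python
import random
from collections import defaultdict

class TurbineAgent:
    """Hypothetical per-turbine Q-learning agent (illustrative sketch).

    The state is the turbine's own discrete setpoint index; actions are
    restricted to {-1, 0, +1} setpoint steps, so the control variable can
    never jump abruptly between updates.
    """
    def __init__(self, n_setpoints=10, alpha=0.1, gamma=0.9, eps=0.1):
        self.n = n_setpoints
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.state = n_setpoints // 2          # start mid-range
        self.q = defaultdict(float)            # Q[(state, action)] -> value

    def act(self):
        # Only neighboring setpoints are reachable in one step.
        actions = [a for a in (-1, 0, 1) if 0 <= self.state + a < self.n]
        if random.random() < self.eps:
            return random.choice(actions)      # explore
        return max(actions, key=lambda a: self.q[(self.state, a)])

    def update(self, action, reward):
        # Standard tabular Q-learning update; reward would be the
        # measured farm-level power (assumed feedback signal).
        s, s2 = self.state, self.state + action
        best_next = max(self.q[(s2, a)] for a in (-1, 0, 1))
        self.q[(s, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(s, action)])
        self.state = s2
```

In a decentralized setup, one such agent would run per turbine, each receiving the shared farm-power measurement as its reward after every joint step.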
“…In the last decade, significant progress has been achieved by RL methods on many sequential decision-making problems, with successful applications in fields including games, robotics, and biology. By formulating farm power output maximization as a distributed optimization problem, decentralized multi-agent RL solutions have led to significant increases in total power production in experiments with both static [1,4,5] and dynamic wind farm simulators [6,7]. In particular, decentralized learning approaches bypass the exponential dependency of the search space on the number of turbines, and promise to be more tractable by modeling every turbine as an agent following a local learning algorithm [5,7–9].…”
Section: Introductionmentioning
confidence: 99%
“…By formulating farm power output maximization as a distributed optimization problem, decentralized multi-agent RL solutions have led to significant increases in total power production in experiments with both static [1,4,5] and dynamic wind farm simulators [6,7]. In particular, decentralized learning approaches bypass the exponential dependency of the search space on the number of turbines, and promise to be more tractable by modeling every turbine as an agent following a local learning algorithm [5,7–9]. Yet new evaluations on dynamic simulations suggest that convergence time remains too long for online learning on real wind farms [7,10].…”
Section: Introductionmentioning
confidence: 99%
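The "exponential dependency of the search space on the number of turbines" mentioned above can be made concrete with a quick calculation. The numbers below (10 discrete setpoints per turbine, 20 turbines) are illustrative assumptions, not values from the paper.

```python
# A centralized learner searching over joint setpoints of all turbines
# faces K**N combinations, while decentralized learning stores only K
# entries per agent, i.e. N*K in total.
K = 10   # discrete setpoints per turbine (assumed)
N = 20   # number of turbines in the farm (assumed)

centralized = K ** N     # joint action space: 10**20 combinations
decentralized = N * K    # total table entries across all agents: 200
```

This gap is why per-turbine agents with local learning rules remain tractable as farms grow, even though, as the last excerpt notes, their convergence time can still be too long for online use.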