2019
DOI: 10.1109/tsg.2019.2903756

Deep Reinforcement Learning for Joint Bidding and Pricing of Load Serving Entity

Cited by 101 publications (38 citation statements)
References 20 publications
“…To reduce the revenue loss of wind power producers, [92] adopts the A3C algorithm for the strategic bidding of wind power producers participating in the energy and reserve markets. Reference [93] formulates the joint bidding problem over energy volume and price as an MDP, which is then solved by the DDPG algorithm. A neural network (NN) is used to learn a response function and extract the state-transition pattern from historical data in a supervised manner.…”
Section: Electricity Market (mentioning)
confidence: 99%
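The cited formulation treats the LSE's bid volume and retail price as one continuous action selected by a deterministic actor. As a rough illustration only (the state features, network sizes, and the max_volume/max_price bounds below are hypothetical assumptions, not details taken from reference [93]), a minimal DDPG-style actor could look like this:

import torch
import torch.nn as nn

class BidPriceActor(nn.Module):
    # Deterministic actor mapping a market state to a continuous
    # (energy bid volume, retail price) action, DDPG-style.
    # All dimensions and bounds here are illustrative assumptions.
    def __init__(self, state_dim=8, hidden=64, max_volume=100.0, max_price=50.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2), nn.Tanh(),   # two outputs in [-1, 1]
        )
        self.scale = torch.tensor([max_volume, max_price])

    def forward(self, state):
        # Map tanh outputs from [-1, 1] to [0, max] for volume and price.
        return 0.5 * (self.net(state) + 1.0) * self.scale

actor = BidPriceActor()
state = torch.randn(1, 8)               # e.g. forecast load, prices, time-of-day features
bid_volume, price = actor(state)[0]     # continuous joint action

In a full DDPG agent this actor would be trained jointly with a critic Q(s, a) using the policy-gradient step quoted in the next statement.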
“…In (16), the chain rule is applied to compute the gradient of the action value with respect to the weights of the actor network.…”
Section: B. DDPG Algorithm for Continuous Control (mentioning)
confidence: 99%
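The chain rule referred to here is the standard deterministic policy gradient used in DDPG; in the usual notation (not the cited paper's equation numbering), with actor \mu(s \mid \theta^{\mu}) and critic Q(s, a \mid \theta^{Q}):

\nabla_{\theta^{\mu}} J \approx \mathbb{E}_{s}\!\left[\, \nabla_{a} Q(s, a \mid \theta^{Q})\big|_{a = \mu(s \mid \theta^{\mu})} \; \nabla_{\theta^{\mu}} \mu(s \mid \theta^{\mu}) \,\right]

That is, the gradient of the action value with respect to the actor weights is factored through the action produced by the actor.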
“…In power systems, the potential of implementing deep RL for demand-side energy management and electric vehicle charging/discharging scheduling is shown in [14], [15]. The deep deterministic policy gradient (DDPG) algorithm is applied to solve the bidding problems of a load serving entity and of GENCOs in [16], [17].…”
Section: Introduction (mentioning)
confidence: 99%
“…In [23], a model-free method based on DRL with a continuous action space was introduced to replace the traditional linear controller for load frequency control. Reference [27] applied the deep deterministic policy gradient algorithm to solve the joint bidding and pricing problem of the load-serving entity. Similarly, autonomous voltage control strategies based on DQN and DDPG were proposed in [24] to support grid operators in taking effective control actions.…”
Section: B. DRL-Based Power System Management (mentioning)
confidence: 99%
“…Unlike traditional reinforcement learning, DRL algorithms use powerful deep neural networks to approximate the value function (such as the Q-table), enabling automatic high-dimensional feature extraction and end-to-end learning. Recently, the advantages of DRL have been recognized by the community, and attempts have been made to leverage DRL in various applications for the electrical grid, including operational control [21]-[24], electricity markets [25], [26], demand response [27], and energy management [28]. Although these applications presented advantageous results in their respective fields, several challenges were encountered.…”
mentioning
confidence: 99%
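To make the contrast with a tabular Q-table concrete, the sketch below shows the kind of value-function approximator a DQN-style agent uses in place of a lookup table; the state dimension, action count, and layer sizes are illustrative assumptions only.

import torch
import torch.nn as nn

class QNetwork(nn.Module):
    # Approximates Q(s, a) for every discrete action in one forward pass,
    # replacing a lookup table over (state, action) pairs.
    # Dimensions below are arbitrary illustrative choices.
    def __init__(self, state_dim=10, n_actions=4, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),   # one Q-value per action
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork()
state = torch.randn(1, 10)                   # raw high-dimensional observation
greedy_action = q_net(state).argmax(dim=1)   # features are learned end-to-end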