2014 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS) 2014
DOI: 10.1109/infcomw.2014.6849306

Dynamic pricing for smart grid with reinforcement learning

Abstract: In the smart grid system, dynamic pricing can be an efficient tool for the service provider, enabling efficient and automated management of the grid. In practice, however, the lack of information about the customers' time-varying load demand and energy consumption patterns, together with the volatility of electricity prices in the wholesale market, makes the implementation of dynamic pricing highly challenging. In this paper, we study a dynamic pricing problem in the smart grid system where the service provider decides …

Cited by 31 publications (11 citation statements). References 10 publications (21 reference statements).
“…Q-learning is also used for optimization of energy consumption and delay in [286]. AIoT systems for energy storage management and the energy trading process are discussed in [287], [288] and ([289], [290], [291], [292], [293], [294]), respectively. Some other applications of Q-learning for AIoT may be seen in ([295], [296], [297], [298], [299], [300], [301], [302], [303], [304], [305]).…”
Section: E. Autonomous IoT (mentioning)
confidence: 99%
“…Paper 74 proposed an RL-based algorithm for dynamic pricing that does not require a feed of data about the dynamic system. Paper 81 uses an RL algorithm with Nash equilibrium (NE) to develop energy-trading games among different players (consumers) for a dynamic pricing approach. Q-learning with a broker-agent architecture 82 has likewise been applied to dynamic pricing in the grid market.…”
Section: Literature Review (mentioning)
confidence: 99%
“…Wu et al (Wu, Joseph, and Russell 2016) use tabular Q-learning to set prices for Uber-type on-demand economies, discretizing prices into ranges as actions. Kim et al (Kim et al 2014) use tabular Q-learning to decide electricity prices in smart grids, charging users for their energy consumption. These studies mostly consider simpler models and use tabular RL to handle the relatively small state and action spaces.…”
Section: Background and Related Work (mentioning)
confidence: 99%
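The citation statements above describe tabular Q-learning with discretized prices as actions. The sketch below illustrates that general idea only; it is not the model from the cited paper. The demand dynamics, price tiers, reward function, and all parameter values here are hypothetical placeholders chosen for a runnable toy example.

```python
import random
from collections import defaultdict

# Toy tabular Q-learning for price setting (hypothetical environment).
# State: a discretized customer demand level; action: a discretized price tier.
N_DEMAND_LEVELS = 5                    # assumed state discretization
PRICE_TIERS = [0.1, 0.2, 0.3, 0.4]     # assumed price tiers (actions)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning parameters

def step(demand, price):
    """Made-up dynamics: higher prices suppress consumption; reward is revenue."""
    consumed = max(0, demand - int(price * 10) + random.randint(-1, 1))
    reward = price * consumed
    next_demand = random.randint(0, N_DEMAND_LEVELS - 1)
    return next_demand, reward

def train(episodes=2000, seed=0):
    random.seed(seed)
    q = defaultdict(float)             # Q-table keyed by (state, action index)
    demand = random.randint(0, N_DEMAND_LEVELS - 1)
    for _ in range(episodes):
        # epsilon-greedy action selection over price tiers
        if random.random() < EPSILON:
            a = random.randrange(len(PRICE_TIERS))
        else:
            a = max(range(len(PRICE_TIERS)), key=lambda i: q[(demand, i)])
        nxt, r = step(demand, PRICE_TIERS[a])
        # standard Q-learning update
        best_next = max(q[(nxt, i)] for i in range(len(PRICE_TIERS)))
        q[(demand, a)] += ALPHA * (r + GAMMA * best_next - q[(demand, a)])
        demand = nxt
    return q

q = train()
# Greedy pricing policy: one price-tier index per demand level.
policy = {s: max(range(len(PRICE_TIERS)), key=lambda i: q[(s, i)])
          for s in range(N_DEMAND_LEVELS)}
print(policy)
```

Discretizing both demand and price keeps the state-action space small enough for a plain Q-table, which is exactly why the surveyed works could use tabular rather than deep RL.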