2015
DOI: 10.1109/tsg.2015.2393059

Reinforcement Learning of Heuristic EV Fleet Charging in a Day-Ahead Electricity Market

Abstract: This paper addresses the problem of defining a day-ahead consumption plan for charging a fleet of electric vehicles (EVs), and following this plan during operation. A challenge herein is the beforehand unknown charging flexibility of EVs, which depends on numerous details about each EV (e.g., plug-in times, power limitations, battery size, power curve, etc.). To cope with this challenge, EV charging is controlled during operation by a heuristic scheme, and the resulting charging behavior of the EV fleet…
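The abstract's two-step idea (commit to a day-ahead aggregate plan, then follow it during operation with a heuristic) can be sketched roughly as below. The least-laxity-first rule, the EV fields, and all numbers here are illustrative assumptions, not the paper's actual heuristic.

```python
# Hypothetical sketch: follow a planned aggregate power level by splitting it
# over plugged-in EVs, serving the EVs with the least charging slack first.
# All field names and parameter values are illustrative, not from the paper.

def dispatch(plan_kw, evs, dt_h=1.0):
    """Split the planned aggregate power `plan_kw` over plugged-in EVs.

    Each EV is a dict with keys: need_kwh (remaining energy to charge),
    max_kw (charger power limit), slots_left (time slots until departure).
    """
    def laxity(ev):
        # Spare slots beyond the minimum needed to finish charging in time.
        slots_needed = ev["need_kwh"] / (ev["max_kw"] * dt_h)
        return ev["slots_left"] - slots_needed

    remaining = plan_kw
    allocation = {}
    # EVs with the tightest deadlines (least laxity) get power first.
    for i, ev in sorted(enumerate(evs), key=lambda p: laxity(p[1])):
        p = min(ev["max_kw"], ev["need_kwh"] / dt_h, remaining)
        allocation[i] = p
        remaining -= p
    return allocation

evs = [
    {"need_kwh": 10.0, "max_kw": 3.3, "slots_left": 4},   # tight deadline
    {"need_kwh": 5.0,  "max_kw": 3.3, "slots_left": 10},  # plenty of slack
]
alloc = dispatch(plan_kw=5.0, evs=evs)
```

In this toy run the tightly constrained EV charges at its full 3.3 kW and the flexible EV absorbs the remaining 1.7 kW, so the fleet tracks the 5 kW plan exactly.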

Cited by 169 publications (93 citation statements)
References 17 publications
“…Note that our work is different from [8] and [10] in two aspects: (i) unlike [8] and [10], our proposed approach does not take the control decisions in separate steps (i.e., taking aggregate energy consumption in one step and coordinating individual EV charging in a second step to meet the already decided energy consumption) and instead it takes decisions directly and jointly for all individual EVs using an efficient representation of an aggregate state of a group of EVs, hence (ii) our approach does not need a heuristic algorithm, but instead learns the aggregate load while finding an optimum policy to flatten the load curve. We now describe our MDP model, and subsequently the batch reinforcement learning approach to train it.…”
Section: Related Work (mentioning)
confidence: 79%
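The excerpt above describes learning directly over an aggregate state of the EV group with batch reinforcement learning, aiming to flatten the load curve. A minimal sketch of that idea is below, using tabular fitted Q-iteration on a toy aggregate state (number of EVs still needing charge); the state space, transition model, and reward are simplified assumptions, not the cited paper's model.

```python
# Illustrative batch (fitted) Q-iteration on an aggregated EV-fleet state.
# The binning, transitions, and reward below are toy assumptions chosen so
# the learned policy spreads charging out (load flattening).
import random
from collections import defaultdict

random.seed(0)

ACTIONS = [0, 1, 2]   # aggregate charging levels (number of EVs served)
GAMMA = 0.95

def step(state, action):
    """Toy transition: state = EVs still needing charge (0..5). New arrivals
    appear at random; the reward penalises squared load (to flatten it) and
    leftover backlog (to actually finish charging)."""
    next_state = max(0, min(5, state - action + random.choice([0, 1])))
    reward = -float(action ** 2) - 0.5 * next_state
    return next_state, reward

# Batch RL works from a fixed set of transitions, here from a random policy.
batch = []
state = 3
for _ in range(2000):
    action = random.choice(ACTIONS)
    next_state, reward = step(state, action)
    batch.append((state, action, reward, next_state))
    state = next_state

# Fitted Q-iteration: repeatedly regress Q onto the Bellman targets
# (the "regressor" here is just a per-(state, action) average).
Q = defaultdict(float)
for _ in range(50):
    targets = defaultdict(list)
    for s, a, r, s2 in batch:
        targets[(s, a)].append(r + GAMMA * max(Q[(s2, b)] for b in ACTIONS))
    Q = defaultdict(float, {k: sum(v) / len(v) for k, v in targets.items()})

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(6)}
```

Aggregating the fleet into a small state keeps the Q-function compact regardless of fleet size, which is the representational point the excerpt makes.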
“…Multi-agents are usually generated from optimization models by using expert knowledge and intelligent learning methods [23]. If the multi-agents are only demanded to imitate the behaviors of participants, they can be directly generated from one or more types of sources in the behavioral/statistical/causal data.…”
Section: Multi-agent Based Integration Methods (mentioning)
confidence: 99%
“…Power system components considered include: dynamic brake Ernst et al (2004); Glavic (2005), thyristor controlled series capacitor Ernst et al (2004, 2009), quadrature booster Li and Wu (1999), synchronous generators (all AGC related references), individual or aggregated loads Vandael et al (2015); Ruelens et al (2016), etc. If used as a multi-agent system, then additional state variables must be introduced to ensure convergence of these essentially distributed computation schemes, and an adapted variant of standard RL methods is often used (for example correlated equilibrium Q(λ), Yu et al (2012a)).…”
Section: Past and Recent Considerations of RL for Electric Power Systems (mentioning)
confidence: 99%