2021
DOI: 10.1109/tsg.2021.3098298

Deep Reinforcement Learning for Continuous Electric Vehicles Charging Control With Dynamic User Behaviors

Cited by 96 publications (39 citation statements)
References 32 publications
“…where t_a and t_d are the EV arrival time and departure time, respectively; k_1 and k_2 are shape parameters that can be established from insights into the occupant's driving behavior, the occupant's sensitivity to electricity price, and the tolerance to SOC concern [33]. Figure 3 provides a visual demonstration of how the SOC concern evolves over time within a charging duration.…”
Section: EV Model With the Occupant's SOC Concern
confidence: 99%
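The two-shape-parameter concern described in the quoted statement can be sketched as a weight that rises over the charging session. The logistic form below is purely illustrative and is not the model from [33]; only the symbols t_a, t_d, k_1, and k_2 are taken from the quote, and their roles here (steepness and midpoint fraction) are assumptions.

```python
import math

def soc_concern_weight(t, t_a, t_d, k1=10.0, k2=0.5):
    """Illustrative SOC-concern weight over a charging session.

    Hypothetical shape, NOT the equation from [33]: a logistic curve in
    normalized session time, where k1 sets the steepness and k2 the
    midpoint fraction at which concern reaches one half. The sketch only
    shows how two shape parameters can encode concern that grows as the
    departure time t_d approaches.
    """
    if not (t_a <= t <= t_d):
        raise ValueError("t must lie within the charging session [t_a, t_d]")
    x = (t - t_a) / (t_d - t_a)  # normalized time in [0, 1]
    return 1.0 / (1.0 + math.exp(-k1 * (x - k2)))
```

For an 18:00 arrival and 02:00 (hour 26) departure, the weight is near zero at arrival, 0.5 at the session midpoint, and near one just before departure, matching the qualitative behavior Figure 3 of the cited work is described as showing.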
“…As the EV may become the major (or only) means of transportation for a household, and even for a sustained community, the expected EV state of charge would be another serious concern for residential occupants [31][32][33]. In [31], the authors incorporated the dynamics of drivers' behaviors into the EV charging model and proposed a stochastic game approach to address renewable energy uncertainty.…”
Section: Introduction
confidence: 99%
“…Zhou et al [21] combined Nash equilibrium and Lyapunov optimization to develop an incentive-based distributed scheduling scheme for EV charging. Yan et al [22] designed a model-free deep reinforcement learning approach for an optimal charging control strategy. Khaki et al [23] designed a hierarchical distributed framework for EV charging based on ADMM.…”
Section: Introduction
confidence: 99%
“…The effectiveness of ADMM in transforming a centralized scheduling framework into a decentralized or distributed framework for a distribution network with community energy systems is presented in [24]. The algorithms in [20][21][22][23] show the effectiveness of distributed EV charging scheduling and control, but the operational stability of the distribution network is not considered. OPF is widely adopted in power system operations to deal with practical issues.…”
Section: Introduction
confidence: 99%
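The distributed-scheduling idea behind the ADMM-based works cited above can be illustrated with a simpler relative, dual decomposition: each EV solves a small local subproblem, and a coordinator adjusts a shared price using only the aggregate capacity violation. The sketch below is hypothetical, with made-up preferred rates and capacity; it is not the algorithm of [20]–[24].

```python
def schedule_charging(preferred, capacity, step=0.5, iters=200):
    """Distributed charging allocation via dual decomposition (an
    illustrative, simpler relative of the ADMM schemes cited above).

    Each EV i locally minimizes (x_i - r_i)^2 + lam * x_i, whose
    closed-form solution is x_i = max(0, r_i - lam / 2). The
    coordinator raises the price lam when total demand exceeds the
    feeder capacity and lowers it otherwise; no EV reveals r_i.
    """
    lam = 0.0
    x = [0.0] * len(preferred)
    for _ in range(iters):
        # Local updates: each EV's subproblem in closed form, x_i >= 0.
        x = [max(0.0, r - lam / 2.0) for r in preferred]
        # Price update: projected subgradient step on the dual variable.
        lam = max(0.0, lam + step * (sum(x) - capacity))
    return x, lam
```

For preferred rates [5, 4, 3] kW and a 6 kW cap, the iteration converges to the allocation [3, 2, 1] at price lam = 4, i.e. each EV is curtailed by the same amount, which is the quadratic-penalty optimum.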