2018 23rd Asia and South Pacific Design Automation Conference (ASP-DAC)
DOI: 10.1109/aspdac.2018.8297305

A deep reinforcement learning framework for optimizing fuel economy of hybrid electric vehicles

Cited by 33 publications (22 citation statements)
References 21 publications
“…Thus, more data are required in hybrid-algorithm cases. In [26], the authors applied a deep neural network (DNN) to train the offline value functions and used the Q-learning algorithm to compute the online controls, which can adapt to different powertrain models and driving situations. The authors of [27] constructed DRL-enabled energy-management strategies that considered different drivers' behaviors, which could improve fuel efficiency.…”
Section: Hybrid Algorithms (mentioning)
confidence: 99%
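The offline value-function fitting described for [26] can be pictured with a short sketch. This is a minimal illustration only, assuming a small fully connected network and a state made of battery SOC, vehicle speed, and power demand; the layer sizes, state layout, and function names are assumptions and are not taken from the cited paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of offline value-function fitting with a DNN: the network
# maps a powertrain state (e.g. SOC, speed, power demand) to an estimated
# value. State layout and layer sizes are illustrative assumptions.
class ValueNet(nn.Module):
    def __init__(self, state_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def fit_offline(states: torch.Tensor, value_targets: torch.Tensor,
                epochs: int = 100, lr: float = 1e-3) -> ValueNet:
    """Regress value targets (e.g. computed offline over recorded driving
    cycles) onto states; the trained net can then seed the online controller."""
    model = ValueNet(state_dim=states.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(states).squeeze(1), value_targets)
        loss.backward()
        opt.step()
    return model
```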
“…Based on the DQL method, which is considered a breakthrough in the field of reinforcement learning, a deep reinforcement learning (DRL) framework for optimizing the fuel economy of HEVs was put forward in [99]; it is approximately optimal, model-free, and requires no prior knowledge of the driving cycle. The DRL technique consists of an offline deep neural network and an online deep Q-learning network, and can handle high-dimensional state and action spaces.…”
Section: Reinforcement Learning (mentioning)
confidence: 99%
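As a rough sketch of the online deep Q-learning half of such a framework, the snippet below performs one temporal-difference update on a batch of transitions. The discrete action set (e.g. quantized power-split ratios), network sizes, and reward signal are illustrative assumptions, not the cited paper's implementation; the periodic copy of weights into the target network is also omitted for brevity.

```python
import torch
import torch.nn as nn

# Sketch of an online deep Q-learning step: a Q-network scores discrete
# power-split actions and is updated from transitions collected while
# driving. Shapes and hyperparameters are illustrative only.
class QNet(nn.Module):
    def __init__(self, state_dim: int = 4, n_actions: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)

def dqn_update(q_net, target_net, optimizer, batch, gamma: float = 0.99):
    """One temporal-difference update on a batch of (s, a, r, s') transitions."""
    s, a, r, s_next = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values       # max_a' Q_target(s', a')
    td_target = r + gamma * q_next
    loss = nn.functional.mse_loss(q_sa, td_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```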
“…This strategy performs well in limiting the maximum discharge current and optimizing the system efficiency. To seek the optimal control, different forgetting factors and Kullback-Leibler (KL) divergence rates [97,99,101], which decide whether to update the power-management strategy, were discussed.…”
Section: Reinforcement Learning (mentioning)
confidence: 99%
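The KL-divergence gate mentioned above can be sketched as follows: compare the currently observed driving-condition distribution with the one that produced the active policy, and recompute the strategy only when they diverge enough. The discrete-distribution form, threshold value, and function names are assumptions for illustration.

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(p || q) between two discrete distributions (normalized defensively)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def maybe_update_strategy(current_dist, reference_dist, recompute_policy,
                          threshold: float = 0.05):
    """Recompute the power-management policy only if the observed
    driving-condition distribution has drifted enough from the reference
    distribution that produced the current policy."""
    if kl_divergence(current_dist, reference_dist) > threshold:
        # Drift detected: rebuild the policy and adopt the new reference.
        return recompute_policy(current_dist), current_dist
    # Otherwise keep the existing policy and reference.
    return None, reference_dist
```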
“…In [20], the weighted sum of the fuel and battery electric energy use is defined as the cost function of the Q-learning algorithm, which is then applied to a 48V mild HEV. More recent techniques such as Deep Q-Networks are utilized in [21]-[24], which combine Q-learning with a deep neural network to obtain fast convergence and improve the learning performance; similarly, gradient-based methods such as Deep Deterministic Policy Gradient (DDPG) are utilized for HEV control in [25].…”
Section: Introduction (mentioning)
confidence: 99%
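A hedged sketch of the weighted-sum cost described for [20]: fuel use and battery electric energy use over a control step are combined through an equivalence factor into a single cost that the Q-learning agent minimizes (its negative serving as the reward). The symbol names, the heating-value constant, and the default factor are placeholders, not values from the cited work.

```python
def energy_cost(fuel_g: float, elec_kwh: float, equiv_factor: float = 2.5) -> float:
    """Weighted sum of fuel use and battery electric energy use for one step.

    fuel_g       -- fuel consumed over the step, in grams
    elec_kwh     -- net electric energy drawn from the battery, in kWh
    equiv_factor -- equivalence factor weighting electric use against fuel
                    (the default is an arbitrary placeholder)
    """
    # Convert electric energy to a fuel-equivalent mass using a nominal
    # lower heating value of ~43 MJ/kg, i.e. roughly 11.9 kWh per kg of fuel.
    elec_as_fuel_g = elec_kwh / 11.9 * 1000.0
    return fuel_g + equiv_factor * elec_as_fuel_g

def reward(fuel_g: float, elec_kwh: float) -> float:
    """Q-learning reward: the negative of the weighted energy cost."""
    return -energy_cost(fuel_g, elec_kwh)
```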
“…However, the optimality of the RL-based strategy compared to SDP-based strategies is not presented clearly in the literature. Most papers, including [11], [13], [16], [21], and [24], compare the RL-based strategy only with the rule-based strategy. In [13], [18], [20], and [25], the reward function for RL includes the weighted sum of the fuel and electric energy use, so an equivalence factor or co-state is needed to achieve optimality.…”
Section: Introduction (mentioning)
confidence: 99%