2022
DOI: 10.1016/j.energy.2021.121703
Hierarchical reinforcement learning based energy management strategy for hybrid electric vehicle


Cited by 73 publications (21 citation statements)
References 25 publications
“…However, this approach requires advanced communication. The stability objective is accomplished using a non-droop method in the hierarchical controller, which implies that an additional controller is essential [49]-[51].…”
Section: A Control (mentioning)
confidence: 99%
“…Hybrid electric vehicles (HEVs) were developed to reduce both carbon emissions and fossil fuel consumption. These vehicles provide propulsion from both an ICE using liquid fuel and an electric motor (EM) [6]. Afterward, with the advancing battery technology, fully electric vehicles (FEVs) were developed. In these vehicles, the propulsion is provided by EMs fed from a DC energy source.…”
Section: Introduction (mentioning)
confidence: 99%
“…To achieve the best fuel economy, maintain the battery state of charge (SOC), and slow fuel cell ageing, an adaptive online EMS based on the twin delayed deep deterministic policy gradient was designed in Reference [26]. The reinforcement learning-based strategy can solve the optimal control problem in the case of no model or an inaccurate model, but the training of the neural network requires a complex configuration of hardware equipment and many iterations before convergence [27]. According to the above analysis, to design an EMS for practical application, the first point is to satisfy the optimality of energy optimization using vehicle dynamics information and optimization theory. Second, in the process of solving the optimal solution, the computational cost should be minimized to satisfy the requirements of the electronic control unit computational load.…”
Section: Introduction (mentioning)
confidence: 99%
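
The model-free idea described in the excerpt above can be illustrated with a minimal tabular Q-learning sketch: an agent observes battery SOC and power demand, picks an engine/motor power split, and learns a control policy from rewards alone, without an explicit plant model. This is not the hierarchical deep-RL method of the cited paper; the state discretization, plant constants, and reward weights below are illustrative assumptions only.

```python
# Minimal tabular Q-learning sketch of a model-free EMS for a hypothetical
# parallel HEV. All plant constants and reward weights are placeholders,
# not values taken from the cited paper.
import random

SOC_BINS = 10                              # battery state-of-charge bins
DEMAND_BINS = 8                            # power-demand bins (0-60 kW assumed)
ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]      # engine fraction of demanded power
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1         # learning rate, discount, exploration

# Q-table: (soc_bin, demand_bin) -> one value per action
Q = {(s, d): [0.0] * len(ACTIONS)
     for s in range(SOC_BINS) for d in range(DEMAND_BINS)}

def discretize(soc, demand):
    """Map continuous SOC [0,1] and demand [0,60) kW to table indices."""
    s = min(SOC_BINS - 1, int(soc * SOC_BINS))
    d = min(DEMAND_BINS - 1, int(demand / 60.0 * DEMAND_BINS))
    return s, d

def step(soc, demand, engine_frac):
    """Hypothetical plant model: returns next SOC and a reward that
    penalizes fuel use and deviation from a 0.6 SOC target."""
    engine_power = engine_frac * demand
    motor_power = demand - engine_power
    fuel = 0.05 * engine_power                              # placeholder fuel-rate model
    soc_next = min(1.0, max(0.0, soc - 0.002 * motor_power))
    reward = -fuel - 2.0 * abs(soc_next - 0.6)
    return soc_next, reward

for episode in range(200):                                  # synthetic driving cycles
    soc, demand = 0.6, random.uniform(0.0, 59.0)
    for _ in range(500):
        state = discretize(soc, demand)
        if random.random() < EPS:                           # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        soc, reward = step(soc, demand, ACTIONS[a])
        demand = random.uniform(0.0, 59.0)                  # next power request
        next_state = discretize(soc, demand)
        # Tabular Q-learning update
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
```

The caveat in the excerpt about many iterations before convergence corresponds to the nested training loop here; deep-RL variants such as the twin delayed deep deterministic policy gradient replace the table with neural networks, which adds the hardware and tuning cost the citing authors mention.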