2020
DOI: 10.1109/access.2020.2992062

Online Data-Driven Energy Management of a Hybrid Electric Vehicle Using Model-Based Q-Learning

Abstract: The energy management strategy of a hybrid electric vehicle directly determines the fuel economy of the vehicle. As a supervisory control strategy that divides the required power among the vehicle's power sources, the engine and the battery, the energy management strategy has so far been studied extensively using rule-based and optimization-based approaches. Recently, studies using various machine learning techniques have also been conducted. In this paper, a novel control framework implementing Model-based Q-learning i…
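To make the supervisory power-split decision described in the abstract concrete, the following is a minimal, illustrative sketch of a tabular Q-learning update that chooses an engine-power fraction from a discretized state of charge and power demand. The state grids, action set, and cost shape are assumptions for illustration only, not the paper's actual formulation; in the paper's model-based variant, the expected next state and stage cost would come from a learned model rather than only from sampled transitions.

```python
import numpy as np

# Minimal, illustrative Q-learning sketch for an HEV power-split decision.
# All discretizations and the cost definition are assumptions for illustration;
# the paper's actual state, action, and cost definitions may differ.

soc_bins = np.linspace(0.3, 0.8, 11)        # battery state-of-charge grid
demand_bins = np.linspace(0.0, 60.0, 13)    # power-demand grid [kW]
split_levels = np.linspace(0.0, 1.0, 5)     # fraction of demand taken by the engine

Q = np.zeros((len(soc_bins), len(demand_bins), len(split_levels)))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def q_learning_step(s, a, cost, s_next):
    """One tabular Q-learning update with a stage cost (e.g. fuel use + SOC penalty)."""
    i, j = s
    i2, j2 = s_next
    td_target = cost + gamma * Q[i2, j2].min()   # costs are minimized, hence min()
    Q[i, j, a] += alpha * (td_target - Q[i, j, a])

def choose_action(s):
    """Epsilon-greedy selection over the candidate engine-power fractions."""
    i, j = s
    if rng.random() < eps:
        return int(rng.integers(len(split_levels)))
    return int(Q[i, j].argmin())
```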

Cited by 41 publications (29 citation statements)
References 27 publications
“…In this study, to solve the optimal control problem, a novel eco-driving strategy utilizing MBRL was developed on the basis of a previous study on MBRL for the HEV control case study in [21]. In (17), the state variable…”
Section: Model-based Reinforcement Learning
Citation type: mentioning, confidence: 99%
“…where ĝ_k and v̂_{k+1} can be determined based on the vehicle powertrain model. Alternatively, ĝ_k and v̂_{k+1} can be determined with an approximation model, to preserve the model-free character of reinforcement learning (see [21]); that approximation can be built from experience as shown in the following equation:…”
Section: Model-based Reinforcement Learning
Citation type: mentioning, confidence: 99%
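One way to read the citation statement above is that the stage-cost estimate ĝ_k and the next-state value estimate v̂_{k+1} are maintained directly from experience instead of from an explicit powertrain model. The sketch below assumes discretized states, a recency-weighted cost estimate, and an empirical successor distribution; these are illustrative choices, not the elided equation from [21].

```python
from collections import defaultdict

# Illustrative experience-based approximation of the stage cost g_hat and the
# expected next-state value v_hat used in a model-based Q-learning backup.
# The discretization and learning rate are assumptions; the quoted equation
# from [21] is not reproduced here.

class ExperienceModel:
    def __init__(self, lr=0.05):
        self.lr = lr
        self.g_hat = defaultdict(float)       # estimated stage cost per (state, action)
        self.next_state = defaultdict(dict)   # observed successor counts per (state, action)

    def update(self, state, action, observed_cost, observed_next_state):
        """Blend a new observation into the running estimates (recency-weighted)."""
        key = (state, action)
        self.g_hat[key] += self.lr * (observed_cost - self.g_hat[key])
        counts = self.next_state[key]
        counts[observed_next_state] = counts.get(observed_next_state, 0) + 1

    def backup(self, state, action, value_fn, gamma=0.95):
        """Model-based backup: g_hat + gamma * expected value of the next state."""
        key = (state, action)
        counts = self.next_state[key]
        if not counts:
            return self.g_hat[key]
        total = sum(counts.values())
        v_hat = sum(n * value_fn(s_next) for s_next, n in counts.items()) / total
        return self.g_hat[key] + gamma * v_hat
```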
“…However, the interactions between the road participants in traffic cannot be modelled in a deterministic way. To handle such uncertainties in the traffic prediction, stochastic DP [12], Q-learning-based PMP [13], and reinforcement learning [14,15] have been considered, but the analysis of practical aspects such as computational tractability is omitted. Additionally, since these methods are based on a Markov decision process, where the relationship between the control action and the probability is pre-determined, they cannot ensure robustness when the vehicle is in an unexpected situation [16].…”
Section: Literature Review
Citation type: mentioning, confidence: 99%
“…Due to the nonconvexity in P_b, the control fluctuates considerably while searching for the optimum point at every step and is sometimes stuck at local minimum points, as shown in the middle plot of Figure 6. In contrast, by exploiting the quadratic form (15), the computation time at each step is significantly reduced compared to the original form, as shown in the bottom plot of Figure 6. Therefore, it can be concluded that the nonlinear cost (10a) can be replaced with the quadratic cost (15) without sacrificing too much performance in terms of the battery SOC reduction.…”
Section: Quadratic Cost Simplification
Citation type: mentioning, confidence: 99%
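To illustrate the point of the quoted passage, the sketch below contrasts minimizing a generic nonconvex cost in the battery power P_b by grid search with minimizing a quadratic surrogate whose optimum is available in closed form. The cost shapes and coefficients are invented for illustration; they are not Eqs. (10a) or (15) from the citing paper.

```python
import numpy as np

# Illustrative contrast between minimizing a nonconvex cost in battery power P_b
# by grid search and minimizing a quadratic surrogate in closed form.
# Both cost functions are made up; they stand in for, but are not, (10a) and (15).

def nonconvex_cost(p_b):
    # A bumpy cost with several local minima, standing in for the original nonlinear cost.
    return 0.02 * (p_b - 8.0) ** 2 + 2.0 * np.sin(0.8 * p_b)

def quadratic_cost(p_b, a=0.02, b=-0.32, c=1.28):
    # Quadratic surrogate a*p_b**2 + b*p_b + c, standing in for the simplified cost.
    return a * p_b ** 2 + b * p_b + c

# Grid search over the nonconvex cost: many evaluations per step, and a coarser
# grid can still leave the search stuck near a local minimum.
grid = np.linspace(-20.0, 40.0, 2001)
p_grid = grid[np.argmin(nonconvex_cost(grid))]

# Quadratic surrogate: the minimizer is simply -b / (2a), no search required.
a, b = 0.02, -0.32
p_quad = -b / (2.0 * a)

print(f"grid-search minimizer of nonconvex cost:  {p_grid:.2f} kW")
print(f"closed-form minimizer of quadratic cost: {p_quad:.2f} kW")
```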