2009
DOI: 10.1109/tsmcb.2009.2013272

Reinforcement-Learning-Based Output-Feedback Control of Nonstrict Nonlinear Discrete-Time Systems With Application to Engine Emission Control

Abstract: A novel reinforcement-learning-based output adaptive neural network (NN) controller, which is also referred to as the adaptive-critic NN controller, is developed to deliver the desired tracking performance for a class of nonlinear discrete-time systems expressed in nonstrict feedback form in the presence of bounded and unknown disturbances. The adaptive-critic NN controller consists of an observer, a critic, and two action NNs. The observer estimates the states and output, and the two action NNs provide virtua…
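The abstract outlines the controller's structure: an observer reconstructs the unmeasured states from the output, a critic NN scores closed-loop performance, and action NNs generate the control. As a rough illustration only, here is a minimal Python sketch of that adaptive-critic layout. The plant, observer gain, learning rates, and weight-update rules below are invented for the example and are not the paper's scheme; in particular, the paper uses two action NNs (virtual and actual control) for the nonstrict-feedback structure, which this sketch collapses into one.

```python
# Minimal sketch of an adaptive-critic layout: an observer reconstructs the
# state from the measured output, a critic NN scores performance, and an
# action NN produces the control. All dynamics, gains, and update laws are
# illustrative assumptions, not the paper's exact design.
import numpy as np

rng = np.random.default_rng(0)

class OneLayerNN:
    """Single-hidden-layer NN: fixed random input weights, trained output weights."""
    def __init__(self, n_in, n_hidden, n_out, lr):
        self.V = rng.standard_normal((n_hidden, n_in))  # fixed features
        self.W = np.zeros((n_out, n_hidden))            # adapted online
        self.lr = lr

    def features(self, x):
        return np.tanh(self.V @ x)

    def forward(self, x):
        return self.W @ self.features(x)

    def update(self, x, error):
        # Gradient-style correction of the output-layer weights only.
        self.W -= self.lr * np.outer(error, self.features(x))

def plant(x, u):
    """Toy discrete-time plant stand-in; its dynamics are unknown to the controller."""
    return np.array([x[1], 0.5 * np.sin(x[0]) + u])

critic = OneLayerNN(2, 10, 1, lr=0.05)   # estimates a cost-to-go signal
action = OneLayerNN(2, 10, 1, lr=0.02)   # generates the control input
x_hat = np.zeros(2)                      # observer's state estimate
L_obs = np.array([0.4, 0.2])             # assumed observer gain
gamma = 0.9                              # discount factor

x = np.array([0.5, -0.3])
for k in range(200):
    u = action.forward(x_hat).item()     # control computed from the estimate
    x_next = plant(x, u)                 # true (unknown) plant step
    y = x_next[0]                        # measured output
    # Observer: predict, then correct with the output estimation error.
    x_hat = plant(x_hat, u)
    x_hat += L_obs * (y - x_hat[0])
    # Critic: temporal-difference update on a one-step tracking cost.
    cost = (y - np.sin(0.05 * (k + 1))) ** 2
    td = critic.forward(x_hat) - (cost + gamma * critic.forward(plant(x_hat, u)))
    critic.update(x_hat, td)
    # Action NN: driven to shrink the critic's estimated cost-to-go.
    action.update(x_hat, critic.forward(x_hat))
    x = x_next
```

Adapting only the output-layer weights keeps the update laws linear in the unknown parameters, which is what typically enables the bounded-weight stability arguments used in this line of work.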

Cited by 34 publications (5 citation statements). References 11 publications.
“…This approach was later improved by a reinforcement learning (RL)-based controller that used an ANN-based adaptive-critic structure.26,27 All these applications, however, were trained offline using the model of Daw et al.28 While they showed improvements in CCV, they also exhibited fuel enrichment, and it was not possible to determine how much of the improvement was due to next-cycle control actions. Even though the model has been improved29 and recent model-based controllers have been designed,30,31 offline training suffers from inaccuracies and uncertainties of the model.…”
Section: Introduction (mentioning; confidence: 99%)
“…30,31 RL has been used for automotive powertrain control systems, especially in energy management of hybrid electric vehicles32–34 and for internal combustion engines.35–40 Q-learning RL is used for idle speed control of a spark-ignition (SI) engine by controlling the spark timing and intake throttle valve position.41 Similar studies have been carried out for diesel engine idle speed control by controlling the fuel injection timing.…”
Section: Introduction (mentioning; confidence: 99%)
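The idle-speed study mentioned in the statement above lends itself to a compact tabular illustration. The sketch below is a generic Q-learning loop in that spirit; the state bins, action set (spark-advance and throttle steps), toy engine response, and reward are placeholders invented for the example, not the cited paper's setup.

```python
# Minimal tabular Q-learning sketch for idle-speed control. The engine model,
# discretization, and reward are invented placeholders for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_states = 21                      # discretized idle-speed-error bins
actions = [(-2, -1), (-2, 0), (-2, 1), (0, -1), (0, 0), (0, 1),
           (2, -1), (2, 0), (2, 1)]    # (spark step [deg], throttle step [%])
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1

def bin_state(err_rpm):
    """Map an idle-speed error in [-100, 100] rpm onto a discrete bin."""
    return int(np.clip((err_rpm + 100) / 200 * (n_states - 1), 0, n_states - 1))

def fake_engine(err, spark_step, throttle_step):
    """Toy stand-in for the engine: control steps reduce the error, with noise."""
    return 0.9 * err - 1.0 * spark_step - 3.0 * throttle_step + rng.normal(0, 2)

err = 80.0                             # start 80 rpm above the idle target
for step in range(5000):
    s = bin_state(err)
    # Epsilon-greedy action selection over the discrete action set.
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    err_next = fake_engine(err, *actions[a])
    reward = -abs(err_next)            # penalize deviation from target speed
    s_next = bin_state(err_next)
    # Standard Q-learning temporal-difference update.
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
    err = err_next
```

A decaying exploration rate and a finer state discretization would be the usual next refinements of such a loop.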
“…36 RL has also been used for emission control of SI engines.37,38 A very limited number of studies have been carried out utilizing RL for internal combustion control, and most of the existing work has focused on SI engines. To the authors’ knowledge, deep RL algorithms have not been previously implemented for diesel engine performance and emission control.…”
Section: Introduction (mentioning; confidence: 99%)
“…Zhang et al. (2022) discussed the model reference control (MRC) issue for a class of discrete-time linear systems, where the MRC feedback control law was designed using only the reference input and quantized output signals. In the work by Shih et al. (2009), by proposing a dynamic state observer, an RL-based output feedback controller was designed to guarantee the desired tracking performance for some uncertain nonlinear discrete-time systems.…”
Section: Introduction (mentioning; confidence: 99%)