Building on the ideas of the optimal backstepping technique, this article proposes an event-triggered optimal tracking control scheme for a class of strict-feedback nonlinear systems with nonaffine nonlinear faults. A simplified identifier-critic-actor framework is employed in the reinforcement learning algorithm to achieve optimal control: the identifier estimates the unknown dynamic functions, the critic evaluates the system performance, and the actor implements the control actions, enabling the modeling and control of systems with unknown dynamics while achieving optimal control performance. The simplified reinforcement learning algorithm is designed by deriving its update rules from the negative gradient of a simple positive function related to the Hamilton-Jacobi-Bellman equation, which also relaxes the stringent persistent excitation condition. A fault-tolerant control method is then developed by applying filtered signals in the controller design. Additionally, an event-triggered mechanism is employed in designing the actual controller to reduce communication resource consumption. Finally, the feasibility of the proposed scheme is validated through theoretical analysis and simulation.
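The core idea of the simplified learning law — tuning each approximator's weights along the negative gradient of a positive error function — can be illustrated with a minimal sketch. Everything below is a hypothetical one-dimensional example: the polynomial basis, the stand-in residuals, and the function `simplified_rl_step` are illustrative assumptions, not the paper's actual identifier-critic-actor construction or its HJB-related error.

```python
import numpy as np

def features(x):
    # Simple polynomial basis as a stand-in for a neural-network layer.
    return np.array([x, x**2, x**3])

def simplified_rl_step(x, u, Wc, Wa, lr=0.01):
    """One negative-gradient update of critic and actor weights.

    Each weight vector descends the gradient of a positive quadratic
    function 0.5*e^2 built from its own approximation error, mimicking
    the structure (not the exact form) of the update rules described
    in the abstract.
    """
    phi = features(x)
    # Critic approximates a value-like quantity; actor approximates the control.
    V_hat = Wc @ phi
    u_hat = Wa @ phi
    # Stand-in residuals (illustrative only): a Bellman-type error for the
    # critic and a control-matching error for the actor.
    e_c = V_hat - (x**2 + u**2)
    e_a = u_hat - u
    # Negative-gradient update rules: W <- W - lr * dE/dW with E = 0.5*e^2.
    Wc = Wc - lr * e_c * phi
    Wa = Wa - lr * e_a * phi
    return Wc, Wa
```

Because each update moves along `-e * phi`, the errors contract geometrically whenever the feature vector is nonzero, which is what makes a persistent-excitation-style richness condition far less demanding in such gradient-based designs.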