Abstract: In this article, an event-triggered guaranteed cost optimal tracking control problem is investigated for a class of uncertain nonlinear systems with partial loss of actuator effectiveness faults. To begin with, an augmented system consisting of the error system and the reference system is constructed to simplify the tracking controller design. Then, in order to account for both actuator faults and system uncertainties in optimal tracking control, an improved discounted cost function is developed. Furthermore, a single crit…
Section: Introduction
confidence: 99%
“…Among them, optimal control has received sustained attention over the past several decades because it simultaneously addresses the stability of the controlled systems and reduces energy consumption. [21][22][23][24] Optimal control focuses on minimizing the value function, which can be obtained from the Hamilton–Jacobi–Bellman (HJB) equation. [25][26][27] However, owing to its strong nonlinearity, solving the HJB equation analytically is intractable. Thus, reinforcement learning (RL) has gradually been utilized to obtain an approximate solution of the HJB equation by learning networks.…”
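As the snippet above notes, the HJB equation rarely admits an analytical solution, which motivates numerical or learning-based approximation. As a purely illustrative sketch (the scalar dynamics x' = -x + u, the quadratic cost, grid sizes, and discount factor are assumptions for illustration, not taken from the cited papers), discretized value iteration shows the basic idea of approximating the optimal value function:

```python
import numpy as np

# Illustrative sketch: approximate the optimal value function of the assumed
# scalar system x' = -x + u with running cost x^2 + u^2 by discretized value
# iteration, since the HJB equation rarely has a closed-form solution.

xs = np.linspace(-2.0, 2.0, 81)            # state grid (assumed range)
us = np.linspace(-2.0, 2.0, 41)            # control grid (assumed range)
dt, gamma = 0.05, 0.99                     # step size and discount (assumed)
V = np.zeros_like(xs)

for _ in range(500):
    # Bellman backup: V(x) = min_u [ (x^2 + u^2) dt + gamma * V(x + f(x,u) dt) ]
    x_next = xs[:, None] + (-xs[:, None] + us[None, :]) * dt
    V_next = np.interp(x_next, xs, V)      # interpolate V at successor states
    Q = (xs[:, None] ** 2 + us[None, :] ** 2) * dt + gamma * V_next
    V = Q.min(axis=1)                      # greedy minimization over controls
```

By symmetry of the assumed dynamics and cost, the learned value function is symmetric about the origin and minimal there; critic-network methods replace the grid with a parameterized function approximator.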
This article proposes an adaptive fault‐tolerant formation control strategy for strict‐feedback nonlinear multiagent systems with nonlinear faults and external disturbances. A simplified reinforcement learning algorithm is developed to approximate the optimized controller. Different from existing optimized control results, the system dynamics are totally unknown in this article. To surmount the “explosion of complexity”, the dynamic surface control technique is employed. A disturbance‐fault observer is designed to alleviate the influence of the external disturbances and the impact of the nonlinear faults. Based on the Lyapunov stability theorem, it is demonstrated that all signals within the closed‐loop systems are semiglobally uniformly ultimately bounded and that the optimized formation control performance can be guaranteed. Finally, simulation results exhibit the validity of the developed control method.
Section: Introduction
confidence: 99%
“…The stability analysis of nonlinear systems has received a great deal of attention in the literature in recent decades. [1][2][3][4][5][6][7] Successful applications of nonlinear systems have been established in many areas, such as robotic control, image encryption, confidential communication, and combinatorial optimization. [8][9][10] Modeling inaccuracies and changes in the environment often give rise to parameter uncertainties in real systems.…”
This article investigates the stability of nonlinear uncertain distributed delay systems via an integral‐based event‐triggered impulsive control (IETIC) strategy. First, an IETIC mechanism is presented to reduce redundant data transmission in the system, in which the integral‐based event‐triggered mechanism uses the integration of system states over a past time period. Second, a new lemma is proposed to exclude Zeno behavior of the established model under the IETIC mechanism. Third, a novel Lyapunov–Krasovskii functional (LKF) method related to a probability density function is constructed to guarantee the stability of the established model based on LMI conditions, where a probability density function is introduced as the distributed delay kernel. Compared with existing methods, the constructed LKF method is less conservative or requires fewer decision variables. Numerical examples are further provided to confirm the effectiveness and advantages of the proposed approach.
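To convey the flavor of an integral-based trigger, the following sketch checks whether the integral of the squared state over a recent window exceeds a threshold tied to the state at the last event. The scalar form, threshold sigma, and window length are assumptions for illustration, not the paper's LMI-based design:

```python
import numpy as np

# Hedged sketch of an integral-based event-triggering condition: fire an
# impulse/transmission only when the windowed integral of the squared state
# outgrows a fraction of the state recorded at the previous event.

def integral_trigger(x_hist, dt, x_last_event, sigma=0.5, window=20):
    """Return True if the integral condition requests a new event."""
    recent = np.asarray(x_hist[-window:])
    integral = float(np.sum(recent ** 2) * dt)   # approx of the windowed integral
    return integral > sigma * x_last_event ** 2

dt = 0.01
x_hist = list(np.exp(-np.arange(0.0, 3.0, dt)))  # stand-in decaying state x(t)=e^{-t}
fired = integral_trigger(x_hist, dt, x_last_event=1.0)
```

A decaying state rarely triggers once it is small, which is exactly how such a mechanism suppresses redundant transmissions; a persistently large state triggers immediately.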
“…According to the event-triggering mechanism and ADP approaches, a new optimal control method for unknown nonlinear continuous-time systems was proposed in [22]. Guo et al. [23] studied the event-triggered guaranteed cost optimal tracking control problem for a class of uncertain nonlinear systems using ADP approaches. However, there has been no research on Itô-type stochastic systems with ETOC based on ADP methods.…”
For nonlinear Itô-type stochastic systems, the problem of event-triggered optimal control (ETOC) is studied in this paper, and the adaptive dynamic programming (ADP) approach is explored to implement it. The value function of the Hamilton–Jacobi–Bellman (HJB) equation is approximated by a critic neural network (CNN). Moreover, a new event-triggering scheme is proposed, which can be used to design the ETOC directly via the solution of the HJB equation. By utilizing the Lyapunov direct method, it is proved that the ADP-based ETOC ensures that the CNN weight errors and the system states are semiglobally uniformly ultimately bounded (SGUUB) in probability. Furthermore, an upper bound is given on the predetermined cost function. To date, there has been no published literature on ETOC for nonlinear Itô-type stochastic systems via the ADP method; this work is the first attempt to fill that gap. Finally, the effectiveness of the proposed method is illustrated through two numerical examples.

Keywords: Event-triggered control • Optimal control • Adaptive dynamic programming (ADP) • Nonlinear Itô-type stochastic systems • Hamilton–Jacobi–Bellman (HJB) equation • Neural network.
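The critic-network idea described above can be sketched for a simple deterministic scalar case: approximate V(x) ≈ Wᵀφ(x) and descend the squared HJB residual with respect to the weights W. The polynomial basis, the dynamics x' = -x + u with cost x² + u², and the learning rate below are illustrative assumptions, not the paper's stochastic scheme:

```python
import numpy as np

# Hedged sketch of critic-weight tuning on the HJB residual for the assumed
# system x' = -x + u with running cost x^2 + u^2. The known optimal value is
# V(x) = (sqrt(2) - 1) x^2, so the first weight should approach sqrt(2) - 1.

def phi(x):
    return np.array([x ** 2, x ** 4])       # assumed polynomial critic basis

def dphi(x):
    return np.array([2 * x, 4 * x ** 3])    # basis gradient dphi/dx

def hjb_residual(W, x, u):
    # Deterministic, undiscounted residual: e = r(x,u) + (dV/dx) * f(x,u)
    return x ** 2 + u ** 2 + (W @ dphi(x)) * (-x + u)

W = np.zeros(2)
lr = 1e-3
rng = np.random.default_rng(0)
for _ in range(50000):
    x = rng.uniform(-1.0, 1.0)              # sample a training state
    u = -0.5 * (W @ dphi(x))                # greedy control: u = -(1/2) dV/dx
    e = hjb_residual(W, x, u)
    W -= lr * e * dphi(x) * (-x + u)        # descend (1/2) e^2 w.r.t. W
```

At convergence the residual vanishes along sampled states, which mirrors (in a toy setting) how a critic network approximates the HJB solution that has no closed form in general.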