Human operators tend to experience increasing physical and mental workloads when performing teleoperation tasks in uncertain and dynamic environments. In addition, their performance is influenced by subjective factors, potentially leading to operational errors or task failure. Although agent-based methods offer a promising solution to these problems, human experience and intelligence remain necessary in teleoperation scenarios. In this paper, a truncated quantile critics (TQC) reinforcement-learning-based integrated framework is proposed for human–agent teleoperation, encompassing training, assessment, and agent-based arbitration. The proposed framework supports an expert training agent and a bilateral training and cooperation process that realize the co-optimization of agent and human, and it provides efficient and quantifiable training feedback. Experiments were conducted to train subjects with the developed algorithm, and the performance of the human–human and human–agent cooperation modes was compared. The results show that, with the assistance of an agent, subjects complete reaching and pick-and-place tasks in a shorter operational time, with a higher success rate and a lower workload than under human–human cooperation.
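Truncated quantile critics mitigates value overestimation by pooling quantile estimates from an ensemble of critics and discarding the largest ones before forming the target. A minimal sketch of that truncation step (the function name, ensemble layout, and `drop_per_critic` parameter are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def truncated_quantile_target(quantiles_per_critic, drop_per_critic=2):
    """Pool the quantile estimates of all critics, sort them, and drop the
    largest drop_per_critic * n_critics atoms to curb overestimation bias
    (the core idea of TQC), then average the rest into a scalar target."""
    pooled = np.sort(np.concatenate(quantiles_per_critic))
    n_drop = drop_per_critic * len(quantiles_per_critic)
    truncated = pooled[:-n_drop] if n_drop > 0 else pooled
    return truncated.mean()
```

Dropping the top atoms rather than averaging ensemble minima gives fine-grained control over the pessimism of the target through a single integer parameter.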
Finite-/fixed-time control offers a promising tool to optimize a system's settling time, but lacks the ability to define the settling time and the convergence domain separately (a property known as practically prescribed-time stability, PPTS). We provide a sufficient condition for PPTS based on a new piecewise exponential function, which decouples the settling time and the convergence domain into separately user-defined parameters. We propose an adaptive event-triggered prescribed-time control scheme for nonlinear systems with asymmetric output constraints, using an exponential-type barrier Lyapunov function. We show that this PPTS control scheme can guarantee tracking-error convergence performance while restricting the output state according to the prescribed asymmetric constraints. Compared with traditional finite-/fixed-time control, the proposed methodology yields a separately user-defined settling time and convergence domain without prior information on the disturbance. Moreover, asymmetric state constraints can be handled in the control structure through a bias state transformation, which offers an intuitive analysis technique for general constraint problems. Simulation and experimental results on a heterogeneous teleoperation system demonstrate the merits of the proposed control scheme.
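The paper's piecewise exponential function is not reproduced here; as an illustration of the decoupling idea, the following hypothetical performance envelope decays exponentially to a user-defined bound `rho_inf` exactly at the user-defined time `T` and stays there, so the settling time and the convergence domain are tuned independently (all names and the specific decay rate are assumptions for this sketch):

```python
import math

def piecewise_exponential(t, T=2.0, rho0=1.0, rho_inf=0.05):
    """Illustrative prescribed-performance envelope: starts at rho0, decays
    exponentially, reaches rho_inf at the prescribed time T (the rate k is
    chosen so that rho(T) = rho_inf, making the pieces continuous), and is
    constant afterwards. T sets the settling time; rho_inf sets the
    convergence domain; neither depends on the other."""
    if t < T:
        k = math.log(rho0 / rho_inf) / T  # rate so that rho0 * exp(-k*T) = rho_inf
        return rho0 * math.exp(-k * t)
    return rho_inf
```

Keeping the tracking error inside such an envelope (e.g., via a barrier Lyapunov function) is what yields the separately prescribed settling time and residual bound.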
It is known that interval type-2 (IT2) fuzzy controllers are superior to their type-1 counterparts in terms of robustness, flexibility, etc. However, how to conduct type reduction optimally while accounting for system stability under the fuzzy-model-based (FMB) control framework is still an open problem. To address this issue, we present a new approach that combines membership-function-dependent (MFD) and deep reinforcement learning (DRL) techniques. In the proposed approach, the reduction of the IT2 membership functions of the fuzzy controller is completed while optimizing the control performance. Another fundamental issue is that the stability conditions must hold under different type-reduction methods; re-deriving the stability conditions for each type-reduction method is tedious and impractical, as it could lead to infinitely many possibilities. Since it is more practical to guarantee that the stability conditions hold during type reduction than to re-derive them, the MFD approach is adopted together with the imperfect premise matching (IPM) concept. Thanks to the unique merit of the MFD approach, the stability conditions are guaranteed to be valid for all the embedded type-1 membership functions within the footprint of uncertainty (FOU). During the control process, the state transitions, together with a properly engineered cost/reward function, can be used to approximately compute the deterministic policy gradient, optimize the acting policy, and thereby improve the control performance by determining the grades of the IT2 membership functions of the fuzzy controller. A detailed simulation example is provided to verify the merits of the proposed approach.
Impact Statement: The connection between the membership functions of type-2 fuzzy systems and reinforcement learning is observed and investigated for the first time.
In this paper, the authors present a reinforcement-learning-based type reduction for interval type-2 fuzzy-model-based control systems. The stability conditions are theoretically guaranteed to hold during the optimization process conducted by the reinforcement learning agent. The proposed research bridges the areas of fuzzy control and reinforcement learning. Adopting reinforcement learning techniques to improve the control performance of fuzzy systems under theoretically guaranteed stability conditions has impact on both the artificial intelligence and control communities.
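The type reduction described above can be viewed as selecting an embedded type-1 membership function inside the FOU. A minimal sketch of that selection, where the grade `g` (which the DRL policy would output) interpolates between the lower and upper membership functions; the function names and the linear interpolation are illustrative assumptions, not the authors' exact scheme:

```python
def embedded_membership(x, lower_mf, upper_mf, g):
    """Pick an embedded type-1 membership grade within the footprint of
    uncertainty (FOU): g = 0 selects the lower bound, g = 1 the upper bound.
    Because every g in [0, 1] stays inside the FOU, MFD stability conditions
    derived over the whole FOU remain valid for any choice the policy makes."""
    lo, hi = lower_mf(x), upper_mf(x)
    return lo + g * (hi - lo)
```

With this parameterization, "type reduction" collapses to choosing `g`, which is exactly the kind of bounded continuous action a deterministic policy gradient method can optimize.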
Assisting humans in collaborative tasks is a promising application for robots; however, effective assistance remains challenging. In this paper, we propose a method for providing intuitive robotic assistance by learning from natural human limb coordination. To encode the coupling between multiple-limb motions, we use a novel interval type-2 (IT2) polynomial fuzzy inference to model trajectory adaptation. The associated polynomial coefficients are estimated using a modified recursive least-squares method with a dynamic forgetting factor. We employ a Gaussian process to produce robust human-motion predictions and thus address the uncertainty and measurement noise caused by interactive environments. Experimental results on two types of interaction tasks demonstrate the effectiveness of this approach, which achieves high accuracy in predicting assistive limb motion and enables humans to perform bimanual tasks using only one limb.
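The paper's modified estimator is not reproduced here; a minimal sketch of a standard recursive least-squares update paired with a simple error-driven forgetting schedule (the `lam_min`/`alpha` rule is an illustrative assumption, not the authors' modification):

```python
import numpy as np

def rls_step(theta, P, x, y, lam):
    """One recursive least-squares update of coefficients theta with
    covariance P, regressor x, measurement y, and forgetting factor lam."""
    x = x.reshape(-1, 1)
    Px = P @ x
    k = Px / (lam + (x.T @ Px).item())     # gain vector
    err = y - (x.T @ theta).item()         # a priori prediction error
    theta = theta + k * err
    P = (P - k @ x.T @ P) / lam
    return theta, P, err

def dynamic_forgetting(err, lam_min=0.9, alpha=10.0):
    """Illustrative schedule: forget faster (smaller lam) when the error is
    large, so the estimator tracks time-varying coefficients aggressively,
    yet keeps lam near 1 when predictions are accurate."""
    return lam_min + (1.0 - lam_min) * np.exp(-alpha * err ** 2)
```

Driving the forgetting factor from the prediction error is what lets the polynomial coefficients adapt quickly when the human changes motion while staying stable during steady coordination.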