From a controls point of view, microelectromechanical systems (MEMS) can be driven in either an open-loop or a closed-loop fashion. Commonly, these devices are driven open-loop with simple input signals. When the input signals are made more complex by deriving them from the system dynamics, we call such techniques pre-shaped open-loop driving. The ultimate step for improving precision and speed of response is the introduction of feedback, i.e., closed-loop control. Unlike macro-scale mechanical systems, where implementing feedback is relatively straightforward, feedback design for MEMS is quite problematic, owing to the limited availability of sensor data, the presence of sensor dynamics and noise, and the typically fast actuator dynamics. Furthermore, the relative performance of open-loop and closed-loop control strategies has not been properly explored for MEMS devices. The purpose of this paper is to present experimental results obtained using both open- and closed-loop strategies and to address the comparative issues of driving and control for MEMS devices. An optical MEMS switching device is used for this study. Based on these experimental results, as well as computer simulations, we point out the advantages and disadvantages of the different control strategies, address the problems that distinguish MEMS driving systems from their macro-scale counterparts, and discuss criteria for choosing a suitable driving and control strategy.
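The abstract does not specify how the pre-shaped inputs are derived; one standard example of deriving an open-loop command from the system dynamics is zero-vibration (ZV) input shaping, where a two-impulse sequence computed from a mode's natural frequency and damping ratio is convolved with the raw command to suppress residual oscillation. A minimal sketch, with a hypothetical 1 kHz MEMS mode (the parameter values are illustrative, not from the paper):

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration (ZV) shaper for one second-order mode.

    wn   : undamped natural frequency [rad/s]
    zeta : damping ratio (0 < zeta < 1)
    Returns impulse amplitudes (summing to 1) and impulse times.
    """
    wd = wn * np.sqrt(1.0 - zeta**2)              # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)         # amplitudes sum to 1
    times = np.array([0.0, np.pi / wd])           # second impulse half a damped period later
    return amps, times

# Hypothetical lightly damped 1 kHz resonance of a MEMS mirror
amps, times = zv_shaper(wn=2 * np.pi * 1000.0, zeta=0.02)
```

Convolving the resulting impulse sequence with a step voltage command produces a staircase drive signal that, for a plant matching the assumed mode, cancels the residual vibration at the cost of a slightly longer rise time.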
An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator in performing a given task with minimal workload demands and optimizes the overall human-robot system performance. Motivated by human-factors studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model, as perceived by the human operator. In contrast to existing neural-network and adaptive impedance-based control methods, no knowledge of the task performance or of the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model, adjusting the robot's dynamics to the operator's skills and minimizing the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem that minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the need for a model of the human, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x-y table and a robot arm, and experimental results on a PR2 robot, confirm the suitability of the proposed method.
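When the system model is known, the LQR problem the outer loop solves reduces to an algebraic Riccati equation; the paper's contribution is to solve it via integral reinforcement learning precisely because the human model is unavailable. As a point of reference, a minimal model-based baseline can be sketched as follows. The matrices A, B and weights Q, R below are illustrative placeholders (a double-integrator stand-in for the tracking-error dynamics), not values from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator model of the tracking-error dynamics
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

Q = np.diag([10.0, 1.0])   # penalize tracking error and error rate
R = np.array([[0.1]])      # penalize control (human/robot) effort

# Solve the continuous-time algebraic Riccati equation for the cost matrix P
P = solve_continuous_are(A, B, Q, R)

# Optimal LQR state-feedback gain: u = -K x
K = np.linalg.inv(R) @ B.T @ P

# The closed-loop matrix A - B K is Hurwitz (all eigenvalues in the left half-plane)
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

Integral reinforcement learning arrives at the same gain iteratively from measured state and input trajectories, without ever forming A, which is what makes the approach applicable when the human operator's dynamics are unknown.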