Abstract: With the accelerated aging of the global population and escalating labor costs, more service robots are needed to help people perform complex tasks. As such, human-robot interaction is a particularly important research topic. To effectively transfer human behavior skills to a robot, in this study, we conveyed skill-learning functions via our proposed wearable device. The robotic teleoperation system utilizes interactive demonstration via the wearable device by directly controlling the speed of the motors. We p…
“…Another important type of application of interactions with robots is robotic manipulation learning, which means helping robots learn operation skills from humans effectively, in other words, transferring human experience to robots [96]. This is a useful technique to augment a robot’s behavioral inventory, especially for small- or medium-sized production lines, where the production process often needs to be adapted or modified [97]. To achieve this goal, wearable devices need to acquire human manipulation data, build a mapping between humans and robots, and extract skill features from the mapped manipulation data, similar to the human-experience learning system shown in.…”
Wearable sensing devices, which are smart electronic devices that can be worn on the body as implants or accessories, have attracted much research interest in recent years. They are rapidly advancing in terms of technology, functionality, size, and real-time applications along with the fast development of manufacturing technologies and sensor technologies. By covering some of the most important technologies and algorithms of wearable devices, this paper is intended to provide an overview of upper-limb wearable device research and to explore future research trends. The review of the state-of-the-art of upper-limb wearable technologies involving wearable design, sensor technologies, wearable computing algorithms and wearable applications is presented along with a summary of their advantages and disadvantages. Toward the end of this paper, we highlight areas of future research potential. It is our goal that this review will guide future researchers to develop better wearable sensing devices for upper limbs.
“…They can be applied in areas that humans cannot reach, such as aerial photography, field exploration, etc. Human-robot interaction (Fang et al., 2019), including human-UAV interaction technology, has also attracted attention recently. However, the traditional approach of interacting with UAVs through remote-control devices is not convenient when the human is busy with other tasks during field exploration.…”
This paper presents an intuitive end-to-end interaction system between a human and a hexacopter Unmanned Aerial Vehicle (UAV) for field exploration, in which the UAV can be commanded by natural human poses. Moreover, LEDs installed on the UAV communicate its state and intents to the human as feedback throughout the interaction. A real-time multi-human pose-estimation system is built that performs with low latency while maintaining competitive accuracy. The UAV is equipped with a robotic arm, for which kinematic and dynamic attitude models are derived by incorporating the vehicle's center of gravity (COG). In addition, a super-twisting extended state observer (STESO)-based back-stepping controller (BSC) is constructed to estimate and attenuate complex disturbances in the attitude control system of the UAV, such as wind gusts and model uncertainties. A stability analysis for the entire control system is also presented based on Lyapunov stability theory. The pose-estimation system is integrated with the proposed intelligent control architecture to command the UAV to execute an exploration task stably. Additionally, all the components of this interaction system are described. Several simulations and experiments have been conducted to demonstrate the effectiveness of the whole system and its individual components.
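As a rough illustration of the super-twisting idea underlying the STESO (not the authors' implementation), the sketch below simulates a scalar sliding variable driven by an unknown constant disturbance. The toy dynamics, gains k1 and k2, and step size are illustrative assumptions; the super-twisting law drives the sliding variable to zero despite the disturbance, which is what lets the observer estimate and cancel it.

```python
import math

def super_twisting_sim(d=0.5, k1=1.5, k2=1.1, dt=1e-3, steps=20000):
    """Simulate ds/dt = u + d under the super-twisting control law
        u = -k1 * sqrt(|s|) * sign(s) + v,   dv/dt = -k2 * sign(s).
    For a bounded disturbance d, v converges toward -d and the
    sliding variable s is driven to (near) zero in finite time.
    Returns the final value of s after the simulation horizon."""
    s, v = 1.0, 0.0  # initial sliding variable and integral term
    for _ in range(steps):
        sgn = math.copysign(1.0, s) if s != 0 else 0.0
        u = -k1 * math.sqrt(abs(s)) * sgn + v  # continuous control signal
        s += (u + d) * dt                      # plant with disturbance d
        v += -k2 * sgn * dt                    # integral (twisting) term
    return s
```

Because the control signal is continuous (only its derivative switches), the super-twisting structure attenuates the chattering that a plain sliding-mode sign controller would exhibit, which is why it is a popular basis for disturbance observers in attitude control.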
“…It plays an increasingly vital role in daily human life, in areas such as entertainment, education, and home service. In most cases (Billard et al., 2008; Yang et al., 2018; Fang et al., 2019), robots need to learn and execute many complex and repetitive tasks, which includes learning motion skills by observing humans performing these tasks, also known as teaching by demonstration (TbD). TbD is an efficient approach to reduce the complexity of teaching a robot to perform new tasks (Billard et al., 2008; Yang et al., 2018).…”
Section: Introduction
mentioning confidence: 99%
“…Recently, multimodal sensor fusion has been widely employed in human–robot interaction (HRI) to enhance the performance of interaction (Gui et al., 2017; Argyrou et al., 2018; Deng et al., 2018; Fang et al., 2019; Li C. et al., 2019). Gui et al. (2017) designed a multimodal rehabilitation HRI system that combines electroencephalogram (EEG)-based HRI and electromyography (EMG)-based HRI to assist the gait pattern, enhance users' active participation in gait rehabilitation, and provide abundant locomotion modes for the exoskeleton.…”
Though a robot can reproduce the demonstration trajectory from a human demonstrator by teleoperation, there is a certain error between the reproduced trajectory and the desired trajectory. To minimize this error, we propose a multimodal incremental learning framework based on a teleoperation strategy that enables the robot to reproduce the demonstrated task accurately. The multimodal demonstration data are collected from two different kinds of sensors in the demonstration phase. Then, the Kalman filter (KF) and dynamic time warping (DTW) algorithms are used to preprocess the multiple sensor signals. The KF algorithm is mainly used to fuse sensor data of different modalities, and the DTW algorithm is used to align the data on a common timeline. The preprocessed demonstration data are further trained and learned by the incremental learning network and sent to a Baxter robot to reproduce the task demonstrated by the human. Comparative experiments have been performed to verify the effectiveness of the proposed framework.
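The two preprocessing steps described in the abstract can be sketched in a minimal form. This is an illustrative assumption about the pipeline, not the paper's actual implementation: the Kalman-style fusion is shown here as its simplest static case (inverse-variance weighting of two measurements of the same quantity), and the DTW alignment operates on toy 1-D sequences.

```python
import numpy as np

def fuse(z1, var1, z2, var2):
    """Kalman-style fusion of two noisy measurements of the same quantity
    via inverse-variance weighting. Returns the fused estimate and its
    (reduced) variance."""
    k = var1 / (var1 + var2)          # gain favors the lower-variance sensor
    z = z1 + k * (z2 - z1)            # fused estimate
    var = var1 * var2 / (var1 + var2) # fused variance is always smaller
    return z, var

def dtw_align(a, b):
    """Dynamic time warping between 1-D sequences a and b.
    Returns the cumulative alignment cost and the warping path as a list
    of (i, j) index pairs mapping a[i] to b[j]."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # Backtrack from the end to recover the optimal warping path.
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min(steps, key=lambda s: cost[s])
    path.reverse()
    return cost[n, m], path
```

In a full pipeline, each fused sample from the two sensor modalities would be warped onto a common timeline with the DTW path before being fed to the learning network; a practical implementation would use a dynamic KF with a motion model rather than the static fusion shown here.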