With the development of collaborative robots in manufacturing, physical interaction between humans and robots plays a vital role in performing tasks collaboratively. Most existing studies have focused on robot motion planning and control during the execution of a task. However, for effective task distribution and allocation, knowledge of the human's physical and psychological state is essential. In this research, a hardware setup, support software for a set of wearable sensors, and a data acquisition framework are developed, which can be used to design more efficient human-robot collaboration strategies. The developed framework is intended to recognise the human's mental state and physical activities, so that a robot can effectively and naturally perform a given task with the human. Moreover, the data collected through the developed hardware enables online classification of human intentions and activities; robots can therefore actively adapt to ensure the safety of the human while delivering the required task.
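As a rough illustration of what such online classification could look like, the Python sketch below windows a wearable-sensor stream, extracts simple per-channel statistics, and feeds them to a standard classifier. All names and parameters (window length, channel count, feature set, the choice of RandomForestClassifier) are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: online activity classification from a wearable-sensor
# stream, assuming fixed-length windows of IMU data. Window size, channel
# layout, and features are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 128    # samples per window (assumed)
CHANNELS = 6    # e.g. 3-axis accelerometer + 3-axis gyroscope (assumed)

def window_features(window: np.ndarray) -> np.ndarray:
    """Simple per-channel statistics for one (WINDOW, CHANNELS) slice."""
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        np.abs(np.diff(window, axis=0)).mean(axis=0),
    ])

def train(windows: np.ndarray, labels: np.ndarray) -> RandomForestClassifier:
    """Fit on previously recorded, labelled windows of sensor data."""
    feats = np.stack([window_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(feats, labels)
    return clf

def classify_stream(clf, stream):
    """Online use: classify each incoming (WINDOW, CHANNELS) array as it arrives."""
    for window in stream:
        yield clf.predict(window_features(window).reshape(1, -1))[0]
```

In an online setting, `classify_stream` would be driven by the data acquisition framework, emitting one activity label per window as data arrives.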
Manufacturing challenges are increasing the demand for more agile and dexterous means of production, while these systems aim to maintain or even increase productivity. The challenges arising from these developments can be tackled through Human-Robot Collaboration (HRC). HRC requires effective task distribution according to each party's distinctive strengths, which is envisioned to generate synergetic effects. To enable seamless collaboration, the human and robot require mutual awareness, which is challenging because the human and robot "speak" different languages, analogue versus digital. This challenge can be addressed by equipping the robot with a model of the human. Although a range of models is available, data-driven models of the human are still at an early stage. This paper therefore proposes an adaptive human sensor framework, which incorporates objective, subjective, and physiological metrics, as well as associated machine learning, and is thus envisioned to adapt to the uniqueness and dynamic nature of human behavior. To test the framework, a validation experiment with 18 participants was performed, aiming to predict perceived workload during two scenarios: a manual and an HRC assembly task. Perceived workload is reported to have a substantial impact on a human operator's task performance. Throughout the experiment, physiological data from an electroencephalogram (EEG), an electrocardiogram (ECG), and a respiration sensor were collected and interpreted. For subjective metrics, the standardized NASA Task Load Index was used. Objective metrics included task completion time and the number of errors/assistance requests. Overall, the framework revealed promising potential towards adaptive behavior, which is ultimately envisioned to enable more effective HRC.
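A minimal sketch of the kind of model such a framework could use is given below, assuming per-trial feature vectors have already been extracted from the EEG, ECG, and respiration signals and labelled with NASA-TLX scores. The specific features and regressor are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: predicting perceived workload (NASA-TLX score) from
# hand-crafted physiological features. Feature choices below are common in
# the literature and assumed here, not taken from the paper.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingRegressor

def evaluate_workload_model(X: np.ndarray, y: np.ndarray) -> float:
    """X: one row per trial, e.g. [EEG alpha/theta band power, mean heart
    rate, heart-rate variability (RMSSD), breathing rate, ...].
    y: the trial's NASA-TLX workload score."""
    model = make_pipeline(StandardScaler(), GradientBoostingRegressor(random_state=0))
    # Cross-validated R^2 as a rough indicator of predictive value.
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()
```

Combining such a learned mapping with the objective metrics (completion time, errors/assistance requests) is what would let the robot adapt its behavior to the operator's current workload.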
In this paper, a grounding framework is proposed that combines unsupervised and supervised grounding by extending an unsupervised grounding model with a mechanism to learn from explicit human teaching. To investigate whether explicit teaching improves the sample efficiency of the original model, both models are evaluated through an interaction experiment between a human tutor and a robot, in which synonymous shape, color, and action words are grounded in geometric object characteristics, color histograms, and kinematic joint features. The results show that explicit teaching improves the sample efficiency of the unsupervised baseline model.
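The general idea can be sketched as follows: cluster perceptual features without supervision, then let a handful of explicitly taught (word, percept) pairs label the clusters. The Python below is a deliberate simplification under assumed interfaces, not the paper's actual model.

```python
# Hypothetical sketch: grounding words in perceptual features (e.g. color
# histograms) by unsupervised clustering, with explicit teaching attaching
# word labels to clusters. A simplification for illustration only.
from collections import Counter, defaultdict
import numpy as np
from sklearn.cluster import KMeans

class GroundingModel:
    def __init__(self, n_clusters: int):
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        self.cluster_words = defaultdict(Counter)  # cluster id -> word counts

    def fit_unsupervised(self, features: np.ndarray):
        """Learn perceptual categories from unlabelled observations."""
        self.kmeans.fit(features)

    def teach(self, feature: np.ndarray, word: str):
        """Explicit teaching: the tutor names a percept, labelling its cluster."""
        c = int(self.kmeans.predict(feature.reshape(1, -1))[0])
        self.cluster_words[c][word] += 1

    def name(self, feature: np.ndarray) -> str | None:
        """Return the most frequently taught word for the percept's cluster."""
        c = int(self.kmeans.predict(feature.reshape(1, -1))[0])
        counts = self.cluster_words[c]
        return counts.most_common(1)[0][0] if counts else None
```

Because one cluster can accumulate several taught words, this toy setup also tolerates synonyms: `name` simply returns the word most often used by the tutor for that perceptual category.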