In our daily activities, we often need to walk while interacting with our environment in order to meet everyday goals. Walking requires processing both external and internal sensory information to maintain action goals, react to changing environmental features, and readapt motor programs whenever unexpected events occur. Therefore, although often perceived as undemanding, walking engages both sensory and cognitive systems.
Interest in the virtualization of human–robot interactions is increasing, yet the impact that collaborating with either virtual or physical robots has on the human operator's mental state is still insufficiently studied. In the present work, we aimed to fill this gap by conducting a systematic assessment of a human–robot collaborative framework from a user-centric perspective. Mental workload was measured in participants working in synergistic cooperation with a physical and a virtual collaborative robot (cobot) under different levels of task demands. Performance was recorded, and implicit and explicit workload were assessed via pupil size variation and self-report questionnaires, respectively. Despite similar self-reported mental demand when maneuvering the virtual or the physical cobot, operators showed shorter operation times and lower implicit workload when interacting with the virtual cobot compared to its physical counterpart. Furthermore, the benefits of collaborating with a virtual cobot were most evident when the user had to position the robotic arm with higher precision. These results shed light on the feasibility and importance of relying on multidimensional assessments in real-life work settings, including implicit workload predictors such as pupillometric measures. From a broader perspective, our findings suggest that virtual simulations have the potential to bring significant advantages for both the user's mental well-being and industrial production, particularly for highly complex and demanding tasks.
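To make the pupillometric approach mentioned above concrete, the sketch below shows one common way to derive an implicit workload index from raw pupil traces: baseline-correcting each trial against a short pre-task window and averaging the task-evoked dilation. This is a minimal illustration, not the authors' actual pipeline; the function name `workload_index`, the baseline duration, and the millimeter units are all assumptions.

```python
# A minimal sketch of a pupillometric implicit-workload index, assuming pupil
# size is sampled at a fixed rate and segmented into per-trial traces. Names
# such as `workload_index` are hypothetical; the study's pipeline may differ.
import numpy as np

def workload_index(trials: np.ndarray, fs: float, baseline_s: float = 0.5) -> np.ndarray:
    """Baseline-corrected task-evoked pupil dilation, one value per trial.

    trials: array of shape (n_trials, n_samples); each row starts `baseline_s`
            seconds before task onset.
    fs:     sampling rate of the eye tracker in Hz.
    """
    n_base = int(baseline_s * fs)                    # samples in pre-task window
    baseline = trials[:, :n_base].mean(axis=1)       # per-trial baseline size
    evoked = trials[:, n_base:] - baseline[:, None]  # subtractive correction
    return evoked.mean(axis=1)                       # mean dilation per trial

# Example: compare mean dilation between virtual- and physical-cobot blocks
# (synthetic data; pupil size in assumed units of mm, eye tracker at 120 Hz).
rng = np.random.default_rng(0)
virtual = workload_index(rng.normal(3.0, 0.1, (40, 600)), fs=120.0)
physical = workload_index(rng.normal(3.2, 0.1, (40, 600)), fs=120.0)
print(f"virtual: {virtual.mean():.3f} mm, physical: {physical.mean():.3f} mm")
```

A higher mean dilation relative to baseline is conventionally read as higher implicit load, which is the direction of effect the abstract reports for the physical cobot.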
While walking in our natural environment, we continuously solve additional cognitive tasks. This increases the demand on resources needed by both the cognitive and motor systems, resulting in Cognitive-Motor Interference (CMI). While it is well known that a performance decrease in one or both tasks can be observed, little is known about the human brain dynamics underlying CMI during dual-task walking. Moreover, a large portion of previous investigations of CMI took place in static settings, emphasizing experimental rigor at the expense of ecological validity. To address these problems, we developed a dual-task walking scenario in virtual reality (VR) combined with Mobile Brain/Body Imaging (MoBI). We aimed to investigate how brain dynamics are modulated during natural overground walking while simultaneously performing a visual discrimination task in an ecologically valid scenario. Even though the visual task did not affect performance while walking, a P3 amplitude reduction and changes in power spectral densities (PSDs) were observed during dual-task walking. Replicating previous results, this reflects the impact of walking on the parallel processing of visual stimuli, even when the cognitive task is particularly easy. This standardized and easy-to-modify VR paradigm helps to systematically study CMI, allowing researchers to control the complexity of different tasks and sensory modalities. Future studies implementing an improved virtual design with more challenging cognitive and motor tasks will have to disentangle the roles of cognition and motion, allowing for a better understanding of the functional architecture of attention reallocation between cognitive and motor systems during active behavior.
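For readers unfamiliar with the two EEG measures named above, here is a minimal sketch of how a P3 amplitude and a power spectral density could be computed from segmented epochs using NumPy and SciPy. The sampling rate, epoch layout, the 300-500 ms P3 window, and the alpha-band bounds are illustrative assumptions, not the study's actual analysis parameters.

```python
# A minimal sketch of the two EEG measures mentioned above: P3 amplitude from
# an event-related average, and PSDs via Welch's method. All parameters here
# (sampling rate, windows, bands) are illustrative assumptions.
import numpy as np
from scipy.signal import welch

fs = 250.0                      # assumed EEG sampling rate in Hz
rng = np.random.default_rng(1)
# epochs: (n_trials, n_samples); stimulus onset at t = 0, epoch spans 0-800 ms
epochs = rng.normal(0.0, 5.0, (100, int(0.8 * fs)))

# P3 amplitude: mean of the trial-averaged ERP in an assumed 300-500 ms window.
erp = epochs.mean(axis=0)
t = np.arange(erp.size) / fs
p3 = erp[(t >= 0.3) & (t <= 0.5)].mean()
print(f"P3 amplitude: {p3:.2f} uV")

# PSD: Welch's method per trial, then averaged across trials, e.g. to compare
# single-task and dual-task walking conditions within a frequency band.
freqs, psd = welch(epochs, fs=fs, nperseg=128, axis=-1)
mean_psd = psd.mean(axis=0)
alpha = mean_psd[(freqs >= 8) & (freqs <= 12)].mean()
print(f"mean alpha-band power: {alpha:.3f} uV^2/Hz")
```

In a dual-task design like the one described, such values would be computed separately per condition (standing vs. walking, single vs. dual task) and compared statistically.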
A valid Human-Robot Interaction (HRI) should be effective for the majority of the population. However, individual factors such as gender and gaming experience are likely to affect users' performance when interacting with a robot. In the present study, we measured the performance and perceived workload of participants driving a robot through a pick-and-place task in Virtual Reality (VR) via controller buttons or physical actions. The following individual factors were considered in the analysis: gaming experience, gender, learnability skills, problem solving, and trust in technology. Results showed that all of the considered individual factors affected either performance or perceived demand, but only when the robot was guided via controller buttons. Our findings support the adoption of more natural ways of teleoperating robots, such as physical actions, as these proved to be free from the influence of individual factors and are likely to be effective for a broader section of the population.
This work involved human subjects or animals in its research. Approval of all ethical and experimental procedures and protocols was granted by the Ethical Committee of the HiT Center under Application Nos. 2019_56R1 and 2020_78, and all procedures were performed in line with the Declaration of Helsinki.