Recent developments in commercial virtual reality (VR) hardware with embedded eye-tracking create tremendous opportunities for human subjects researchers. Accessible eye-tracking in VR opens new opportunities for highly controlled experimental setups in which participants can engage with novel 3D digital environments. However, because VR-embedded eye-tracking differs from the majority of historical eye-tracking research, in that it allows both relatively unconstrained movement and variable stimulus presentation distances, there is a need for greater discussion around methods for implementation and validation of VR-based eye-tracking tools. The aim of this paper is to provide a practical introduction to the challenges of, and methods for, 3D gaze-tracking in VR, with a focus on best practices for results validation and reporting. Specifically, we first identify and define challenges and methods for collecting and analyzing 3D eye-tracking data in VR. We then introduce a validation pilot study with a focus on factors related to 3D gaze tracking. The pilot study both provides a reference data point for a common commercial hardware/software platform (HTC Vive Pro Eye) and illustrates the proposed methods. One outcome of this study was the observation that the accuracy and precision of collected data may depend on stimulus distance, which has consequences for studies where stimuli are presented at varying distances. We also conclude that vergence is a potentially problematic basis for estimating gaze depth in VR and should be used with caution as the field moves towards a more established method for 3D eye-tracking.
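The vergence-based gaze-depth estimate that this abstract cautions against is commonly computed as the point of closest approach between the two eye rays. A minimal sketch of that geometry is given below; the function name, eye positions, and interpupillary distance are illustrative assumptions, not the study's actual implementation. The sketch also makes the fragility visible: because the rays are nearly parallel for distant targets, tiny angular noise in the measured gaze directions produces large depth errors.

```python
import numpy as np

def vergence_gaze_point(left_origin, left_dir, right_origin, right_dir):
    """Estimate a 3D gaze point as the midpoint of closest approach
    between the left- and right-eye gaze rays (a common vergence heuristic).

    Returns None when the rays are near-parallel, in which case the
    vergence depth estimate is ill-defined.
    """
    d1 = np.asarray(left_dir, dtype=float)
    d2 = np.asarray(right_dir, dtype=float)
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    o1 = np.asarray(left_origin, dtype=float)
    o2 = np.asarray(right_origin, dtype=float)

    # Standard closest-point-between-two-lines solution:
    # minimize |(o1 + s*d1) - (o2 + t*d2)|.
    w0 = o1 - o2
    b = d1 @ d2                 # cosine of the vergence angle
    d = d1 @ w0
    e = d2 @ w0
    denom = 1.0 - b * b         # -> 0 as the rays become parallel
    if abs(denom) < 1e-9:
        return None
    s = (b * e - d) / denom
    t = (e - b * d) / denom
    p1 = o1 + s * d1            # closest point on the left-eye ray
    p2 = o2 + t * d2            # closest point on the right-eye ray
    return (p1 + p2) / 2.0      # midpoint of closest approach


# Illustrative setup: eyes 64 mm apart, both fixating a target 1 m ahead.
left_eye = np.array([-0.032, 0.0, 0.0])
right_eye = np.array([0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
gaze = vergence_gaze_point(left_eye, target - left_eye,
                           right_eye, target - right_eye)
```

With noise-free directions the estimate recovers the target exactly; in practice, sub-degree errors in the measured directions shift the intersection substantially at distances beyond arm's length, which is one reason vergence-based depth is treated with caution.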
Although the automation level is high within the automotive industry, there are still a large number of manual tasks, especially in the final assembly of the vehicle. Overhead assembly operations are an example of problematic manual tasks that can cause workers to develop musculoskeletal disorders in the shoulder complex. Exoskeletons may be a solution to reduce the risk of developing musculoskeletal disorders from such work tasks. This study evaluates and compares how the use of three different passive upper body exoskeletons affects the range of motion (ROM) of workers during overhead assembly tasks. An experiment consisting of three tasks was set up in order to analyze the differences between the models. Seventeen subjects were involved in the study. Interviews, observations, videos, and motion capture recordings were the methods of collecting data. The results show agreement from all the subjects that the exoskeletons help the worker at this specific assembly operation. The results also show that different exoskeleton models cause different levels of ROM reduction. The subjects' opinions about how the different exoskeletons influence the ROM correspond with the analysis of the motion capture data. Positive and negative aspects of each exoskeleton from a ROM and an implementation point of view are discussed. In general, the results indicate that the exoskeleton models can be applicable for the type of work tasks studied. However, the exoskeletons would benefit from further development in order to decrease ROM limitations and thereby cover a larger number of different manual assembly tasks.
This paper presents a solution that integrates a smart textiles system with virtual reality to assess the design of workstations from an ergonomics point of view. By using the system, ergonomists, designers, engineers, and operators can test design proposals of workstations in an immersive virtual environment while seeing the ergonomics evaluation results displayed in real time. The system allows its users to evaluate the ergonomics of the workplace in a pre-production phase. The workstation design can be modified, enabling workstation designers to better understand, test, and evaluate how to create successful workstation designs, eventually to be used by the operators in production. This approach uses motion capture together with virtual reality and aims to complement and integrate with the use of digital human modelling (DHM) software at virtual stages of the production development process.