Not just detecting but also predicting impairment of a car driver's operational state is a challenge. This study aims to determine whether the standard sources of information used to detect drowsiness can also be used to predict when a given drowsiness level will be reached. Moreover, we explore whether adding data such as driving time and participant information improves the accuracy of detection and prediction of drowsiness. Twenty-one participants drove a car simulator for 110 min under conditions optimized to induce drowsiness. We measured physiological and behavioral indicators such as heart rate and variability, respiration rate, head and eyelid movements (blink duration, frequency and PERCLOS) and recorded driving behavior such as time-to-lane-crossing, speed, steering wheel angle, and lateral position in the lane. Different combinations of this information were tested against the real state of the driver, namely the ground truth, as defined from video recordings via the Trained Observer Rating. Two models using artificial neural networks were developed, one to detect the degree of drowsiness every minute, and the other to predict every minute the time required to reach a particular drowsiness level (moderately drowsy). The best performance in both detection and prediction is obtained with behavioral indicators and additional information. The model can detect the drowsiness level with a mean square error of 0.22 and can predict when a given drowsiness level will be reached with a mean square error of 4.18 min. This study shows that, in a controlled and very monotonous environment conducive to drowsiness in a driving simulator, the dynamics of driver impairment can be predicted.
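The per-minute detection model described above can be pictured as a small feed-forward network mapping a feature vector to a drowsiness score. The sketch below is a minimal, hypothetical illustration only: the feature names, network size, and (untrained, random) weights are assumptions, not the authors' actual architecture.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden tanh layer; the linear output is the drowsiness score."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

rng = np.random.default_rng(0)
# Hypothetical per-minute feature vector: mean blink duration (s),
# blink frequency (blinks/min), PERCLOS, a head-movement index,
# plus additional information such as driving time (min) and age.
x = np.array([0.25, 14.0, 0.12, 0.3, 55.0, 34.0])
n_in, n_hidden = x.size, 8
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
b2 = np.zeros(1)
score = mlp_forward(x, W1, b1, W2, b2)  # one scalar rating per minute
```

In practice such a network would be trained against the Trained Observer Rating ground truth (e.g. by minimizing mean square error), which is how the 0.22 detection MSE reported above would be evaluated.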
While modern dynamic driving simulators equipped with six degrees-of-freedom (6-DOF) hexapods and X-Y platforms have improved realism, mechanical limitations prevent them from offering a fully realistic driving experience. Solutions are often sought in the "washout" algorithm, with linear accelerations simulated by an empirically chosen combination of translation and tilt-coordination, based on the incapacity of the otolith organs to distinguish between inclination of the head and linear acceleration. In this study, we investigated the most effective combination of tilt and translation to provide a realistic perception of movement. We tested 3 different braking intensities (decelerations), each with 5 inversely proportional tilt/translation ratios. Subjects evaluated braking intensity using an indirect method corresponding to a 2-Alternative-Forced-Choice paradigm. We find that the perceived intensity of braking depends on the tilt/translation ratio used: for small and average decelerations (0.6 and 1.0 m/s²), more tilt yielded a greater overestimation of braking, with the effect inversely proportional to intensity; for high decelerations (1.4 m/s²), in half the conditions braking was overestimated with more tilt than translation and underestimated with more translation than tilt. We define a mathematical function describing the relationship between tilt, translation and the desired level of deceleration, intended as a supplement to motion cueing algorithms, that should improve the realism of driving simulations.
Tilt-coordination is a technique which uses the tilt-translation ambiguity of the vestibular system to simulate linear accelerations on dynamic driving simulators, in combination with real linear accelerations. However, the tilt/translation ratio is chosen empirically. We experimentally determine the most realistic tilt/translation ratio to simulate a given value of deceleration. Under specific conditions of driving simulation, five tilt/translation ratios were applied, with inversely proportional quantities of tilt and translation, so that the sum of the two (the proportion of the deceleration simulated by translational motion and the proportion simulated by tilt) was always equal to the same overall value (0.8 m/s²). We find that different ratios lead to different perceptions, depending on the quantity of tilt and translation. With a higher tilt ratio, the braking is perceived as being stronger than with a higher translation ratio, and the most realistic tilt/translation ratio found is neither pure tilt nor pure translation, but 35/65% tilt/translation. The way these different ratios are perceived during braking is discussed from vestibular and non-vestibular points of view.
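The tilt/translation split described above can be made concrete with the standard tilt-coordination relation, in which the gravity component g·sin(θ) of a tilted cabin substitutes for part of the linear deceleration. The sketch below applies the 35/65% tilt/translation ratio reported above to the 0.8 m/s² target; the function name and exact formulation are illustrative, not the paper's motion cueing code.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def split_deceleration(a_total, tilt_fraction):
    """Split a target deceleration into a translational part and the
    cabin tilt angle (rad) whose gravity component g*sin(theta)
    supplies the remaining tilt-coordinated part."""
    a_tilt = tilt_fraction * a_total
    a_translation = a_total - a_tilt
    theta = math.asin(a_tilt / G)
    return a_translation, theta

# Most realistic ratio reported above: 35% tilt / 65% translation
# applied to the overall simulated deceleration of 0.8 m/s^2.
a_trans, theta = split_deceleration(0.8, 0.35)
# a_trans = 0.52 m/s^2; theta is a small angle of under 2 degrees,
# which is why the otoliths cannot distinguish it from acceleration.
```

The small resulting angle illustrates why tilt-coordination works: below the rotational detection threshold, the tilted gravity vector is indistinguishable from a sustained linear deceleration.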
Self-motion perception, which partly determines the realism of dynamic driving simulators, is based on multisensory integration. However, it remains unclear how the brain integrates these cues to create adequate motion perception, especially for curvilinear displacements. In the present study, the effect of visual, inertial and visuo-inertial cues (concordant or discordant bimodal cues) on self-motion perception was analyzed. Subjects were asked to evaluate (externally produced) or produce (self-controlled) curvilinear displacements as accurately as possible. The results show systematic overestimation of displacement, with better performance for active subjects than for passive ones. Furthermore, it was demonstrated that participants used unimodal or bimodal cues differently in performing their activity. When passive, subjects systematically integrated visual and inertial cues even when discordant, but with weightings that depended on the dynamics. On the contrary, active subjects were able to reject the inertial cue when the discordance became too high, producing self-motion perception on the basis of more reliable information. Thereby, multisensory integration seems to follow a non-linear integration model, i.e., one in which the cues' weights change with cue reliability and/or the intensity of the stimuli, as reported by previous studies. These results represent a basis for the adaptation of motion cueing algorithms developed for dynamic driving simulators, by taking into account the dynamics of simulated motion in line with the status of the participants (driver or passenger).
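The cue-weighting idea above can be sketched with the classic reliability-weighted (maximum-likelihood) combination rule, where each cue's weight is its normalized inverse variance. This is only the static building block: the abstract's point is that the weights themselves change non-linearly with reliability and stimulus intensity, and that active subjects can reject a cue outright. The estimates and variances below are hypothetical numbers chosen for illustration.

```python
import numpy as np

def fuse_cues(estimates, sigmas):
    """Reliability-weighted cue combination: each cue's weight is its
    inverse variance (1/sigma^2), normalized so the weights sum to 1."""
    w = 1.0 / np.square(sigmas)
    w = w / w.sum()
    return float(w @ estimates), w

# Hypothetical visual and inertial estimates (deg) of a curvilinear
# displacement, with the visual cue four times more reliable in variance.
fused, weights = fuse_cues(np.array([100.0, 80.0]),
                           np.array([5.0, 10.0]))
# weights -> [0.8, 0.2], fused -> 96.0 deg
```

Modeling the active subjects' behavior would require going beyond this linear rule, e.g. driving a cue's weight to zero once the visuo-inertial discordance exceeds a threshold.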
Self-motion perception, which partly determines the realism of dynamic driving simulators, is based on multisensory integration. For curved trajectories, adding linear translations to visual stimulation seems to improve the perception of motion (Bertin et al., 2004). However, cornering involves not only translational motion but also rotational motion in yaw. Wilkie and Wann (2005) found no effect of yaw motion on steering performance, but it seems that higher rates of physical stimulation could have a more visible effect. However, the influence of yaw acceleration on cornering perception is still a matter of debate, especially when associated with optic flow. Therefore, the present study aims to analyze the respective roles of vestibular (yaw acceleration) and visual stimulations in the perception of curvilinear trajectories. We designed two experiments in which the subjects had either to orally estimate their angular displacements as passive drivers, or to generate angular displacements by controlling the steering wheel as active drivers. In both experiments, subjects were submitted to three different conditions: (1) visual motion, (2) physical yaw acceleration, (3) combined visual and physical motions. Preliminary results of the first experiment show that visual stimulation produces greater overestimations of angular displacements than physical yaw motion, with the visuo-vestibular condition falling in between. They also suggest that the weights of visual and vestibular cues in cornering perception depend on the amplitude of the angular displacements. The second experiment should allow us to observe the evolution of visuo-vestibular interaction when subjects are active rather than passive drivers.