This study can inform display design to support the multitasking performance of anesthesiologists in the clinical setting and of other supervisory control operators in work domains characterized by high demands on visual and auditory resources.
We describe a multimodal dataset acquired in a controlled experiment on a driving simulator. The set includes data for n = 68 volunteers who drove the same highway under four conditions: no distraction, cognitive distraction, emotional distraction, and sensorimotor distraction. The experiment closed with a special driving session in which all subjects experienced a startle stimulus in the form of unintended acceleration; half of them drove under a mixed distraction and the other half in the absence of any distraction. During the experimental drives, key response variables and several explanatory variables were continuously recorded. The response variables included speed, acceleration, brake force, steering, and lane position signals, while the explanatory variables included perinasal electrodermal activity (EDA), palm EDA, heart rate, breathing rate, and facial expression signals; biographical and psychometric covariates as well as eye-tracking data were also obtained. This dataset enables research into driving behaviors under neatly abstracted distracting stressors, which account for a large share of car crashes. The set can also be used for physiological channel benchmarking and multispectral face recognition.
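A minimal sketch of how such per-drive signals might be loaded and aligned for analysis is shown below. The directory layout, file names, and column names are assumptions for illustration only; the released dataset's actual schema may differ.

```python
import pandas as pd

# Hypothetical layout: one CSV per subject and drive condition, sharing a time axis.
# Column names (e.g., "speed", "lane_position", "perinasal_eda") are illustrative.
def load_drive(subject_id: int, condition: str) -> pd.DataFrame:
    """Load one subject's drive and merge driving response and physiological signals."""
    responses = pd.read_csv(f"subject_{subject_id:02d}/{condition}_driving.csv",
                            index_col="time_s")   # speed, acceleration, brake, steering, lane_position
    physiology = pd.read_csv(f"subject_{subject_id:02d}/{condition}_physio.csv",
                             index_col="time_s")  # perinasal_eda, palm_eda, heart_rate, breathing_rate
    # Physiological channels are often sampled at a different rate, so snap them
    # onto the driving time base before joining.
    physiology = physiology.reindex(responses.index, method="nearest")
    return responses.join(physiology)

df = load_drive(1, "cognitive")
print(df[["speed", "lane_position", "perinasal_eda"]].describe())
```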
Objective: The objective of this study was to analyze a set of driver performance and physiological data using advanced machine learning approaches, including feature generation, to determine the best-performing algorithms for detecting driver distraction and predicting the source of distraction. Background: Distracted driving is a causal factor in many vehicle crashes, often resulting in injuries and deaths. As mobile devices and in-vehicle information systems become more prevalent, the ability to detect and mitigate driver distraction becomes more important. Method: This study trained 21 algorithms to identify when drivers were distracted by secondary cognitive and texting tasks. The algorithms took physiological and driving behavior measures as input, processed with a comprehensive feature generation package, Time Series Feature Extraction based on Scalable Hypothesis tests (tsfresh). Results: Results showed that a Random Forest algorithm, trained using only driving behavior measures and excluding driver physiological data, was the highest-performing algorithm for accurately classifying driver distraction. The most important input measures identified were lane offset, speed, and steering, whereas the most important feature types were standard deviation, quantiles, and nonlinear transforms. Conclusion: This work suggests that distraction detection algorithms may be improved by considering ensemble machine learning algorithms trained with driving behavior measures and nonstandard features. In addition, the study presents several new indicators of distraction derived from speed and steering measures. Application: Future development of distraction mitigation systems should focus on driver behavior–based algorithms that use complex feature generation techniques.
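The sketch below illustrates the general shape of such a pipeline, combining tsfresh feature generation with a scikit-learn Random Forest restricted to driving behavior channels. The window format, file names, and column names are assumptions for illustration, not the study's actual code or data.

```python
import pandas as pd
from tsfresh import extract_features
from tsfresh.utilities.dataframe_functions import impute
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical long-format input: one row per time sample, with a window id,
# a time column, and the driving behavior channels named in the abstract.
# windows: columns ["window_id", "time", "lane_offset", "speed", "steering"]
# labels:  Series indexed by window_id, 1 = distracted, 0 = not distracted
windows = pd.read_csv("driving_windows.csv")                      # placeholder file name
labels = pd.read_csv("labels.csv", index_col="window_id")["distracted"]

# tsfresh generates a large feature matrix per window (quantiles, standard
# deviation, nonlinear transforms, and many others).
features = extract_features(windows, column_id="window_id", column_sort="time")
impute(features)  # replace NaN/inf produced by degenerate windows

# Random Forest trained on driving behavior features only.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, features.loc[labels.index], labels, cv=5)
print("mean CV accuracy:", scores.mean())
```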
In a simulation experiment, we studied the effects of cognitive, emotional, sensorimotor, and mixed stressors on driver arousal and performance with respect to (wrt) baseline. In a sample of n = 59 drivers, balanced in terms of age and gender, we found that all stressors led to significant increases in mean sympathetic arousal, accompanied by significant increases in mean absolute steering. The latter translated to a significantly larger range of lane departures only in the case of sensorimotor and mixed stressors, indicating more dangerous driving wrt baseline. In the case of cognitive or emotional stressors, a smaller range of lane departures was often observed, indicating safer driving wrt baseline. This paradox suggests an effective coping mechanism at work, which compensates for erroneous reactions precipitated by cognitive or emotional conflict. This mechanism's grip slips, however, when the feedback loop is intermittently severed by sensorimotor distractions. Interestingly, mixed stressors did not affect crash rates in startling events, suggesting that the coping mechanism's compensation time scale is above the range of neurophysiological latency.
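As an illustration only, the two driving performance measures named above could be operationalized per drive as in the sketch below; the signal names are assumptions, not the study's definitions.

```python
import pandas as pd

def driving_measures(drive: pd.DataFrame) -> pd.Series:
    """Compute per-drive performance measures from simulator signals (illustrative)."""
    return pd.Series({
        # Mean absolute steering: average magnitude of the steering signal.
        "mean_abs_steering": drive["steering"].abs().mean(),
        # Range of lane departures: spread of lateral lane position over the drive.
        "lane_departure_range": drive["lane_position"].max() - drive["lane_position"].min(),
    })
```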
Objectives: This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Background: Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. Method: A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Results: Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets, but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. Conclusions: The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. Application: The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.
Objective: This study examined the effectiveness of using informative peripheral visual and tactile cues to support task switching and interruption management. Background: Effective support for the allocation of limited attentional resources is needed for operators who must cope with numerous competing task demands and frequent interruptions in data-rich, event-driven domains. One prerequisite for meeting this need is to provide information that allows them to make informed decisions about, and before, (re)orienting their attentional focus. Method: Thirty participants performed a continuous visual task. Occasionally, they were presented with a peripheral visual or tactile cue that indicated the need to attend to a separate visual task. The location, frequency, and duration parameters of these cues represented the domain, importance, and expected completion time, respectively, of the interrupting task. Results: The findings show that the informative cues were detected and interpreted reliably. Information about the importance (rather than duration) of the task was used by participants to decide whether to switch attention to the interruption, indicating adherence to experimenter instructions. Erroneous task-switching behavior (nonadherence to experimenter instructions) was mostly caused by misinterpretation of cues. Conclusion: The effectiveness of informative peripheral visual and tactile cues for supporting interruption management was validated in this study. However, the specific implementation of these cues requires further work and needs to be tailored to specific domain requirements. Application: The findings from this research can inform the design of more effective notification systems for a variety of complex event-driven domains, such as aviation, medicine, or process control.
The design of multimodal interfaces rarely takes into consideration recent data suggesting the existence of considerable crossmodal spatial and temporal links in attention. This can be partly explained by the fact that crossmodal links have been studied almost exclusively in spartan laboratory settings with simple cues and tasks. As a result, it is not clear whether they scale to more complex settings. To examine this question, participants in this experiment drove a simulated military vehicle and were periodically presented with lateralized visual indications marking locations of roadside mines and safe areas of travel. Valid and invalid auditory and tactile cues preceded these indications at varying stimulus-onset asynchronies (SOAs). The findings confirm that the location and timing of crossmodal cue combinations affect response time and accuracy in complex domains as well. In particular, presentation of crossmodal cues at SOAs below 500 ms and tactile cuing resulted in lower accuracy and longer response times.