Most automated vehicle studies have focused on limited automation in which the user acts as a driver, supervisor, or fallback; comparatively few have considered riders. If riders' experiences are ignored, adoption of the technologies, and consequently the realization of their anticipated benefits, could be undermined. A driving simulator study was conducted to evaluate riders' responses to intersection negotiation with conservative, moderate, or aggressive automated driving styles. Riders' emotional responses, operationalized as changes in facial action units, were detected using video processing software. Results showed that changes in speed, acceleration, and jerk preceded changes in the facial action units and were associated with the magnitude of the change. Speed, acceleration, and jerk differed across the automated driving styles, which in turn affected the magnitude and timing of the emotional response. Facial action units may provide a way to gauge riders' emotional responses to vehicle control algorithms, which could be used to improve rider experiences.
Summary: Driving is performed while processing various internal driver cues and external cues from the driving environment (e.g., subtle vibrations, lateral and longitudinal acceleration). The present study was conducted to identify how much external cues affect drivers' gaze behavior in an automated driving environment. Fifteen participants drove a commercially available vehicle with longitudinal and lateral automation on an oval test track. Participants drove the vehicle with and without automation, with or without a side task, and with their hands on or off the wheel. Drivers' gaze behavior, hands-on-wheel status, and driving conditions were annotated from video data. The results showed that during automated driving with side-task performance, eyes-on-road time was significantly greater after entering a curve than before, and also increased with changes in speed. These differences were not observed in automated driving when no side task was performed, and gaze behavior was more sensitive to these driving conditions than to hands-on- versus hands-off-wheel status. The results also suggest that drivers may process nonvisual information (e.g., vestibular information produced by changes in lateral and longitudinal vehicle acceleration) before or even during the implementation of a visual resource allocation strategy. The present study suggests that driver awareness can be aided without requiring the driver to grip the steering wheel.