While traffic signals, signs, and road markings provide explicit guidelines for those operating in and around the roadways, some decisions, such as determinations of “who will go first,” are made by implicit negotiations between road users. In such situations, pedestrians today often depend on cues in drivers’ behavior, such as eye contact, posture, and gestures. With the introduction of more automated functions and the transfer of control from the driver to the vehicle, pedestrians can no longer rely on such non-verbal cues. To study what the interaction between pedestrians and automated vehicles (AVs) might look like in the future, and how it might be affected if AVs were to communicate their intent to pedestrians, we designed an external vehicle interface called the Automated Vehicle Interaction Principle (AVIP) that communicates a vehicle’s mode and intent to pedestrians. The interaction was explored in two experiments using a Wizard of Oz approach to simulate automated driving. The first experiment was carried out at a zebra crossing and involved nine pedestrians. While it focused mainly on assessing the usability of the interface, it also revealed initial indications of pedestrians’ emotions and perceived safety when encountering an AV with and without the interface. The second experiment was carried out in a parking lot and involved 24 pedestrians, which enabled a more detailed assessment of pedestrians’ perceived safety when encountering an AV, both with and without the interface. For comparison purposes, these pedestrians also encountered a conventional vehicle. After a short training course, the pedestrians found the interface easy to interpret. They reported feeling significantly less safe when encountering the AV without the interface than when encountering either the conventional vehicle or the AV with the interface.
This suggests that the interface could contribute to a positive experience and improved perceived safety in pedestrian encounters with AVs – something that may be important for the general acceptance of AVs. This topic should therefore be investigated further in future studies involving a larger sample and more dynamic conditions.
In-vehicle information systems (IVIS) may contribute to increased levels of cognitive workload, which in turn can lead to more dangerous driving behaviour. An experiment was conducted to examine the use of auditory signs to support drivers' traffic situation awareness. Eighteen experienced truck drivers identified traffic situations based on information conveyed by brief sounds. Aspects of learning, cognitive demand and pleasantness were monitored and rated by the drivers. Differences in cognitive effort were estimated using a dual-task set-up, in which drivers responded to auditory signs while simultaneously performing a simulated driving task. As expected, arbitrary sounds required significantly longer learning times than sounds with a natural meaning in the driving context. The arbitrary sounds also resulted in a significant degradation in response performance, even after the drivers had been given the opportunity to learn the sounds. Finally, the results indicate that the use of arbitrary sounds can negatively impact driver satisfaction. These results have implications for a broad range of intelligent transport systems under development that are designed to assist drivers in the absence of fundamental visual information or in visually demanding traffic situations.
This position paper discusses vital challenges related to user experience design in unsupervised, highly automated cars. These challenges are: (1) how to avoid motion sickness, (2) how to ensure users’ trust in the automation, (3) how to ensure usability and support the formation of accurate mental models of the automation system, and (4) how to provide a pleasant and enjoyable experience. We argue that auditory displays have the potential to help solve these issues. While auditory displays in modern vehicles typically make use of discrete and salient cues, we argue that less intrusive continuous sonic interaction could be more beneficial for the user experience.
This paper presents the development of a multimodal warning display for a paper mill control room. In previous work, an informative auditory display for control room warnings was proposed; it conveys information about urgent events using a combination of auditory icons and tonal components. The main aim of the present study was to investigate whether a complementary visual display could increase the effectiveness and acceptance of the existing auditory solution. The visual display was designed in a user-driven design process with operators, and an evaluation was conducted both before and after its implementation. Subjective ratings showed that operators found it easier to identify the alarming section using the multimodal display. These results can be useful for any designer intending to implement a multimodal warning display in an industrial context.