The effects of a mobile telephone task on young and elderly drivers' choice reaction time, headway, lateral position, and workload were studied while the subjects were driving in a car-following situation in the VTI driving simulator. It was found that the mobile telephone task had a negative effect on the drivers' choice reaction time, and that the effect was more pronounced for the elderly drivers. Furthermore, the subjects did not compensate for their increased reaction time by increasing their headway during the phone task. The subjects' mental workload, as measured by the NASA-TLX, increased as a function of the mobile telephone task. No effect on the subjects' lateral position could be detected. Taken together, these results indicate that accident risk can increase when a driver uses a mobile telephone in a car-following situation. The reasons for the increased risk, and possible ways to eliminate it, are also discussed.
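The failure to compensate described above can be made concrete with a back-of-the-envelope calculation: to preserve the same safety margin, a driver whose reaction time increases by ΔRT would need to add roughly speed × ΔRT of headway. The sketch below uses illustrative numbers, not values reported in the study.

```python
# Illustrative headway-compensation check; the speed and reaction
# times below are assumptions, not figures from the abstract.
speed_ms = 25.0     # following speed, ~90 km/h
baseline_rt = 0.9   # s, choice reaction time without the phone task
phone_rt = 1.4      # s, elevated reaction time during the phone task

# Extra distance travelled during the additional reaction time:
# this is the headway increase needed to keep the safety margin.
extra_headway = speed_ms * (phone_rt - baseline_rt)
print(f"Extra headway needed to compensate: {extra_headway:.1f} m")
```

Since the subjects kept their headway unchanged, this entire margin was lost during the phone task.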
The effects of a mobile telephone task on drivers' reaction time, lane position, speed level, and workload were studied in two driving conditions (an easy, rather straight route versus a hard, very curvy one). It was predicted that the mobile telephone task would have a negative effect on drivers' reaction time, lane position, and workload, and would lead to a reduction of speed. It was also predicted that the effects would be stronger for the hard driving task. The study was conducted in the VTI driving simulator. A total of 40 subjects, experienced drivers aged 23 to 61, were randomly assigned to four experimental conditions (telephone and easy or hard driving task versus control and easy or hard driving task). Contrary to the predictions, the strongest effects were found when the subjects were exposed to the easy driving task. In the condition where drivers had to perform the easy driving task, findings showed that the mobile telephone task had a negative effect on reaction time and led to a reduction of the speed level. In the condition where drivers had to perform the hard driving task, findings showed that the mobile telephone task had an effect only on the drivers' lateral position. Finally, the mobile telephone task led to an increased workload for both the easy and the hard driving task. The results are discussed in terms of which subtask, car driving or the telephone task, the subjects gave the highest priority. Some implications for information systems in future cars are discussed.
Twenty-nine patients with brain lesions and 29 matched controls participated in the study. The patients were socially well recovered, with a high rate of employment. Compared with the controls, they performed significantly worse on a neuropsychological test battery, especially on executive and cognitive functions. Patients drove as well as controls in predictable situations in the advanced simulator used. In unpredictable situations, they demonstrated longer reaction times and safety margins, as well as difficulties in allocating processing resources to a secondary task. The patients showed significantly less attention, worse traffic behavior, and less risk awareness when driving in real traffic. Forty-one percent of the patients did not pass the driving test. The neuropsychological test battery was factor analyzed into four factors: executive capacity, cognitive capacity, automatic attentional capacity, and simple perceptual-motor capacity. The second factor was the most significant, with a simultaneous capacity test predicting driving performance with 78% confidence.
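The analysis pipeline described above, reducing a test battery to latent factors and using the factor scores to predict a pass/fail driving outcome, can be sketched as follows. All data here are synthetic: the battery size, the scores, and the outcomes (beyond the reported 41% failure rate, 12 of 29 patients) are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical scores on a 12-subtest neuropsychological battery
# (the actual subtests and their loadings are not given in the abstract).
n_patients, n_tests = 29, 12
scores = rng.normal(size=(n_patients, n_tests))

# Reduce the battery to four latent factors, mirroring the reported
# structure (executive, cognitive, automatic attentional, and
# simple perceptual-motor capacity).
fa = FactorAnalysis(n_components=4, random_state=0)
factor_scores = fa.fit_transform(scores)

# Pass/fail outcomes matching the reported rate: 12 of 29 failed.
passed = np.array([1] * 17 + [0] * 12)

# Predict the driving-test outcome from the factor scores.
clf = LogisticRegression().fit(factor_scores, passed)
accuracy = clf.score(factor_scores, passed)
print(f"In-sample classification accuracy: {accuracy:.0%}")
```

With the study's real data, the second (cognitive capacity) factor would carry most of the predictive weight; with the synthetic data above the accuracy is meaningless and serves only to show the shape of the pipeline.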
Interaction between drivers and pedestrians is often facilitated by informal communicative cues, like hand gestures, facial expressions, and eye contact. In the near future, however, when semi- and fully autonomous vehicles are introduced into the traffic system, drivers will gradually assume the role of mere passengers, who are casually engaged in non-driving-related activities and, therefore, unavailable to participate in traffic interaction. In this novel traffic environment, advanced communication interfaces will need to be developed that inform pedestrians of the current state and future behavior of an autonomous vehicle, in order to maximize safety and efficiency for all road users. The aim of the present review is to provide a comprehensive account of empirical work in the field of external human-machine interfaces for autonomous vehicle-to-pedestrian communication. In the great majority of covered studies, participants clearly benefited from the presence of a communication interface when interacting with an autonomous vehicle. Nevertheless, standardized interface evaluation procedures and optimal interface specifications are still lacking.
Pedestrians base their street-crossing decisions on vehicle-centric as well as driver-centric cues. In the future, however, drivers of autonomous vehicles will be preoccupied with non-driving-related activities and will thus be unable to provide pedestrians with relevant communicative cues. External human–machine interfaces (eHMIs) hold promise for filling the expected communication gap by providing information about a vehicle’s situational awareness and intention. In this paper, we present an eHMI concept that employs a virtual human character (VHC) to communicate pedestrian acknowledgement and vehicle intention (non-yielding; cruising; yielding). Pedestrian acknowledgement is communicated via gaze direction, while vehicle intention is communicated via facial expression. The effectiveness of the proposed anthropomorphic eHMI concept was evaluated in the context of a monitor-based laboratory experiment where the participants performed a crossing intention task (self-paced, two-alternative forced choice) and their accuracy in making appropriate street-crossing decisions was measured. In each trial, they were first presented with a 3D animated sequence of a VHC (male; female) that either looked directly at them or clearly to their right while producing either an emotional (smile; angry expression; surprised expression), a conversational (nod; head shake), or a neutral (neutral expression; cheek puff) facial expression. Then, the participants were asked to imagine they were pedestrians intending to cross a one-way street at a random uncontrolled location when they saw an autonomous vehicle equipped with the eHMI approaching from the right, and to indicate via mouse click whether they would cross the street in front of the oncoming vehicle or not.
An implementation of the proposed concept where non-yielding intention is communicated via the VHC producing either an angry expression, a surprised expression, or a head shake; cruising intention is communicated via the VHC puffing its cheeks; and yielding intention is communicated via the VHC nodding, was shown to be highly effective in ensuring the safety of a single pedestrian or even two co-located pedestrians without compromising traffic flow in either case. The implications for the development of intuitive, culture-transcending eHMIs that can support multiple pedestrians in parallel are discussed.
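Scoring a two-alternative forced choice crossing task like the one above amounts to comparing each decision against the decision appropriate for the signalled vehicle intention. The sketch below is a minimal illustration; the trial log and the scoring rule (cross only for a yielding vehicle) are our assumptions, not data from the experiment.

```python
from collections import defaultdict

# Hypothetical trial log: (signalled vehicle intention, participant decision).
trials = [
    ("yielding", "cross"), ("yielding", "cross"),
    ("non-yielding", "wait"), ("non-yielding", "cross"),
    ("cruising", "wait"), ("cruising", "wait"),
]

# Assumed scoring rule: crossing is appropriate only when the
# vehicle signals yielding intention.
correct_for = {"yielding": "cross", "non-yielding": "wait", "cruising": "wait"}

hits = defaultdict(int)
counts = defaultdict(int)
for intention, decision in trials:
    counts[intention] += 1
    hits[intention] += decision == correct_for[intention]

for intention in correct_for:
    print(f"{intention}: {hits[intention] / counts[intention]:.0%} appropriate")
```

Per-intention accuracy of this kind is what distinguishes a safe eHMI mapping (no crossing in front of a non-yielding vehicle) from an efficient one (crossing promptly for a yielding vehicle).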
In-vehicle information systems (IVIS) may contribute to increased levels of cognitive workload, which in turn can lead to more dangerous driving behaviour. An experiment was conducted to examine the use of auditory signs to support drivers' traffic situation awareness. Eighteen experienced truck drivers identified traffic situations based on information conveyed by brief sounds. Aspects of learning, cognitive demand, and pleasantness were monitored and rated by the drivers. Differences in cognitive effort were estimated using a dual-task set-up, in which drivers responded to auditory signs while simultaneously performing a simulated driving task. As expected, arbitrary sounds required significantly longer learning times than sounds that have a natural meaning in the driving context. The arbitrary sounds also resulted in a significant degradation in response performance, even after the drivers had had a chance to learn the sounds. Finally, the results indicate that the use of arbitrary sounds can negatively affect driver satisfaction. These results have implications for a broad range of developing intelligent transport systems designed to assist drivers in the absence of fundamental visual information or in visually demanding traffic situations.
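The dual-task comparison described above boils down to contrasting response times to the two sound categories collected under concurrent driving load. The sketch below shows the shape of that comparison with fabricated response times; the values are illustrative only and not the study's data.

```python
import statistics

# Hypothetical response times (ms) to auditory signs while performing
# the simulated driving task; values are illustrative, not measured.
natural_ms = [620, 650, 600, 640, 610, 630]    # sounds with natural meaning
arbitrary_ms = [780, 820, 760, 800, 790, 810]  # arbitrary sounds

# Mean response-time penalty attributable to arbitrary sounds.
penalty = statistics.mean(arbitrary_ms) - statistics.mean(natural_ms)
print(f"Mean RT penalty for arbitrary sounds: {penalty:.0f} ms")
```

In the actual study this contrast would be tested inferentially (e.g. a paired test across the eighteen drivers) rather than by comparing raw means.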