Using interactive displays, such as a touchscreen, in vehicles typically requires dedicating a considerable amount of visual as well as cognitive capacity and undertaking a hand pointing gesture to select the intended item on the interface. This can act as a distractor from the primary task of driving and consequently can have serious safety implications. Due to road and driving conditions, the user input can also be highly perturbed resulting in erroneous selections compromising the system usability. In this paper, we propose intent-aware displays that utilize a pointing gesture tracker in conjunction with suitable Bayesian destination inference algorithms to determine the item the user intends to select, which can be achieved with high confidence remarkably early in the pointing gesture. This can drastically reduce the time and effort required to successfully complete an in-vehicle selection task. In the proposed probabilistic inference framework, the likelihood of all the nominal destinations is sequentially calculated by modeling the hand pointing gesture movements as a destination-reverting process. This leads to a Kalman filter-type implementation of the prediction routine that requires minimal parameter training and has low computational burden; it is also amenable to parallelization. The substantial gains obtained using an intent-aware display are demonstrated using data collected in an instrumented vehicle driven under various road conditions.
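The abstract's destination-reverting model with a Kalman-filter-type likelihood routine can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the linear mean-reverting dynamics, the reversion rate, and the noise levels are all assumptions chosen for clarity. Each candidate on-screen item gets its own Kalman filter, and the sequential likelihood of the observed pointing track under each filter (accumulated from the innovations) is normalised into a posterior over destinations. Because each destination's filter is independent, the loop over destinations parallelises trivially.

```python
import numpy as np

def intent_posterior(track, destinations, revert_rate=0.3, q=1e-4, r=1e-4):
    """Sequential Bayesian intent inference with a bank of Kalman filters.

    Each candidate destination d is given a destination-reverting model
        x_{k+1} = x_k + revert_rate * (d - x_k) + w_k,   w_k ~ N(0, q I)
        z_k     = x_k + v_k,                             v_k ~ N(0, r I)
    (illustrative dynamics, not the paper's exact model). The log-likelihood
    of the pointing track under each model is accumulated from the Kalman
    innovations; normalising gives a posterior over destinations.
    """
    dim = track.shape[1]
    logliks = np.zeros(len(destinations))
    for i, d in enumerate(destinations):
        x = track[0].copy()              # initialise at the first observation
        P = r * np.eye(dim)
        ll = 0.0
        for z in track[1:]:
            # Predict under the destination-reverting dynamics.
            x_pred = x + revert_rate * (d - x)
            A = (1.0 - revert_rate) * np.eye(dim)
            P_pred = A @ P @ A.T + q * np.eye(dim)
            # Innovation and its covariance drive the likelihood.
            S = P_pred + r * np.eye(dim)
            nu = z - x_pred
            ll += -0.5 * (nu @ np.linalg.solve(S, nu)
                          + np.log(np.linalg.det(S)) + dim * np.log(2 * np.pi))
            # Standard Kalman update.
            K = P_pred @ np.linalg.inv(S)
            x = x_pred + K @ nu
            P = (np.eye(dim) - K) @ P_pred
        logliks[i] = ll
    w = np.exp(logliks - logliks.max())  # normalise in a numerically safe way
    return w / w.sum()
```

Early in a pointing gesture the posterior already concentrates on the item the trajectory is bending towards, which is what allows the display to shortcut the selection.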
In present-day highly-automated vehicles, there are occasions when the driving system disengages and the human driver is required to take over. This is of great importance to a vehicle's safety and ride comfort. In the U.S. state of California, the Autonomous Vehicle Testing Regulations require every manufacturer testing autonomous vehicles on public roads to submit an annual report summarizing the disengagements of the technology experienced during testing. On 1 January 2016, seven manufacturers submitted their first disengagement reports: Bosch, Delphi, Google, Nissan, Mercedes-Benz, Volkswagen, and Tesla Motors. This work analyses the data from these disengagement reports with the aim of gaining a better understanding of the situations in which a driver is required to take over, as this is potentially useful in improving Society of Automotive Engineers (SAE) Level 2 and Level 3 automation technologies. Disengagement events from testing are classified into groups based on their attributes, and the causes of disengagement are investigated and compared in detail. The mechanisms of, and time taken for, the take-over transitions that occurred during disengagements are studied. Finally, recommendations for OEMs, manufacturers, and government organizations are discussed.
Given the proliferation of 'intelligent' and 'socially-aware' digital assistants embedded in everyday mobile technology - and the undeniable logic that utilising voice-activated controls and interfaces in cars reduces the visual and manual distraction of interacting with in-vehicle devices - it appears inevitable that next-generation vehicles will embody digital assistants and utilise spoken language as a method of interaction. From a design perspective, defining the language and interaction style that a digital driving assistant should adopt is contingent on the role it plays within the social fabric and context in which it is situated. We therefore conducted a qualitative Wizard-of-Oz study to explore how drivers might interact linguistically with a natural-language digital driving assistant. Twenty-five participants drove for 10 min in a medium-fidelity driving simulator while interacting with a state-of-the-art, high-functioning, conversational digital driving assistant. All exchanges were transcribed and analysed using recognised linguistic techniques, such as discourse and conversation analysis, normally reserved for interpersonal investigation. Language-usage patterns demonstrate that interactions with the digital assistant were fundamentally social in nature, with participants affording the assistant equal social status and high-level cognitive processing capability. For example, participants were polite, actively controlled turn-taking during the conversation, and used back-channelling, fillers and hesitation, as they might in human communication. Furthermore, participants expected the digital assistant to understand and process complex requests mitigated with hedging words and expressions, and peppered with vague language and deictic references requiring shared contextual information and mutual understanding.
Findings are presented in six themes which emerged during the analysis: formulating responses; turn-taking; back-channelling, fillers and hesitation; vague language; mitigating requests; and politeness and praise. The results can be used to inform the design of future in-vehicle natural language systems, in particular to help manage the tension between designing for an engaging dialogue (important for technology acceptance) and designing for an effective dialogue (important to minimise distraction in a driving context).
In order to design an advanced human-automation collaboration system for highly automated vehicles, research into the driver's neuromuscular dynamics is needed. In this paper, a dynamic model of the driver's neuromuscular interaction with the steering wheel is first established, and the transfer function and natural frequency of the system are analyzed. To identify the key parameters of the driver-steering-wheel system and investigate its properties under different conditions, driver-in-the-loop experiments are carried out. Each test subject completed two steering tasks, one passive and one active. Furthermore, during the experiments, subjects manipulated the steering wheel in two distinct postures and with three different hand positions. Based on the experimental results, the key parameters of the transfer-function model are identified using the Gauss-Newton algorithm, and the properties of the estimated model are then investigated. The characteristics of the driver's neuromuscular system are discussed and compared across steering tasks, hand positions, and driver postures. These experimental results, with identified system properties, provide a good foundation for the development of a haptic take-over control system for automated vehicles.
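To make the Gauss-Newton identification step concrete, here is a minimal sketch of fitting a parametric response model to measured data. The second-order step-response form (gain `K`, natural frequency `wn`, damping ratio `zeta`) is an assumed stand-in for the paper's transfer-function model, which is not specified in the abstract; the Jacobian is taken numerically for simplicity.

```python
import numpy as np

def step_response(theta, t):
    """Underdamped second-order step response for gain K, natural
    frequency wn (rad/s) and damping ratio zeta (illustrative model)."""
    K, wn, zeta = theta
    wd = wn * np.sqrt(1.0 - zeta**2)     # damped natural frequency
    return K * (1 - np.exp(-zeta * wn * t)
                * (np.cos(wd * t) + zeta * wn / wd * np.sin(wd * t)))

def gauss_newton(t, y, theta0, iters=20, eps=1e-6):
    """Fit theta = (K, wn, zeta) to measured response y(t) by
    Gauss-Newton iteration with a central-difference Jacobian."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        r = y - step_response(theta, t)           # residuals
        J = np.empty((t.size, theta.size))        # numerical Jacobian
        for j in range(theta.size):
            d = np.zeros_like(theta)
            d[j] = eps
            J[:, j] = (step_response(theta + d, t)
                       - step_response(theta - d, t)) / (2 * eps)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)  # Gauss-Newton step
        theta += step
        # Keep the damping ratio in the underdamped range so the
        # model stays valid (sqrt(1 - zeta^2) must be real).
        theta[2] = min(max(theta[2], 0.05), 0.95)
        if np.linalg.norm(step) < 1e-10:
            break
    return theta
```

In the paper's setting, `y` would be the measured driver-steering-wheel response and the identified parameters would then be compared across steering tasks, postures, and hand positions.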
Automatic classification of drivers' mental states is an important yet relatively unexplored topic. In this paper, we define a taxonomy of complex mental states that are relevant to driving, namely: Happy, Bothered, Concentrated and Confused. We present our video segmentation and annotation methodology for a spontaneous dataset of natural driving videos from 10 different drivers. We also present the real-time annotation tool used to label the dataset via an emotion perception experiment, and discuss the challenges faced in obtaining the ground-truth labels. Finally, we present a methodology for automatic classification of drivers' mental states. We compare SVM models trained on our dataset with an existing nearest-neighbour model pre-trained on a posed dataset, using facial Action Units as input features, and demonstrate that our temporal SVM approach yields better results. The dataset's extracted features and validated emotion labels, together with the annotation tool, will be made available to the research community.
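A temporal SVM over Action Unit features typically requires summarising per-frame AU intensities into fixed-length window descriptors before classification. The sketch below shows one common way to do this (per-AU mean, max, and standard deviation over sliding windows); the window length, step, and feature choice are illustrative assumptions, not the paper's exact pipeline, and the resulting feature matrix would then be fed to an SVM classifier.

```python
import numpy as np

def windowed_au_features(au_intensities, win=30, step=15):
    """Summarise a (num_frames x num_AUs) matrix of facial Action Unit
    intensities into overlapping-window statistics (per-AU mean, max,
    std), giving a static classifier such as an SVM a temporal view of
    the signal. Window sizes here are illustrative, not the paper's.
    Returns a (num_windows x 3*num_AUs) feature matrix."""
    feats = []
    for start in range(0, au_intensities.shape[0] - win + 1, step):
        w = au_intensities[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0),
                                     w.max(axis=0),
                                     w.std(axis=0)]))
    return np.array(feats)
```

Each row of the output corresponds to one time window and could be labelled with the mental-state annotation covering that window, turning the video stream into a standard supervised-learning dataset.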
Using a Wizard-of-Oz approach, we explored the effectiveness of engaging drivers in conversation with a digital assistant as an operational strategy to combat the symptoms of passive, task-related fatigue. Twenty participants undertook two 30-minute drives in a medium-fidelity driving simulator between 13:00 and 16:30, when circadian and homeostatic influences naturally reduce alertness. Participants were asked to follow a lead car travelling at a constant speed of 68 mph in a sparsely populated UK motorway scenario. During one of the counterbalanced drives, participants were engaged in conversation by a digital assistant ('Vid'). Results show that interacting with Vid had a positive effect on driving performance and arousal, evidenced by better lane-keeping, earlier response to a potential hazard situation, larger pupil diameter, and an increased spread of attention to the road scene (i.e. fewer fixations concentrated on the road centre, indicating a lower incidence of 'cognitive tunnelling'). Drivers also reported higher levels of alertness and lower sleepiness following the Vid drive. Subjective workload ratings suggest that drivers exerted less effort to 'stay awake' when engaged with Vid. The findings support the development and application of in-vehicle natural language interfaces, and can be used to inform the design of novel countermeasures for driver fatigue.