Today, and possibly for a long time to come, the full driving task is too complex an activity to be fully formalized as a sensing-acting robotics system that can be explicitly solved through model-based and learning-based approaches in order to achieve full unconstrained vehicle autonomy. Localization, mapping, scene perception, vehicle control, trajectory optimization, and higher-level planning decisions associated with autonomous vehicle development remain full of open challenges. This is especially true for unconstrained, real-world operation, where the margin of allowable error is extremely small and the number of edge cases is extremely large. Until these problems are solved, human beings will remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving. The governing objectives of the MIT Advanced Vehicle Technology (MIT-AVT) study are to (1) undertake large-scale real-world driving data collection that includes high-definition video to fuel the development of deep-learning-based internal and external perception systems, (2) gain a holistic understanding of how human beings interact with vehicle automation technology by integrating video data with vehicle state data, driver characteristics, mental models, and self-reported experiences with technology, and (3) identify how technology and other factors related to automation adoption and use can be improved in ways that save lives. In pursuing these objectives, we have instrumented 23 Tesla Model S and Model X vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque vehicles, and 2 Cadillac CT6 vehicles for both long-term (over a year per driver) and medium-term (one month per driver) naturalistic driving data collection. Furthermore, we are continually developing new methods for analysis of the massive-scale dataset collected from the instrumented vehicle fleet.
The recorded data streams include IMU, GPS, CAN messages, and high-definition video streams of the driver face, the driver cabin, the forward roadway, and the instrument cluster (on select vehicles). The study is ongoing and growing. To date, we have 122 participants, 15,610 days of participation, 511,638 miles, and 7.1 billion video frames. This paper presents the design of the study, the data collection hardware, the processing of the data, and the computer vision algorithms currently being used to extract actionable knowledge from the data.
Previous studies have shown adaptive cruise control (ACC) can compromise driving safety when drivers do not understand how the ACC functions, suggesting that drivers need to be informed about the capabilities of this technology. This study applies ecological interface design (EID) to create a visual representation of ACC behavior, which is intended to promote appropriate reliance and support effective transitions between manual and ACC control. The EID display reveals the behavior of ACC in terms of time headway (THW), time to collision (TTC), and range rate. This graphical representation uses emergent features that signal the state of the ACC. Two failure modes, exceedance of braking algorithm limits and sensor failures, were introduced in the driving contexts of traffic and rain, respectively. A medium-fidelity driving simulator was used to evaluate the effect of automation (manual, ACC control) and display (EID, no display) on ACC reliance, brake response, and driver intervention strategies. Drivers in traffic conditions relied more appropriately on ACC when the EID display was present than when it was not, proactively disengaging the ACC. The EID display promoted faster and more consistent braking responses when braking algorithm limits were exceeded, resulting in safe following distances and no collisions. In manual control, the EID display aided THW maintenance in both rain and traffic conditions, reducing the demands of driving and promoting more consistent and less variable car-following performance. These results suggest that providing drivers with continuous information about the state of the automation is a promising alternative to the more common approach of providing imminent crash warnings when it fails. Informing drivers may be more effective than warning drivers.
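The three quantities the EID display builds on have standard kinematic definitions: THW is the gap divided by the follower's speed, and TTC is the gap divided by the closing speed (defined only while the gap is shrinking). A minimal sketch of these definitions, with illustrative function names and example values not taken from the study:

```python
def time_headway(range_m: float, follower_speed_mps: float) -> float:
    """Time headway (THW): seconds for the following vehicle to
    cover the current gap at its present speed."""
    return range_m / follower_speed_mps

def time_to_collision(range_m: float, range_rate_mps: float) -> float:
    """Time to collision (TTC): seconds until the gap closes.
    Range rate is the derivative of the gap; a negative value means
    the gap is shrinking. If it is zero or positive, the vehicles
    are not on a collision course and TTC is infinite."""
    if range_rate_mps >= 0:
        return float("inf")
    return range_m / -range_rate_mps

# Illustrative scenario: 30 m gap, follower at 25 m/s, closing at 5 m/s
print(time_headway(30.0, 25.0))       # 1.2 s
print(time_to_collision(30.0, -5.0))  # 6.0 s
```

Note the different roles: THW stays meaningful in steady-state following (constant gap), while TTC only becomes finite, and safety-critical, once the gap is actually closing.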
Despite an abundant use of the term "out of the loop" (OOTL) in the context of automated driving and human factors research, there is currently a lack of consensus on its precise definition, how it can be measured, and the practical implications of being in or out of the loop during automated driving. The main objective of this paper is to consider the above issues, with the goal of achieving a shared understanding of the OOTL concept between academics and practitioners. To this end, the paper reviews existing definitions of OOTL and outlines a set of concepts which, based on the human factors and driver behaviour literature, could serve as the basis for a commonly agreed definition. Following a series of working group meetings between representatives from academia, research institutions and industrial partners across Europe, North America, and Japan, we suggest a precise definition of being in, out of, and on the loop in the driving context. These definitions are linked directly to whether or not the driver is in physical control of the vehicle, and also to the degree of situation monitoring required of and afforded by the driver. A consideration of how this definition can be operationalized and measured in empirical studies is then provided, and the paper concludes with a short overview of the implications of this definition for the development of automated driving functions.
Predictive processing has been proposed as a unifying framework for understanding brain function, suggesting that cognition and behaviour can be fundamentally understood based on the single principle of prediction error minimization. According to predictive processing, the brain is a statistical organ that continuously attempts to get a grip on states in the world by predicting how these states cause sensory input and minimizing the deviations between the predicted and actual input. While these ideas have had a strong influence in neuroscience and cognitive science, they have so far not been adopted in applied human factors research. The present paper represents a first attempt to do so, exploring how predictive processing concepts can be used to understand automobile driving. It is shown how a framework based on predictive processing may provide a novel perspective on a range of driving phenomena and offer a unifying framework for traditionally disparate human factors models.
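The core mechanism the abstract appeals to, minimizing the deviation between predicted and actual input, can be illustrated with a toy predictive-coding update. This is a minimal sketch for intuition only, not a model from the paper; all names and values are illustrative:

```python
# Toy prediction-error minimization: an internal estimate of a world
# state is nudged toward each sensory sample in proportion to the
# prediction error, so the estimate converges on the true state.
def update_estimate(estimate: float, observation: float,
                    learning_rate: float = 0.2) -> float:
    prediction_error = observation - estimate
    return estimate + learning_rate * prediction_error

estimate = 0.0                  # initial (wrong) belief
for observation in [10.0] * 30:  # a constant "world state" of 10
    estimate = update_estimate(estimate, observation)
print(round(estimate, 2))        # estimate has converged close to 10
```

In the driving framing, the "observation" would be sensory input such as lane position or gap to a lead vehicle; large persistent prediction errors are what would, on this account, drive attention and corrective action.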
How quickly can a driver perceive a critical hazard on or near the road? Evidence from vision research suggests that static scene perception is fast and holistic, but does this apply in dynamic road environments? Understanding how quickly drivers can perceive hazards in moving scenes is essential because it improves driver safety now, and will enable autonomous vehicles to work safely with drivers in the future. This paper describes a new, publicly available set of videos, the Road Hazard Stimuli, and a study assessing how quickly participants in the laboratory can detect and correctly respond to briefly presented hazards in them. We performed this laboratory experiment with a group of younger (20–25 years) and older (55–69 years) drivers, and found that while both groups only required brief views of the scene, older drivers required significantly longer to both detect hazards (younger: 220 ms; older: 403 ms) and correctly respond to them (younger: 388 ms; older: 605 ms). Our results indicate that participants can perceive the scene and detect hazards holistically, without serially searching the scene, and can understand the scene and hazard sufficiently well to respond adequately with only slightly longer viewing durations.