This article reports on a study investigating how the driving behaviour of autonomous vehicles influences trust and acceptance. Two different designs were presented to two groups of participants (n = 22/21), using actual autonomously driving vehicles. The first vehicle was programmed to drive similarly to a human, “peeking” when approaching road junctions as if it were looking before proceeding. The second was programmed to convey the impression that it was communicating with other vehicles and infrastructure and “knew” when a junction was clear, so it could proceed without ever stopping or slowing down. Results showed non-significant differences in trust between the two vehicle behaviours; however, trust scores for both designs increased significantly as the trials progressed. Post-interaction interviews indicated pros and cons for both driving styles, and participants suggested aspects of each that could be improved. This paper presents user-informed recommendations for the design and programming of driving systems for autonomous vehicles, with the aim of improving their users’ trust and acceptance.
Pixel-based visualization is a popular method of conveying large amounts of numerical data graphically. Application scenarios include business and finance, bioinformatics, and remote sensing. In this work, we examined how the usability of such visual representations varied across different tasks and block resolutions. The main stimuli consisted of temporal pixel-based visualizations with a white-red color map, simulating monthly temperature variation over a six-year period. In the first study, we included five separate tasks designed to exert different perceptual loads. Performance varied considerably as a function of task, ranging from 75% correct in low-load tasks to below 40% in high-load tasks. There was a small but consistent effect of resolution, with the uniform patch improving performance by around 6% relative to higher block resolutions. In the second user study, we focused on a high-load task: evaluating month-to-month changes across different regions of the temperature range. We tested both CIE L*u*v* and RGB color spaces and found that the nature of the change-evaluation errors related directly to the distance between the compared regions in the mapped color space. We were able to reduce such errors by using multiple color bands for the same data range. In a final study, we examined more fully the influence of block resolution on performance, and found that it had a limited impact on the effectiveness of pixel-based visualization.
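For illustration only (the function names, parameters, and data below are ours, not taken from the study), a white-red temperature color map of the kind described above, with an optional quantization step to mimic coarser block resolutions, can be sketched as:

```python
# Hypothetical sketch of a temporal pixel-based visualization color mapping:
# each month is one pixel block, colored on a white-to-red ramp.

def white_red(value, levels=None):
    """Map a normalized value in [0, 1] to an (r, g, b) color between
    white (cold) and red (hot). If `levels` is given, the value is first
    quantized to that many steps, simulating a lower block resolution."""
    v = min(max(value, 0.0), 1.0)
    if levels is not None:
        v = round(v * (levels - 1)) / (levels - 1)
    # White (255, 255, 255) at v = 0 fades to red (255, 0, 0) at v = 1.
    g = b = round(255 * (1 - v))
    return (255, g, b)

# A 6-year x 12-month grid of colors, one pixel block per month.
import random
random.seed(0)
temperatures = [[random.uniform(-5, 30) for _ in range(12)] for _ in range(6)]
lo, hi = -5.0, 30.0
grid = [[white_red((t - lo) / (hi - lo)) for t in year] for year in temperatures]
```

The single-channel ramp keeps the mapping monotone in luminance, which is one common way such white-red maps are constructed; the study's actual stimuli may have been generated differently.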
Crowdsourcing platforms, such as Amazon's Mechanical Turk (MTurk), are providing visualization researchers with a new avenue for conducting empirical studies. While such platforms offer several advantages over lab-based studies, they also feature some "unknown" or "uncontrolled" variables, which could potentially introduce serious confounding effects in the resultant measurement data. In this paper, we present our experience of using repeated measures in three empirical studies using MTurk. Each study presented participants with a set of stimuli, each featuring a condition of an independent variable. Participants were exposed to stimuli repeatedly in a pseudo-random order through four trials and their responses were measured digitally. Only a small portion of the participants were able to perform with absolute consistency for all stimuli throughout each experiment. This suggests that a repeated measures design is highly desirable (if not essential) when designing empirical studies for crowdsourcing platforms. Additionally, the majority of participants performed their tasks with reasonable consistency when all stimuli in an experiment are considered collectively. In other words, to most participants, inconsistency occurred occasionally. This suggests that crowdsourcing remains a valid experimental environment, provided that one can integrate the means to observe and alleviate the potential confounding effects of "unknown" or "uncontrolled" variables in the design of the experiment.
Building exceptional user experiences means designing for users of all digital skill levels. An increased emphasis on personalization, and with it adaptive interfaces, heightens the need for digital inclusivity. However, how can designers ensure that they are meeting the needs of users with both high and low skill levels? The research reported here employed semi-structured interviews to explore whether the Digital Native Assessment Scale (DNAS) can be used as a tool to classify users and act as a surrogate for predicting their digital profiles. Sixteen participants answered questions about their everyday technology behaviours, as well as their attitudes towards technology. Nine themes emerged through thematic analysis; however, only one of these themes was associated with an even, dichotomous split between high and low scorers on the DNAS. The DNAS therefore clearly indicated digital behaviour on only a limited number of issues and cannot be relied upon as a proxy for the participant characteristics to be supported in interface design.
As in-vehicle infotainment systems become increasingly complex, and as manufacturers move more functions and features into the in-vehicle screen, interacting with these systems results in increased demand, eyes-off-the-road time, and task completion time. To combat this complexity, some manufacturers have incorporated voice assistants into their vehicles, allowing drivers to speak to their vehicles to perform tasks rather than use touch. However, these assistants currently offer a limited feature set and are generally passive, requiring manual activation. Here we outline early but ongoing work on techniques that can be used to nudge users towards using voice. Participants were presented with six prototype in-vehicle infotainment systems (IVIS), which varied in how they nudged participants towards using voice, and were asked to perform a series of representative in-vehicle tasks. The data show that the most effective nudging method was automatic activation of the voice assistant when the appropriate app was opened, with participants using voice 60% of the time.
The rapid development of automated vehicles offers the promising development of driver-vehicle interaction and cooperation. Trust is an important concept to consider for the future implementation of autonomous driving. An inappropriate level of trust can lead drivers to under-trust and reject the system's potential benefits, or to over-trust and abuse it. Therefore, autonomous vehicles need an appropriate level of trust for drivers to experience the full benefits of autonomous driving. This paper reports a systematic review of the literature to analyse the critical role of trust, and also discusses various methods of evaluating the trust between drivers and automated vehicles to promote the use of autonomous driving on the ground. The review surveyed trust in automated vehicles and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. First, the importance of trust in increasing the acceptance of autonomous driving is investigated. Second, the factors influencing drivers' trust in autonomous driving are grouped and presented. The analysis focuses on individual driver characteristics, automated vehicles and the driving environment, such as driver preference, driving automation system and driving scenarios. Finally, the methodologies to measure trust in autonomous driving are reviewed and analysed. The key measurement indicators include questionnaires, physiological signals such as eye gaze, head and body postures, etc., and psychological signals such as electroencephalogram (EEG). This study is expected to summarise the factors that influence trust and to find reliable and replicable methods to measure trust. The results show that the influence of different factors on trust varies considerably. Currently, questionnaires are the most commonly used subjective measurement method, while psychophysiological measures are a promising objective complement and are attracting increasing investigation.
Touchscreens are becoming commonplace in the modern-day vehicle, meaning they need to be accessible to all users, young or old. An experiment was conducted to understand the impact of age-related decline on touchscreen task performance when driving: users were asked to complete a simple touchscreen task in both a stationary (static) and a moving (dynamic) condition. As expected, a significant decrease in task performance was found when comparing the static condition to the dynamic one. However, when analysing these two conditions by age, only the dynamic condition produced a significant decrease. A moderate positive correlation with age was also found in both conditions. This result has implications for the design of in-vehicle touchscreen systems that are inclusive of users of different ages, and provides insight into the impact of when tasks are carried out in the vehicle.