Abstract: Autonomous vehicles are complex systems that may behave in unexpected ways. From the driver's perspective, this can cause stress and lower trust in and acceptance of autonomous driving. Prior work has shown that explanations of system behavior can mitigate these negative effects. Nevertheless, it remains unclear in which situations drivers actually need an explanation and what kind of interaction is relevant to them. Using thematic analysis of real-world experience reports, we first identified 17 situations in which…
“…Placebo-Control: 4 [35,71,97,111]; Placebo Study: 12 [18,20,22,23,28,82,100,107,110]; Acknowledge Placebo Effects as a Confound: 11 [1,7,10,21,47,64,74,83,86,91,94,105] … modalities. For example, one can adapt letter presentation in an e-reader to alpha oscillations, an indicator of workload in the electroencephalogram [55]; reduce the number of interaction possibilities based on the number of errors made by the user [45]; or adapt the interface to the mood of the user [61].…”
In medicine, patients can obtain real benefits from a sham treatment. These benefits are known as the placebo effect. We report two experiments (Experiment I: N=369; Experiment II: N=100) demonstrating a placebo effect in adaptive interfaces. Participants were asked to solve word puzzles while being supported by either no system or a sham adaptive AI interface. All participants experienced the same word-puzzle difficulty and received no actual AI support throughout the experiments. Our results showed that the belief of receiving adaptive AI support increased participants' expectations regarding their own task performance, an effect that was sustained after the interaction. These expectations were positively correlated with performance, as indicated by the number of solved word puzzles. We integrate our findings into technological acceptance theories and discuss implications for the future assessment of AI-based user interfaces and novel technologies. We argue that system descriptions can elicit placebo effects through user expectations, biasing the results of user-centered studies. CCS Concepts: • Human-centered computing → User studies; HCI theory, concepts and models; Empirical studies in HCI.
“…In challenging and critical driving scenarios, intelligent vehicles are likely to make decisions that are confusing to end-users [1], [2], e.g., unexpectedly initiating a lane change. As a way to assist end-users, and to establish trust, the provision of explanations has been put forward [3], [4], [5]. While explanations are considered helpful, we argue that they will not be effective in achieving the aforementioned goals unless they are provided in intelligible forms, as mandated by the General Data Protection Regulation (GDPR), Article 12.…”
Section: Introduction — mentioning (confidence: 99%)
“…of Computer Science, Umeå University, Sweden. Email: sule.anjomshoae@umu.se. Helena Webb is with the Dept. of Computer Science, University of Nottingham.…”
Commentary driving is a technique in which drivers verbalise their observations, assessments, and intentions. By speaking their thoughts aloud, both learner and expert drivers can build a better understanding and awareness of their surroundings. In the intelligent-vehicle context, automated driving commentary can provide intelligible explanations of driving actions and thereby assist a driver or an end-user during driving operations in challenging and safety-critical scenarios. In this paper, we conducted a field study in which we deployed a research vehicle in an urban environment to obtain data. While collecting sensor data of the vehicle's surroundings, we obtained driving commentary from a driving instructor using the think-aloud protocol. Analysing the commentary, we uncovered an explanation style: the driver first announces his observations, then announces his plans, and finally makes general remarks; he also makes counterfactual comments. We successfully demonstrated how factual and counterfactual natural-language explanations following this style can be generated automatically using a simple tree-based approach. Explanations generated for longitudinal actions (e.g., stop and move) were judged more intelligible and plausible by human raters than those for lateral actions, such as lane changes. We discuss how our approach can be built on in the future to realise more robust and effective explainability for driver assistance as well as partial and conditional automation of driving functions.
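To illustrate the idea of tree-based explanation generation described above, the following is a minimal sketch: a toy decision tree maps a scene to a longitudinal action, and templates attached to the branch that fired produce a factual and a counterfactual explanation. The scene features, rules, and templates here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of tree-based factual/counterfactual explanation
# generation for driving actions, loosely following the commentary style
# (observation -> plan -> remark). All rules and templates are assumed.

from dataclasses import dataclass

@dataclass
class Scene:
    pedestrian_ahead: bool
    traffic_light: str  # "red", "green", or "none"

def decide(scene: Scene) -> str:
    """A minimal decision tree mapping a scene to a longitudinal action."""
    if scene.pedestrian_ahead:
        return "stop"
    if scene.traffic_light == "red":
        return "stop"
    return "move"

def explain(scene: Scene) -> str:
    """Walk the same tree and verbalise the branch that fired,
    pairing a factual explanation with a counterfactual one."""
    action = decide(scene)
    if scene.pedestrian_ahead:
        factual = "I am stopping because a pedestrian is ahead."
        counterfactual = "If the pedestrian were not there, I would keep moving."
    elif scene.traffic_light == "red":
        factual = "I am stopping because the light is red."
        counterfactual = "If the light were green, I would keep moving."
    else:
        factual = "I am moving because the way is clear."
        counterfactual = "If a pedestrian stepped out, I would stop."
    return f"[{action}] {factual} {counterfactual}"

print(explain(Scene(pedestrian_ahead=True, traffic_light="none")))
```

Because the explanation templates hang off the same branches as the action logic, every generated sentence is grounded in the condition that actually triggered the behaviour, which is what makes the tree-based approach attractive for intelligibility.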
“…In manual driving, video-recorded situations with pedestrians are perceived as more hazardous and critical (Finn and Bragg, 1986; Borowsky and Oron-Gilad, 2013), especially when drivers are inexperienced. For automated driving, recent studies have shown that in situations with pedestrians, drivers have a high need for information regarding the system's behavior (Wiegand et al., 2020; Wintersberger et al., 2020). Around 70% of drivers in a thinking-aloud driving simulator study requested an explanation for the automated vehicle's stopping behavior in a situation where a child unexpectedly ran across the street (Wiegand et al., 2020).…”
Section: Introduction — mentioning (confidence: 99%)
“…For automated driving, recent studies have shown that in situations with pedestrians, drivers have a high need for information regarding the system's behavior (Wiegand et al., 2020; Wintersberger et al., 2020). Around 70% of drivers in a thinking-aloud driving simulator study requested an explanation for the automated vehicle's stopping behavior in a situation where a child unexpectedly ran across the street (Wiegand et al., 2020). When participants were asked to identify objects in videos of urban driving scenes that an automated vehicle should inform them about, high priority was given to pedestrians near or on the road (Wintersberger et al., 2020).…”
Automated driving in urban environments has the potential not only to improve traffic flow and heighten driver comfort but also to increase traffic safety, particularly for vulnerable road users such as pedestrians. For these benefits to take effect, drivers need to trust and use automated vehicles. The decision to do so is influenced by both system and context factors. However, it is not yet clear how these factors interact with each other, especially for automated driving in city scenarios with crossing pedestrians. Therefore, we conducted an online experiment in which participants (N = 68) experienced short automated rides from the driver's perspective through an urban environment. In each of the presented videos, a pedestrian crossed the street in front of the automated vehicle while system and context factors were varied: 1) the crossing pedestrian's intention was either visualized correctly (as crossing) or incorrectly (visualization missing) by the automated vehicle (system factor), 2) the pedestrian was either distracted by using a smartphone while crossing or not (context factor), and 3) the scenario was either more or less complex depending on the number of other vehicles and pedestrians present (context factor). In situations with a system malfunction where the crossing pedestrian's intention was not visualized, participants perceived the situation as more critical, had less trust in the automated system, and reported a higher willingness to take over control, regardless of any context factors. However, when the system worked correctly, the crossing pedestrian's smartphone usage came into play, especially in the less complex scenario. Participants perceived situations with a distracted pedestrian as more critical, trusted the system less, indicated a higher willingness to take over control, and were more uncertain about their decision.
As this study demonstrates the influence of distracted pedestrians, more research is needed on context factors and their inclusion in the design of interfaces to keep drivers informed during automated driving in urban environments.