This study investigates the differences and effects of transparency and explainability on trust, situation awareness (SA), and satisfaction in the context of an automated car. Three groups were compared in a between-subjects design (n = 73). Participants in each group watched six graphically manipulated videos of an automated car from the driver’s perspective, featuring either transparency, post-hoc explanations, or both combined. Transparency resulted in higher trust, higher satisfaction, and higher level 2 SA than explainability. Transparency also resulted in higher level 2 SA than the combined condition, but the two did not differ in trust or satisfaction. Moreover, explainability led to significantly lower satisfaction than combined feedback. Although our findings should be replicated in more ecologically valid driving situations, we tentatively conclude that, whenever possible, transparency alone should be implemented in semi-self-driving cars, and possibly in automated systems in general, to make them most satisfactory and trustworthy and to yield the highest SA.