“…Trust in AI, and the extent to which AI is deemed trustworthy, is contingent on communication processes and products in AI, such as model or XAI outputs, or interfaces for imposing constraints on AI models; the visual presence of AI tends to increase trust in AI (Glikson & Woolley, 2020). Many studies have called for or investigated explanations and XAI (McGovern, Bostrom, et al., 2022) as an approach to increasing trust (e.g., Hoffman et al., 2018; Lockey et al., 2021; Miller, 2019; Mueller et al., 2019; Tulio et al., 2007), and such explanations have often relied on visualizations (McGovern et al., 2019).…”