“…In fact, several studies suggest that XAI class output (Wang et al., 2019), its level of transparency (Kulesza et al., 2013; Guesmi et al., 2021), and its framing (Narayanan et al., 2018) are all factors that can affect both human cognition and affect. Another reason for the effect of XAI class on trust calibration is the nature of humans' cognitive biases (Naiseh et al., 2021c). For instance, under-trust may result from anchoring bias, as when humans attend only to the salient features of the XAI class output and consequently judge the XAI class to be untrustworthy.…”