Here, we mention a few selected methods as examples of the plethora of approaches for understanding CNN decision-making: saliency maps show the importance of each pixel to the classification decision (Springenberg et al., 2014; Bach et al., 2015; Smilkov et al., 2017; Zintgraf et al., 2017), concept activation vectors show a model's sensitivity to human-defined concepts (Kim et al., 2018), and other methods, among them feature visualizations, focus on explaining individual units (Bau et al., 2020). Some tools integrate interactive, software-like aspects (Hohman et al., 2019; Wang et al., 2020; Carter et al., 2019; Collaris & van Wijk, 2020; OpenAI, 2020), combine more than one explanation method (Shi et al., 2020; Addepalli et al., 2020), or make progress towards automated explanation methods (Lapuschkin et al., 2019; Ghorbani et al., 2019b). As overviews, we recommend Zhang & Zhu (2018), Montavon et al. (2018), and Samek et al. (2020).
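To make the first family of methods concrete, the core idea behind vanilla gradient saliency is simply the magnitude of the gradient of a class score with respect to each input pixel. The following is a minimal sketch of that idea using a toy linear classifier in place of a CNN (all weights and inputs here are hypothetical random data, not any model from the cited works; real saliency methods backpropagate through the full network):

```python
import numpy as np

# Toy stand-in for a classifier: one linear layer, so the gradient of a
# class score with respect to the input is exactly that class's weight row.
rng = np.random.default_rng(0)
n_pixels, n_classes = 16, 3
W = rng.normal(size=(n_classes, n_pixels))  # hypothetical model weights
x = rng.normal(size=n_pixels)               # hypothetical flattened "image"

scores = W @ x
predicted = int(np.argmax(scores))

# Vanilla gradient saliency: |d score_c / d x_i| for the predicted class c,
# normalized to [0, 1] so it can be rendered as a heatmap over the pixels.
grad = W[predicted]
saliency = np.abs(grad)
saliency = saliency / saliency.max()
```

Variants such as SmoothGrad (Smilkov et al., 2017) average this gradient over noisy copies of the input, and layer-wise relevance propagation (Bach et al., 2015) replaces the raw gradient with relevance-conserving backward rules.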