“…In recent years, explainability, understood as the ability to provide humans with understandable explanations of the results produced by AI and ML algorithms, has become an essential aspect of designing tools based on these techniques [1], especially in critical areas such as healthcare [26]. Although explainability is a term coined in the area of AI, interest in it is also growing in the software engineering and requirements engineering communities [9], [25]; researchers in these communities have proposed, for example, explainable analytical models for prediction and decision-making [25], explainable counterexamples [14], and explainable quality-attribute trade-offs in software architecture selection [4], as well as analyses of explainability as a non-functional requirement, its trade-offs with other quality attributes [9], [15], and its role in human-machine teaming [3]. Work describing the theoretical basis of explainability, exploiting concepts from philosophy, psychology, and sociology, can be found, for example, in [8], [21], [22], [24].…”