“…Some works have aimed to understand a model's prediction strategies, e.g., in order to validate the model [104]. Others visualize the learned representations in order to make the model itself more interpretable [75]. Finally, some works have sought to use explanations to learn about the data, e.g., by visualizing interesting input–prediction patterns extracted by a deep neural network in scientific applications [186].…”