2022
DOI: 10.1007/978-3-031-16564-1_13

Impact of Feedback Type on Explanatory Interactive Learning

Abstract: Explanatory Interactive Learning (XIL) collects user feedback on visual model explanations to implement a Human-in-the-Loop (HITL) based interactive learning scenario. Different user feedback types will have different impacts on user experience and the cost associated with collecting feedback since different feedback types involve different levels of image annotation. Although XIL has been used to improve classification performance in multiple domains, the impact of different user feedback types on model perfo…
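
To make the HITL loop in the abstract concrete, here is a minimal, illustrative sketch of one XIL round in PyTorch. The helper names `collect_user_feedback` and `explanation_loss` are assumptions for illustration, not the paper's API: the first stands in for whatever annotation the user provides on the displayed explanation, and the second turns that feedback into a training penalty.

```python
import torch.nn.functional as F


def explain(model, x):
    """Input-gradient saliency map, standing in for explanation methods such as Grad-CAM."""
    x = x.detach().clone().requires_grad_(True)
    model(x).logsumexp(dim=1).sum().backward()
    return x.grad.abs()


def xil_round(model, optimizer, batches, collect_user_feedback, explanation_loss, lam=1.0):
    """One interactive round: explain, collect user feedback, retrain with a feedback penalty."""
    for x, y in batches:
        # Human-in-the-loop step: the user annotates the explanation shown for this batch.
        feedback = collect_user_feedback(x, explain(model, x))
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y) + lam * explanation_loss(model, x, feedback)
        loss.backward()
        optimizer.step()
```

In this sketch, the feedback type the paper studies only changes what `collect_user_feedback` asks of the annotator (and therefore its cost), while the rest of the loop stays fixed.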

Cited by 5 publications (4 citation statements) | References 37 publications
“…Some Concept Bottleneck Models incorporate two tiers of human knowledge in their pipelines: human-defined concepts used in training, and direct human involvement in rectifying faulty concept predictions during inference. Another popular class of human-in-the-loop interpretability methods [128], [129], [130], [131], known as eXplanatory Interactive Learning (XIL), employs human supervision to manually edit the heatmaps generated by conventional attribution methods like LIME, CAM, and Grad-CAM. Although Active Learning (AL) [132] also leverages humans in the machine learning loop to improve performance, the fundamental difference is that XIL particularly focuses on achieving this goal by manipulating explanations.…”
Section: Knowledge-informed Explainability Methods (mentioning)
confidence: 99%
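
The "two tiers of human knowledge" attributed to Concept Bottleneck Models in the citation above can be illustrated with a short PyTorch sketch, assuming a flat feature input and binary concepts; the module and argument names are illustrative, not taken from the cited works. Training would combine the usual label loss with a concept loss (for example, binary cross-entropy on the concept logits against human-defined concept labels), and at inference a human can rectify faulty concept predictions through the `human_concepts` and `intervene_mask` arguments.

```python
import torch
import torch.nn as nn


class ConceptBottleneck(nn.Module):
    """Input -> predicted concepts -> label, with optional human correction of concepts."""

    def __init__(self, in_dim, n_concepts, n_classes):
        super().__init__()
        # Tier 1: this bottleneck is trained against human-defined concept labels.
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_concepts))
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x, human_concepts=None, intervene_mask=None):
        concept_logits = self.concept_net(x)
        concepts = torch.sigmoid(concept_logits)
        if human_concepts is not None:
            # Tier 2: at inference, a human overwrites the concepts flagged as faulty.
            concepts = torch.where(intervene_mask.bool(), human_concepts, concepts)
        return self.label_net(concepts), concept_logits
```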
“…On the other hand, some approaches may enhance their main objective while compromising other figures of merit. The interpretability-accuracy trade-off associated with some knowledge-informed techniques, which is itself still a subject of intense debate (see [130], [131], [175], [176], [177]), is a classic example of this situation.…”
Section: Summary of the Main Features of Prior Knowledge-informed Appr... (mentioning)
confidence: 99%
“…Recent work on explanatory supervision (XS) has shown that models can be successfully protected from learning spurious signals by eliciting ground-truth explanations from humans as additional supervision for the models (Ross, Hughes, and Doshi-Velez 2017; Teso and Kersting 2019; Rieger et al. 2020; Schramowski et al. 2020; Hagos, Curran, and Mac Namee 2022; Friedrich et al. 2023). Of these methods, "right for the right reasons" (RRR) (Ross, Hughes, and Doshi-Velez 2017) and "right for better reasons" (RBR) (Shao et al. 2021) seem to perform particularly well (Friedrich et al. 2023).…”
Section: Introduction (mentioning)
confidence: 99%
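
The RRR objective named in the citation above has a compact form: the usual cross-entropy plus a penalty on input gradients wherever a human annotation marks the input as irrelevant. Below is a minimal PyTorch sketch of that idea, with `model`, `irrelevant_mask` (1 where a pixel should not matter), and the weight `lam` as assumed placeholders rather than the authors' code; RBR follows the same pattern but derives the model's "reasons" from a different explanation method.

```python
import torch
import torch.nn.functional as F


def rrr_loss(model, x, y, irrelevant_mask, lam=10.0):
    # "Right answer" term: standard cross-entropy on the labels.
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    answer_loss = F.cross_entropy(logits, y)

    # "Right reason" term: gradients of the summed log-probabilities w.r.t. the
    # input, squared and kept only where the annotator marked pixels as irrelevant.
    log_prob_sum = F.log_softmax(logits, dim=1).sum()
    input_grads, = torch.autograd.grad(log_prob_sum, x, create_graph=True)
    reason_loss = (irrelevant_mask * input_grads).pow(2).sum()

    return answer_loss + lam * reason_loss
```

A training step then simply backpropagates the combined loss, e.g. `rrr_loss(model, images, labels, masks).backward()`.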
“…An approach that builds on a combination of concept bottleneck models trained on basic concepts like shapes, patterns, colours, et cetera, together with explanatory interactive learning (XIL) (Hagos et al., 2022), is a promising path forward. In addition, interesting upcoming work by, for example, Mutahar and Miller (2022) builds on a combination of inherently explainable models, such as decision trees, and neural networks.…”
Section: Cathy O'Neil (mentioning)
confidence: 99%