Explainable, interactive content‐based image retrieval (2021)
DOI: 10.1002/ail2.41

Abstract: Quantifying the value of explanations in a human‐in‐the‐loop (HITL) system is difficult. Previous methods either measure explanation‐specific values that do not correspond to user tasks and needs, or poll users on how useful they find the explanations to be. In this work, we quantify how much explanations help the user through a utility‐based paradigm that measures the change in task performance when using explanations versus not. Our chosen task is content‐based image retrieval (CBIR), which has well‐established basel…
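The utility‐based paradigm the abstract describes reduces to a simple measurement: run the same retrieval task with and without explanations and take the difference in task performance. A minimal sketch of that idea follows, using precision@k as the performance metric; all function and variable names are illustrative assumptions, not the authors' actual implementation or metric choice.

```python
# Hypothetical sketch of a utility-based evaluation of explanations:
# utility = task performance with explanations minus performance without.
# precision@k is assumed here as the CBIR performance measure.

def precision_at_k(retrieved, relevant, k=10):
    """Fraction of the top-k retrieved images that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for img in top_k if img in relevant) / k

def explanation_utility(results_with, results_without, relevant, k=10):
    """Change in task performance attributable to explanations."""
    return (precision_at_k(results_with, relevant, k)
            - precision_at_k(results_without, relevant, k))

# Example: two user sessions on the same query, shared ground truth.
relevant = {"img3", "img7", "img9"}
with_expl = ["img3", "img7", "img1", "img9"]     # session using explanations
without_expl = ["img2", "img3", "img5", "img8"]  # session without them

print(explanation_utility(with_expl, without_expl, relevant, k=4))
# 0.75 - 0.25 = 0.5 -> explanations improved retrieval precision
```

A positive utility indicates explanations helped the user retrieve relevant images; zero or negative utility indicates they added no measurable value for the task.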

Cited by 4 publications (1 citation statement)
References 17 publications (16 reference statements)
“…Interpretable content-based classification has appeared in the literature multiple times, based on the classifier under investigation for content classification (Nauck and Kruse 1999; Rui et al. 1998; Vasu et al. 2021). Further improvements in the wider field of interpreting machine learning decisions were achieved with the introduction of increasingly complicated and opaque classifiers.…”
Section: Related Work (citation type: mentioning)
Confidence: 99%