2022
DOI: 10.1109/tvcg.2021.3114793
Towards Visual Explainable Active Learning for Zero-Shot Classification

Cited by 22 publications (11 citation statements) | References 59 publications
Citation types: 1 supporting, 10 mentioning, 0 contrasting
“…Users can label instances recommended by an active learning algorithm or select informative instances to label with the help of visualization, which are used to further improve the underlying model. Such an integration is also substantiated by other work [25], [26], [27], [28], [29], [30], [31].…”
Section: Visualization For Annotation Quality Improvement (supporting)
confidence: 76%
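The integration described in this excerpt can be summarized with a short sketch: a generic pool-based active-learning loop in which the model recommends its most uncertain instances, a human labels them, and the model is retrained. The least-confidence query strategy and the oracle callback below are illustrative assumptions, not the specific algorithm of the cited work.

import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confidence_query(model, X_pool, n_queries=5):
    """Return indices of the pool instances the model is least confident about."""
    proba = model.predict_proba(X_pool)
    confidence = proba.max(axis=1)             # probability of the predicted class
    return np.argsort(confidence)[:n_queries]  # lowest confidence first

def active_learning_loop(X_labeled, y_labeled, X_pool, oracle, rounds=10):
    """Iteratively query a human oracle for labels and retrain the model."""
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)
        query_idx = least_confidence_query(model, X_pool)
        new_labels = np.array([oracle(X_pool[i]) for i in query_idx])
        # Move the newly labeled instances from the pool into the training set.
        X_labeled = np.vstack([X_labeled, X_pool[query_idx]])
        y_labeled = np.concatenate([y_labeled, new_labels])
        X_pool = np.delete(X_pool, query_idx, axis=0)
    return model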
“…RetainVis [31] leverages an attention module where users can directly edit attention weights to update the model. Jia et al [25] designed Semantic Navigator, which guides users to steer the model by editing a class-attribute matrix in a zero-shot learning process.…”
Section: Related Work (mentioning)
confidence: 99%
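For readers unfamiliar with the class-attribute matrix mentioned in this statement, the sketch below shows generic attribute-based zero-shot classification: predicted attribute scores for an image are matched against class attribute signatures, so classes unseen during training can still be assigned. The class names, attributes, and cosine-similarity matching are assumptions for illustration, not the Semantic Navigator implementation.

import numpy as np

# Rows are classes (possibly unseen at training time); columns are attributes
# (here: striped, four-legged, spotted). All names and values are toy examples.
class_names = ["zebra", "horse", "leopard"]
class_attribute_matrix = np.array([
    [1.0, 1.0, 0.0],   # zebra:   striped, four-legged, not spotted
    [0.0, 1.0, 0.0],   # horse:   not striped, four-legged, not spotted
    [0.0, 1.0, 1.0],   # leopard: not striped, four-legged, spotted
])

def zero_shot_classify(attribute_scores, class_attribute_matrix, class_names):
    """Pick the class whose attribute signature is most similar (cosine)
    to the attribute scores predicted for an image."""
    a = attribute_scores / (np.linalg.norm(attribute_scores) + 1e-12)
    C = class_attribute_matrix / (
        np.linalg.norm(class_attribute_matrix, axis=1, keepdims=True) + 1e-12)
    return class_names[int(np.argmax(C @ a))]

# Example: an attribute predictor scores one image as striped and four-legged.
predicted_attributes = np.array([0.9, 0.8, 0.1])
print(zero_shot_classify(predicted_attributes, class_attribute_matrix, class_names))  # -> "zebra"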
“…Apart from showing latent vectors, previous studies have combined interactive visual analytics with interactive or explainable ML to introduce interpretability into the analysis of latent vectors [18,23,68]. Several studies [18,20,59] used DRL to extract semantic dimensions and associate model performance with human concepts (e.g., brightness of images, location of objects).…”
Section: Related Work (mentioning)
confidence: 99%
“…The semantic dimensions learned by DRL are directly used without refinement, mostly because they are low-level concepts that can be easily extracted by ML. Jia et al [23] proposed a visual explainable active learning approach that asks users questions and uses their answers to learn explainable attributes that can be used to classify images from unseen classes. Zhao et al [68] proposed a visualization tool where users can explore and label image patches with a certain concept.…”
Section: Related Work (mentioning)
confidence: 99%
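To make the question-answering idea in this excerpt concrete, the following sketch shows one way a user's answer could refine a class-attribute matrix before it is reused for zero-shot classification. The question wording, blending weight, and update rule are hypothetical and not taken from the cited papers.

import numpy as np

def ask_attribute_question(class_name, attribute_name):
    """Placeholder oracle: ask whether a class has an attribute (yes/no)."""
    answer = input(f"Does a {class_name} have the attribute '{attribute_name}'? [y/n] ")
    return 1.0 if answer.strip().lower().startswith("y") else 0.0

def refine_class_attribute_matrix(matrix, class_idx, attr_idx, user_value, weight=0.8):
    """Blend a user's answer into the machine-estimated entry of the matrix."""
    refined = matrix.copy()
    refined[class_idx, attr_idx] = (
        weight * user_value + (1.0 - weight) * refined[class_idx, attr_idx]
    )
    return refined

# The refined matrix can then be fed back into a zero-shot classifier such as
# zero_shot_classify above, closing the question-answer-update loop.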