2021
DOI: 10.18653/v1/2021.internlp-1

Proceedings of the First Workshop on Interactive Learning for Natural Language Processing

Abstract: Biases and artifacts in training data can cause unwelcome behavior in text classifiers (such as shallow pattern matching), leading to a lack of generalizability. One solution to this problem is to include users in the loop and leverage their feedback to improve models. We propose a novel explanatory debugging pipeline called HILDIF, enabling humans to improve deep text classifiers using influence functions as an explanation method. We experiment on the Natural Language Inference (NLI) task, showing that HILDIF c…

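For context, the abstract refers to influence functions as the explanation method. A standard formulation (Koh and Liang, 2017), which the explanation step of a pipeline like HILDIF presumably builds on, estimates how up-weighting a training example z would change the loss on a test example z_test, without retraining:

\[
\mathcal{I}_{\text{up,loss}}(z, z_{\text{test}}) = -\nabla_\theta L(z_{\text{test}}, \hat{\theta})^{\top} \, H_{\hat{\theta}}^{-1} \, \nabla_\theta L(z, \hat{\theta}),
\]

where \(\hat{\theta}\) are the trained model parameters and \(H_{\hat{\theta}}\) is the Hessian of the training loss at \(\hat{\theta}\). Training examples with the largest (positive or negative) influence scores can then be shown to users as explanations of a prediction, and user feedback on those examples can be used to debug the model.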

Cited by 0 publications
References 26 publications (49 reference statements)