23rd International Conference on Intelligent User Interfaces 2018
DOI: 10.1145/3172944.3172989
User Modelling for Avoiding Overfitting in Interactive Knowledge Elicitation for Prediction

Abstract: In human-in-the-loop machine learning, the user provides information beyond that in the training data. Many algorithms and user interfaces have been designed to optimize and facilitate this human-machine interaction; however, fewer studies have addressed the potential defects these designs can cause. Effective interaction often requires exposing the user to the training data or its statistics. The design of the system is then critical, as this can lead to double use of data and overfitting if the user reinforce…
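The "double use of data" risk the abstract describes can be illustrated with a minimal sketch (an illustrative toy model, not the paper's method): if an expert first sees a sample statistic and then reports a prior centred on that same statistic, the data effectively enter the inference twice, and the resulting posterior is overconfident compared with one built from an independently formed prior. All names and numbers below are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=20)
n, s2 = len(data), 1.0  # sample size, known observation variance

def posterior(mu0, tau2):
    """Conjugate normal-normal posterior for the unknown mean,
    given a N(mu0, tau2) prior and N(mean, s2) observations."""
    prec = 1.0 / tau2 + n / s2          # posterior precision
    mean = (mu0 / tau2 + data.sum() / s2) / prec
    return mean, 1.0 / prec             # posterior mean, variance

# Honest elicitation: the expert's prior was formed without seeing the data.
m_indep, v_indep = posterior(mu0=0.0, tau2=4.0)

# Double use of data: the expert has been shown the sample mean and
# reports a tight prior centred on it, so the data count twice.
m_double, v_double = posterior(mu0=data.mean(), tau2=0.1)

# The data-derived prior yields a smaller posterior variance than the
# data alone justify, i.e. overconfidence.
print(v_double < v_indep)
```

Here the overconfidence shows up as an understated posterior variance; in a prediction setting the same mechanism pulls the model toward noise in the displayed statistics, which is the overfitting failure mode the paper studies.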

Cited by 14 publications (10 citation statements)
References 19 publications (27 reference statements)
“…This study is positioned at the intersection of natural language dialogue [21,40,45] and interactive machine learning [1,3,18,22,26], in which the outputs of a machine learning model are visually displayed to the user [14] or are used to select informative questions for the user [13]. Our study is close to the latter and shows a promising way to obtain knowledge from the user during dialogues.…”
Section: Knowledge Acquisition Through Dialogues Using KGC Scores
confidence: 80%
“…However, that runs the risk that the users may amplify noise in the statistics, especially in low-simulation regimes, if they are not careful. Using so-called "posterior elicitation" and inferring the priors indirectly may then be helpful (Daee et al, 2018).…”
Section: Discussion
confidence: 99%
“…Such interfaces can be used for tasks that go beyond labeling to include relevant tasks such as feature creation, re-weighting features, adjusting cost matrices, or otherwise modifying model parameters. Others have pointed to specific threats in interactive ML related to the information a human provides, including that a user's input reinforces noise in the training data or statistics they see [52]. Recent works have also surveyed research in human-in-the-loop machine learning [53,54].…”
Section: Knowledge Elicitation for Expert Decision Making
confidence: 99%
“…Finally, related to providing context, some recent interactive machine learning research argues that, when experts are shown labeled data used for training or validation during elicitation, the knowledge that is elicited from them can be redundant in ways that hamper model performance [52]. This possibility is largely not mentioned by papers in our sample that explicitly described giving domain experts access to training data, but as elicitation interfaces become a more focal aspect of applied machine learning research, this risk may be important for researchers to account for.…”
Section: Establishing Context and Common Ground
confidence: 99%