Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2018
DOI: 10.1145/3219819.3220070
Learning Credible Models

Abstract: In many settings, it is important that a model be capable of providing reasons for its predictions (i.e., the model must be interpretable). However, the model's reasoning may not conform with well-established knowledge. In such cases, while interpretable, the model lacks credibility. In this work, we formally define credibility in the linear setting and focus on techniques for learning models that are both accurate and credible. In particular, we propose a regularization penalty, expert yielded estimates (EYE)…
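The abstract describes a regularization penalty (EYE) that steers a linear model toward features domain experts consider relevant. As an illustrative stand-in — not the paper's exact EYE formulation — the sketch below applies an L1 (sparsity) penalty to features experts have not flagged and an L2 (shrinkage) penalty to expert-flagged features, so that among correlated inputs the model prefers the expert-identified one. All function names and the synthetic data are hypothetical.

```python
import numpy as np

def credibility_penalty(theta, expert_mask, lam=1.0):
    """Illustrative expert-informed penalty (not the paper's exact EYE term):
    L1 on non-expert features encourages dropping them; L2 on expert-flagged
    features only shrinks them, so known-relevant inputs are preferred."""
    known = theta[expert_mask]
    unknown = theta[~expert_mask]
    return lam * (np.abs(unknown).sum() + 0.5 * np.dot(known, known))

def fit_logistic(X, y, expert_mask, lam=0.1, lr=0.1, steps=2000):
    """Logistic regression fit by plain (sub)gradient descent on
    logistic loss plus the penalty above."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ theta))      # predicted probabilities
        grad = X.T @ (p - y) / n                   # logistic-loss gradient
        # subgradient of the penalty: L2 gradient on expert features,
        # sign() subgradient of L1 on the rest
        pen = np.where(expert_mask, lam * theta, lam * np.sign(theta))
        theta -= lr * (grad + pen)
    return theta

# Two nearly identical features; the expert flags only the first.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([x, x + 0.01 * rng.normal(size=200)])
y = (x > 0).astype(float)
expert_mask = np.array([True, False])
theta = fit_logistic(X, y, expert_mask, lam=0.5)
```

With redundant features, an unpenalized fit could split weight arbitrarily between them; the asymmetric penalty concentrates weight on the expert-flagged feature, which is the credibility behavior the abstract motivates.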


Cited by 25 publications (30 citation statements). References 28 publications.
“…For instance, if we had better ways of representing and exploring the Rashomon set (see Challenge 9), domain experts might be able to search within it effectively, without fear of leaving that set and producing a suboptimal model. If we knew domain experts' views about the importance of features, we should be able to incorporate that through regularization [307]. Better interfaces might elicit better constraints from domain experts and incorporate such constraints into the models.…”
Section: Sophisticated Rounding Methods (mentioning)
confidence: 99%
“…We will discuss several forms of interpretable machine learning models for different applications below, but there can never be a single definition; e.g., in some domains, sparsity is useful, and in others it is not. There is a spectrum between fully transparent models (where we understand how all the variables are jointly related to each other) and models that are lightly constrained in model form (such as models that are forced to increase as one of the variables increases, or models that, all else being equal, prefer variables that domain experts have identified as important, see [12]).…”
Section: Introduction (mentioning)
confidence: 99%
“…In fact, nocturnal temperature dysregulation, as an age-related sleep disturbance, contributes to the fragmentation of sleep, which is a common predisposing factor for sleep complaints in caregivers [9]. Since the important features identified by the models agree with well-established CPWD sleep studies, the proposed models can be applicable to CPWD [93].…”
Section: Performance Evaluation (mentioning)
confidence: 62%