2018 | DOI: 10.1080/01621459.2017.1307116

Distribution-Free Predictive Inference for Regression

Abstract: We develop a general framework for distribution-free predictive inference in regression, using conformal inference. The proposed methodology allows for the construction of a prediction band for the response variable using any estimator of the regression function. The resulting prediction band preserves the consistency properties of the original estimator under standard assumptions, while guaranteeing finite-sample marginal coverage even when these assumptions do not hold. We analyze and compare, both empirical…
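To make the framework concrete, below is a minimal sketch of split conformal inference, one of the variants the abstract refers to. It wraps an arbitrary regression estimator; the sklearn model, the function name, and the 50/50 split are illustrative assumptions, not the paper's accompanying implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def split_conformal_band(X, y, X_test, alpha=0.1, model=None):
    """Split conformal prediction band: marginal coverage >= 1 - alpha
    in finite samples, for any plug-in regression estimator."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(y))
    train, calib = idx[: len(y) // 2], idx[len(y) // 2:]

    model = model if model is not None else LinearRegression()
    model.fit(X[train], y[train])

    # Nonconformity scores: absolute residuals on the held-out half.
    scores = np.abs(y[calib] - model.predict(X[calib]))

    # Conformal quantile: the ceil((m + 1)(1 - alpha))-th smallest score.
    m = len(calib)
    q = np.sort(scores)[min(int(np.ceil((m + 1) * (1 - alpha))), m) - 1]

    pred = model.predict(X_test)
    return pred - q, pred + q
```

The resulting band pred ± q covers a fresh response with probability at least 1 - alpha under exchangeability alone, whether or not the fitted model is any good; model quality only affects the band's width.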

Cited by 471 publications (554 citation statements)
References 30 publications
“…As an immediate corollary of Theorem 1, note that it also follows that conformal quantile regression bands have asymptotic conditional coverage, which we define as in [13].…”
Section: Theoretical Analysis (mentioning)
confidence: 97%
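The excerpt concerns conformal quantile regression. Below is a sketch of the standard conformalized quantile regression recipe (fit lower and upper quantile estimators, then widen the raw band by a correction computed on a calibration split); the gradient-boosting quantile learners and the train/calibration split are assumptions for illustration, not the cited theorem's exact setting.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def cqr_band(X_tr, y_tr, X_cal, y_cal, X_test, alpha=0.1):
    """Conformal quantile regression band: quantile fits plus a
    calibration correction restoring finite-sample marginal coverage."""
    lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_tr, y_tr)
    hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_tr, y_tr)

    # Score = how far each calibration point falls outside the raw band
    # (negative when it lies inside).
    s = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))
    m = len(y_cal)
    q = np.sort(s)[min(int(np.ceil((m + 1) * (1 - alpha))), m) - 1]

    return lo.predict(X_test) - q, hi.predict(X_test) + q
```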
“…Example 2. Figure 2 shows a toy regression problem, where 40 training samples drawn from a sine function have feature x in [0, 5], and 10 training samples have feature x in (10, 15]. However, the testing samples have feature x in (5, 10].…”
Section: Motivation (mentioning)
confidence: 99%
“…However, the testing samples have feature x in (5, 10]. We use a neural network (NN) regressor to fit the data, and as shown, the NN does a better job of fitting the sine function in [0, 5] than in (10, 15]. Meanwhile, it does a terrible job of extrapolating outside the training space, i.e., (5, 10].…”
Section: Motivation (mentioning)
confidence: 99%
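The toy setup in this excerpt is straightforward to reproduce. A sketch under assumed details (network size, training iterations, and the exact sample draws are guesses; the excerpt does not specify them):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# 40 training samples with x in [0, 5], 10 with x in (10, 15];
# the test samples sit in the gap (5, 10].
x_tr = np.concatenate([rng.uniform(0, 5, 40), rng.uniform(10, 15, 10)])
y_tr = np.sin(x_tr)

nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                  random_state=0).fit(x_tr[:, None], y_tr)

# Fit error is smallest on the densely sampled [0, 5], larger on the
# sparse (10, 15], and typically worst in the unseen gap (5, 10].
for a, b in [(0, 5), (10, 15), (5, 10)]:
    xs = np.linspace(a + 0.1, b, 50)
    mse = np.mean((nn.predict(xs[:, None]) - np.sin(xs)) ** 2)
    print(f"MSE on ({a}, {b}]: {mse:.3f}")
```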
“…They mainly make changes to feature values and observe the resulting effect on the prediction loss. The loss change is then taken as the measure of a feature's local importance [11]. This method relies only on the model's output and provides a unified way to check feature contributions for black-box models.…”
Section: Related Work (mentioning)
confidence: 99%
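The perturb-and-measure scheme described here is commonly implemented as permutation feature importance. A minimal sketch, assuming a fitted model exposing predict and a user-supplied loss (the function name and defaults are mine, not the cited paper's):

```python
import numpy as np

def perturbation_importance(model, X, y, loss, n_repeats=10, seed=0):
    """Score each feature by the average loss increase observed after
    shuffling that feature's column while holding the others fixed."""
    rng = np.random.default_rng(seed)
    base = loss(y, model.predict(X))
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores[j] += loss(y, model.predict(Xp)) - base
    return scores / n_repeats  # larger = more important locally
```

Because only the model's predictions are queried, the procedure treats the model as a black box, exactly the property the excerpt highlights.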