Harvard Data Science Review 2022
DOI: 10.1162/99608f92.16c71dad

Private Prediction Sets

Abstract: We introduce a method for online conformal prediction with decaying step sizes. Like previous methods, ours possesses a retrospective guarantee of coverage for arbitrary sequences. However, unlike previous methods, we can simultaneously estimate a population quantile when it exists. Our theory and experiments indicate substantially improved practical properties: in particular, when the distribution is stable, the coverage is close to the desired level for every time point, not just on average over the observed…
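To make the abstract's description concrete, below is a minimal sketch of online conformal prediction with a decaying step size. The initial threshold, the schedule η_t = η₀/√t, and the toy score distribution are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def online_conformal(scores, alpha=0.1, eta0=0.05):
    """Online conformal threshold tracking with a decaying step size (sketch).

    scores[t] is the nonconformity score s(x_t, y_t) revealed at time t; the
    prediction set at time t is {y : s(x_t, y) <= q_t}, so an "error" means
    the set missed the true label. The update is a stochastic gradient step
    on the pinball (quantile) loss at level 1 - alpha.
    """
    q = 0.0                          # initial threshold (illustrative choice)
    thresholds, errors = [], []
    for t, s_t in enumerate(scores, start=1):
        thresholds.append(q)
        err = float(s_t > q)         # 1 if the true label was not covered
        eta_t = eta0 / np.sqrt(t)    # decaying step size (one common schedule)
        q += eta_t * (err - alpha)   # undercoverage raises q, overcoverage lowers it
        errors.append(err)
    return np.array(thresholds), np.array(errors)

# Toy check on i.i.d. scores: long-run miscoverage should approach alpha,
# and q_t should settle near the population (1 - alpha)-quantile.
rng = np.random.default_rng(0)
thr, err = online_conformal(rng.exponential(size=5000), alpha=0.1)
print(f"miscoverage: {err.mean():.3f}, final threshold: {thr[-1]:.2f}")
```

With a decaying rather than constant step size, the threshold stops oscillating once the score distribution is stable, which is what yields coverage close to 1 − α at every time point rather than only on average.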

Cited by 21 publications (36 citation statements)
References 63 publications
“…In these scenarios, one would rather use only model predictions that are more likely to be reliable. GP would detect “distribution shifts”, where new datapoints are very different from the training set, by assigning high uncertainty to its predictions, while a deep learning model may “fail silently”, generating low-accuracy predictions without providing any obvious sign of failure. Uncertainty can also be used for Bayesian optimization/active learning, where the model assists the practitioner in selecting which datapoints to experimentally evaluate next. Several techniques for estimating uncertainty in deep learning have been introduced in the literature. As we will show here, however, the uncertainty estimates they provide are much more poorly calibrated than those provided by a GP.…”
Section: Introduction
confidence: 99%
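The quoted contrast between GP uncertainty and silently failing deep models is easy to see in code. A hedged illustration follows, using scikit-learn's GaussianProcessRegressor; the kernel, training range, and query points are assumptions chosen only to show the effect.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Fit a GP on a narrow input range, then query inside and far outside it.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(30, 1))
y_train = np.sin(3 * X_train).ravel() + 0.1 * rng.standard_normal(30)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=0.01)
gp.fit(X_train, y_train)

for name, X in [("in-distribution x=0", [[0.0]]), ("shifted x=4", [[4.0]])]:
    mean, std = gp.predict(np.array(X), return_std=True)
    print(f"{name}: mean={mean[0]:+.2f}, predictive std={std[0]:.2f}")
```

Near the training data the predictive standard deviation shrinks toward the noise level; at the shifted point it reverts toward the prior scale, flagging the prediction as unreliable.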
“…In addition, we compared the number of labeled examples needed to reject a null hypothesis at level 1 − α = 95% with high probability. See (5) for a Python package implementing prediction-powered inference, which contains code for reproducing the experiments, and (6) for the data used in the experiments.…”
Section: Application Of Prediction-powered Inference To Real Datasets
confidence: 99%
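For orientation, here is a from-scratch sketch of the prediction-powered confidence interval for a mean. It is not the authors' package cited as (5); the function name and the normal-approximation interval are assumptions.

```python
import numpy as np
from scipy import stats

def ppi_mean_ci(y_lab, yhat_lab, yhat_unlab, alpha=0.05):
    """Prediction-powered CI for a mean (from-scratch sketch).

    Uses model predictions on a large unlabeled set, debiased by the
    model's average error (the "rectifier") on a small labeled set.
    """
    y_lab, yhat_lab, yhat_unlab = map(np.asarray, (y_lab, yhat_lab, yhat_unlab))
    n, N = len(y_lab), len(yhat_unlab)
    rectifier = y_lab - yhat_lab                 # model's bias on labeled data
    theta = yhat_unlab.mean() + rectifier.mean()
    se = np.sqrt(yhat_unlab.var(ddof=1) / N + rectifier.var(ddof=1) / n)
    z = stats.norm.ppf(1 - alpha / 2)            # normal approximation
    return theta - z * se, theta + z * se
```

By confidence-interval duality, a null value is rejected at level 1 − α exactly when it falls outside the returned interval, which is how the labeled-sample comparison in the quote can be framed: a more accurate model shrinks the rectifier's variance, so fewer labeled examples are needed.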
“…Data and materials availability: The data and code are available in the accompanying Python package (5) and data repository (6). All other data needed to evaluate the conclusions in the paper are present in the paper or the Supplementary Materials.…”
Section: Acknowledgments
confidence: 99%
“…With scores for all targets, it is possible to calibrate predictions and in fact use this calibration data for statistically rigorous conformal prediction to generate prediction sets, which contain the true target with some degree of confidence [21]. However, conformal prediction will not be included in the following experiments.…”
Section: Experimental Approach
confidence: 99%
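As a sketch of how such prediction sets are formed, the split conformal construction below calibrates a score threshold on held-out data; the score representation and function name are assumptions, not the cited work's code.

```python
import numpy as np

def conformal_set(cal_scores, test_scores, alpha=0.1):
    """Split conformal prediction (sketch).

    cal_scores: nonconformity scores s(x_i, y_i) on a held-out calibration set.
    test_scores: scores s(x, y) for every candidate label y of one test point.
    Returns indices of labels kept in the prediction set, which contains the
    true label with probability at least 1 - alpha under exchangeability.
    """
    n = len(cal_scores)
    # Finite-sample-corrected empirical quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(cal_scores, level, method="higher")
    return np.where(np.asarray(test_scores) <= qhat)[0]

rng = np.random.default_rng(1)
cal = rng.normal(size=500)    # placeholder calibration scores
test = rng.normal(size=10)    # placeholder scores for 10 candidate labels
print("prediction set:", conformal_set(cal, test))
```

The only model-specific ingredient is the score function; any calibrated score yields the same 1 − α coverage guarantee, which is why the quoted passage treats conformal prediction as an optional post-processing step.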