2021
DOI: 10.48550/arxiv.2110.00816
Preprint

Calibrated Multiple-Output Quantile Regression with Representation Learning

Abstract: We develop a method to generate predictive regions that cover a multivariate response variable with a user-specified probability. Our work is composed of two components. First, we use a deep generative model to learn a representation of the response that has a unimodal distribution. Existing multiple-output quantile regression approaches are effective in such cases, so we apply them on the learned representation, and then transform the solution to the original space of the response. This process results in a f…

Cited by 4 publications (6 citation statements)
References 28 publications (55 reference statements)
“…Based on the discussion above, this research differs from other recent studies which did not use and study multiple representations in elementary schools. These include research on 3D multiple representations with M3DETR (Guan et al, 2022), representation learning on multiple family quantile regression material (Feldman et al, 2023), and multimodal representation and correlation learning (Mai et al, 2023). Our findings provide a more comprehensive concept because it discusses the approach, implementation, and impact of multiple representations in elementary school science learning.…”
Section: Multiple Representation Approach Impact On Elementary School... (mentioning)
confidence: 93%
“…As for the depth-based quantile regions, moreover, they are necessarily convex, even for distributions with highly non-convex shapes as in Figure 1. To circumvent this convexity problem, Feldman, Bates and Romano (2021) propose a clever transformation of the data turning its distribution into a latent one with convex level sets by fitting a conditional variational auto-encoder (Sohn, Lee and Yan (2015)). The probability contents of the quantile regions resulting from this machine-learning type of "convexity reparation," however, remain out of control (they still depend on P).…”
Section: Quantile Regression, Single- and Multiple-Output (mentioning)
confidence: 99%
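The "convexity reparation" described in the statement above can be illustrated with a toy invertible map in place of the fitted conditional variational auto-encoder: a convex (spherical) region in a Gaussian latent space corresponds, through the decoder, to a non-convex region in data space while keeping the same empirical probability content. The `decode` map below is a hypothetical stand-in, not the CVAE from the paper.

```python
import numpy as np

# Hypothetical invertible "decoder": maps a standard-normal latent z to a
# banana-shaped data distribution whose level sets are non-convex.
def decode(z):
    z1, z2 = z[..., 0], z[..., 1]
    return np.stack([z1, z2 + 0.5 * z1**2], axis=-1)

rng = np.random.default_rng(1)
z = rng.standard_normal((5000, 2))

# A convex (circular) 90% region in latent space ...
radius_sq = np.quantile(np.sum(z**2, axis=1), 0.9)
inside = np.sum(z**2, axis=1) <= radius_sq

# ... whose image under decode() is a non-convex data-space region
# with the same empirical probability content.
y = decode(z)
content = inside.mean()  # close to 0.9 by construction
```

The point of the quoted critique is that once the map is *learned* rather than exact, the probability content of the transformed region is no longer guaranteed and still depends on the underlying distribution P.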
“…The time-series data sets we use in the experiments in Section 5 include the day of the week as an element in the feature vector. Therefore, we assess the violation of day-stratified coverage [22,37], as a proxy for conditional coverage. That is, we measure the average deviation of the coverage in each day of the week from the nominal coverage level; see Supplementary Section F.2 for a formal definition.…”
Section: ∆Coverage (mentioning)
confidence: 99%
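The day-stratified coverage check quoted above can be sketched as follows: compute the empirical coverage within each day-of-week stratum and average the absolute deviations from the nominal level. The function name and toy data are hypothetical, not from the paper.

```python
import numpy as np

def day_stratified_coverage_gap(covered, day_of_week, nominal=0.9):
    """Average absolute deviation of per-day empirical coverage from the
    nominal level -- a proxy for conditional coverage."""
    covered = np.asarray(covered, dtype=float)
    day_of_week = np.asarray(day_of_week)
    gaps = []
    for d in np.unique(day_of_week):
        mask = day_of_week == d
        gaps.append(abs(covered[mask].mean() - nominal))
    return float(np.mean(gaps))

# Toy example: one "day" perfectly covered, another covered 80% of the time.
covered = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
days    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
gap = day_stratified_coverage_gap(covered, days, nominal=0.9)  # 0.1
```

Here `covered[i]` indicates whether the i-th response fell inside its predictive region; a gap of zero would mean every stratum attains exactly the nominal coverage.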
“…where E, D are the encoder and decoder networks of the STDQR model, m is the dimension of z, Q_{χ²_m} is the inverse cumulative distribution function (ICDF) of the chi-squared distribution with m degrees of freedom, and 0_m is a vector of zeros of size m. We fit the model with a batch size of 64 with a loss that penalizes deviation of the output of the encoder E from the multivariate normal distribution using the KL-divergence loss [46,47]. We also added a mean squared error loss to encourage the decoder's D output to faithfully reconstruct Y, as explained in [37]. The model's hyper-parameters are summarized in Table 2.…”
Section: B2 Multiple-Output Quantile Regression Experimental Setup (mentioning)
confidence: 99%
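The spherical latent region implied by the quoted setup can be sketched as a membership test: a latent vector z lies in the region iff ||z − 0_m||² ≤ Q_{χ²_m}(1 − α). The sketch below assumes standard-normal latents in place of encoder outputs E(Y); the encoder, decoder, and training losses from the quote are omitted, and the function name is hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def in_latent_quantile_region(z, alpha, m):
    """True iff z lies in the spherical latent region centered at 0_m whose
    radius is set by the chi-squared ICDF with m degrees of freedom."""
    radius_sq = chi2.ppf(1.0 - alpha, df=m)  # Q_{chi^2_m}(1 - alpha)
    return float(np.sum(np.square(z))) <= radius_sq

# If z ~ N(0_m, I_m), then ||z||^2 ~ chi^2_m, so the region's probability
# content is exactly 1 - alpha; check this empirically.
rng = np.random.default_rng(0)
m, alpha = 3, 0.1
zs = rng.standard_normal((20000, m))  # stand-in for encoder outputs E(Y)
hit_rate = np.mean([in_latent_quantile_region(z, alpha, m) for z in zs])
# hit_rate is close to 1 - alpha = 0.9
```

In the full method, points of this latent sphere would be passed through the decoder D to obtain the predictive region in the original response space.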