2019
DOI: 10.1111/rssc.12370
Prediction with High Dimensional Regression Via Hierarchically Structured Gaussian Mixtures and Latent Variables

Abstract: We propose a hierarchical Gaussian locally linear mapping structured mixture model, named HGLLiM, to predict low dimensional responses based on high dimensional covariates when the associations between the responses and the covariates are non‐linear. For tractability, HGLLiM adopts inverse regression to handle the high dimension and locally linear mappings to capture potentially non‐linear relations. Data with similar associations are grouped together to form a cluster. A mixture is composed of several…
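From this description, HGLLiM builds on the Gaussian Locally Linear Mapping (GLLiM) template, in which inverse regression models the high-dimensional covariates as locally affine functions of the low-dimensional responses. A hedged sketch of that local model, with notation assumed from the GLLiM literature rather than quoted from the paper (x the low-dimensional response, y the high-dimensional covariate vector, Z the cluster label):

\[
p(\mathbf{y} \mid \mathbf{x}, Z = k) = \mathcal{N}\bigl(\mathbf{y};\, \mathbf{A}_k \mathbf{x} + \mathbf{b}_k,\; \boldsymbol{\Sigma}_k\bigr), \qquad
p(\mathbf{x} \mid Z = k) = \mathcal{N}\bigl(\mathbf{x};\, \mathbf{c}_k,\; \boldsymbol{\Gamma}_k\bigr), \qquad
p(Z = k) = \pi_k .
\]

Each cluster k thus carries its own affine map (A_k, b_k), and the overall non-linear response–covariate association is captured piecewise by the mixture.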

Cited by 4 publications (5 citation statements): 1 supporting, 4 mentioning, 0 contrasting.
References 19 publications.
“…Its computation is straightforward once the GLLiM model has been learned. As shown later in our experiments and various papers [17,16,41,56], it performs well in several cases. Similarly, other moments can be easily computed.…”
Section: Prediction Using the Posterior Mean (supporting)
Confidence: 79%
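For context, the posterior-mean predictor referenced here has a closed form under the GLLiM model of [17]; a sketch in the notation introduced above (assumed, not taken from this page):

\[
\widehat{\mathbf{x}}(\mathbf{y}) = \mathbb{E}[\mathbf{x} \mid \mathbf{y}] = \sum_{k=1}^{K} w_k(\mathbf{y}) \bigl(\mathbf{A}^{*}_k \mathbf{y} + \mathbf{b}^{*}_k\bigr), \qquad
w_k(\mathbf{y}) \propto \pi_k\, \mathcal{N}\bigl(\mathbf{y};\, \mathbf{A}_k \mathbf{c}_k + \mathbf{b}_k,\; \boldsymbol{\Sigma}_k + \mathbf{A}_k \boldsymbol{\Gamma}_k \mathbf{A}_k^{\top}\bigr),
\]

where the forward parameters (A*_k, b*_k) follow in closed form from the learned inverse parameters, which is why the computation is straightforward once the model has been fitted.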
“…In the same vein as inverse regression approaches, and in contrast to deep learning approaches mentioned in Section 2, we propose to use the Gaussian Locally Linear Mapping (GLLiM) model [17] that provides a probability distribution selected in a family of mixture of Gaussian distributions {p(x | y; θ), θ ∈ Θ}, where the mixture parameters are denoted by θ. There have been several extensions and uses of GLLiM, including more robust [41,56] and deep [32] versions. However, in all these contexts the focus is on using the model for predictions without fully exploiting the posterior distributions provided by GLLiM.…”
Section: Parametric Posterior Approximation with Gaussian Mixtures (mentioning)
Confidence: 99%
“…We note here that both GLoME and BLoME models have been thoroughly studied in the statistics and machine learning literatures in many different guises, including localized MoE [86,87,69,15], normalized Gaussian networks [90], MoE modeling of priors in Bayesian nonparametric regression [83,82], cluster-weighted modeling [47], deep mixture of linear inverse regressions [55], the hierarchical Gaussian locally linear mapping structured mixture (HGLLiM) model [95], multiple-output Gaussian gated mixture of linear experts [73], and approximate Bayesian computation with surrogate posteriors using GLLiM [39].…”
Section: Mixture of Experts Models (mentioning)
Confidence: 99%
“…Recently it was proposed to approximate non-linear high-dimensional to low-dimensional (high-to-low) mappings with mixtures of linear-Gaussian [17], [19] and linear-Student [20] regressions. These models adopt an inverse regression strategy, namely they learn a low-to-high mapping followed by the evaluation of a high-to-low mapping.…”
Section: Introduction (mentioning)
Confidence: 99%
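A minimal sketch of this two-step strategy, assuming GLLiM-style inverse parameters (pi, c, Gamma, A, b, Sigma) have already been estimated; the function name and array shapes are illustrative, not taken from the cited implementations:

import numpy as np
from scipy.stats import multivariate_normal

def gllim_forward_predict(y, pi, c, Gamma, A, b, Sigma):
    """Posterior-mean prediction E[x | y] from learned low-to-high
    (inverse) parameters. Shapes, with K components, x in R^L, y in R^D:
    pi (K,), c (K, L), Gamma (K, L, L), A (K, D, L), b (K, D), Sigma (K, D, D).
    Illustrative sketch, not the authors' code."""
    K, L = c.shape
    log_w = np.empty(K)
    means = np.empty((K, L))
    for k in range(K):
        # Marginal of y under component k: N(A_k c_k + b_k, Sigma_k + A_k Gamma_k A_k^T)
        m_y = A[k] @ c[k] + b[k]
        V_y = Sigma[k] + A[k] @ Gamma[k] @ A[k].T
        log_w[k] = np.log(pi[k]) + multivariate_normal.logpdf(y, mean=m_y, cov=V_y)
        # Closed-form forward (high-to-low) affine map for component k
        S_star = np.linalg.inv(np.linalg.inv(Gamma[k]) + A[k].T @ np.linalg.solve(Sigma[k], A[k]))
        A_star = S_star @ A[k].T @ np.linalg.inv(Sigma[k])
        b_star = S_star @ (np.linalg.solve(Gamma[k], c[k]) - A[k].T @ np.linalg.solve(Sigma[k], b[k]))
        means[k] = A_star @ y + b_star
    w = np.exp(log_w - log_w.max())  # stabilized responsibilities w_k(y)
    w /= w.sum()
    return w @ means  # E[x | y] = sum_k w_k(y) (A_k* y + b_k*)

The learning step fits only the low-to-high parameters; the high-to-low map is never fitted directly, only evaluated in closed form, which is what makes the inverse strategy tractable when D is much larger than L.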
“…These piecewise linear models are well suited to capture potentially non-linear relations. This was extensively discussed in [17] and in [19], and was successfully applied to both head-pose estimation [18] and audio-source localization [21], [22].…”
Section: Introduction (mentioning)
Confidence: 99%