2014
DOI: 10.1007/s11222-014-9461-5

High-dimensional regression with Gaussian mixtures and partially-latent response variables

Abstract: In this work we address the problem of approximating high-dimensional data with a low-dimensional representation. We make the following contributions. We propose an inverse regression method which exchanges the roles of input and response, such that the low-dimensional variable becomes the regressor, and which is tractable. We introduce a mixture of locally-linear probabilistic mapping model that starts with estimating the parameters of inverse regression, and follows with inferring closed-form solutions for th…

Cited by 46 publications (157 citation statements)
References 33 publications (79 reference statements)
“…In this section we summarize the mixture of linear inverse regressions of [8], which is named Gaussian locally linear mapping (GLLiM). GLLiM interchanges the roles of the input (high dimensional) and of the output (low dimensional), such that a low-to-high regression is being learned.…”
Section: Mixture of Linear Inverse Regressions (citation type: mentioning)
confidence: 99%
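To make the role reversal concrete: in the inverse parameterization each component k models the low-dimensional variable as x ~ N(c_k, Γ_k) and the high-dimensional variable as y | x ~ N(A_k x + b_k, Σ_k), so the forward prediction E[x | y] follows from standard Gaussian conditioning. The sketch below is a hypothetical helper, not the code of [8]; the function and parameter names merely mirror the usual GLLiM notation.

```python
# Minimal sketch of a GLLiM-style forward prediction: given the parameters
# of the low-to-high inverse model, compute the closed-form forward
# posterior mean E[x | y] by Gaussian conditioning in each component.
import numpy as np
from scipy.stats import multivariate_normal

def gllim_forward_mean(y, pi, c, Gamma, A, b, Sigma):
    """E[x | y] under a K-component Gaussian locally-linear mapping.

    Inverse model per component k:
        x | Z=k    ~ N(c[k], Gamma[k])               (low-dimensional prior)
        y | x, Z=k ~ N(A[k] @ x + b[k], Sigma[k])    (low-to-high affine map)
    """
    K = len(pi)
    log_w = np.empty(K)
    means = []
    for k in range(K):
        # Marginal of y under component k (standard Gaussian algebra).
        m_y = A[k] @ c[k] + b[k]
        C_y = Sigma[k] + A[k] @ Gamma[k] @ A[k].T
        log_w[k] = np.log(pi[k]) + multivariate_normal.logpdf(y, m_y, C_y)
        # Forward affine map obtained by conditioning x on y.
        A_star = Gamma[k] @ A[k].T @ np.linalg.inv(C_y)
        means.append(c[k] + A_star @ (y - m_y))
    # Posterior responsibilities of the components given y (log-sum-exp).
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return sum(w_k * m_k for w_k, m_k in zip(w, means))
```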
“…given the old parameter values θ^(old), while the maximization step computes new parameter values via maximization of the expected complete-data log-likelihood, namely θ^(new) = argmax_θ E[log p(x, y, Z | θ)], where the expectation is taken over Z given θ^(old); this yields a closed-form solution [8]. Initial responsibilities are obtained by fitting a K-component GMM to the low-dimensional data {x_n}_{n=1}^N.…”
Section: Mixture of Linear Inverse Regressions (citation type: mentioning)
confidence: 99%
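The initialization mentioned in this quote (fit a K-component GMM to the low-dimensional samples {x_n} and read off the posterior component probabilities) is straightforward to sketch. The snippet below uses scikit-learn's GaussianMixture as one plausible choice; the function name and the use of sklearn are assumptions for illustration, not the cited implementation.

```python
# Hedged sketch of the initialization described above: fit a K-component
# GMM to the low-dimensional data and use its posterior component
# probabilities as the initial responsibilities for the GLLiM EM loop.
import numpy as np
from sklearn.mixture import GaussianMixture

def initial_responsibilities(X_low, K, seed=0):
    """X_low: (N, L) array of low-dimensional samples; returns (N, K)."""
    gmm = GaussianMixture(n_components=K, covariance_type="full",
                          random_state=seed).fit(X_low)
    # predict_proba gives r_nk = p(Z_n = k | x_n), the responsibilities
    # that seed the first M-step of the EM described in the quote.
    return gmm.predict_proba(X_low)

# Example: N = 500 two-dimensional samples, K = 5 components.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
R = initial_responsibilities(X, K=5)
assert np.allclose(R.sum(axis=1), 1.0)  # each row is a distribution over k
```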