2015
DOI: 10.1007/s11222-015-9608-z

Random projections for Bayesian regression

Abstract: This article deals with random projections applied as a data reduction technique for Bayesian regression analysis. We show sufficient conditions under which the entire $d$-dimensional distribution is approximately preserved under random projections by reducing the number of data points from $n$ to $k\in O(\operatorname{poly}(d/\varepsilon))$ in the case $n\gg d$. Under mild assumptions, we prove that evaluating a Gaussian likelihood function based on the projected data instead of the original data yields a $(1…
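To make the setting concrete, the following is a minimal NumPy sketch of the idea summarized in the abstract: a $k \times n$ random projection reduces the data from $n$ rows to $k$ rows, and a conjugate Gaussian posterior computed from the projected data is compared with the posterior from the full data. This is purely illustrative and not the authors' implementation; it uses a plain Gaussian sketch rather than the embeddings analysed in the paper, and the function names (`gaussian_sketch`, `posterior_mean_cov`) are made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_sketch(X, y, k):
    """Reduce (X, y) from n rows to k rows with an i.i.d. Gaussian projection."""
    n = X.shape[0]
    S = rng.normal(scale=1.0 / np.sqrt(k), size=(k, n))  # scaling gives E[S.T @ S] = I_n
    return S @ X, S @ y

def posterior_mean_cov(X, y, sigma2=1.0, tau2=10.0):
    """Conjugate Bayesian linear regression: beta ~ N(0, tau2*I), y | X, beta ~ N(X beta, sigma2*I)."""
    d = X.shape[1]
    precision = X.T @ X / sigma2 + np.eye(d) / tau2
    cov = np.linalg.inv(precision)
    mean = cov @ (X.T @ y) / sigma2
    return mean, cov

# Toy data in the regime n >> d considered in the paper.
n, d, k = 100_000, 5, 500
beta_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ beta_true + rng.normal(scale=0.5, size=n)

m_full, C_full = posterior_mean_cov(X, y)
m_proj, C_proj = posterior_mean_cov(*gaussian_sketch(X, y, k))

print("gap in posterior means:      ", np.linalg.norm(m_full - m_proj))
print("gap in posterior covariances:", np.linalg.norm(C_full - C_proj))
```

With $k$ of a few hundred rows the projected Gram matrix already approximates the full one closely, so the two posteriors nearly coincide while the likelihood evaluation touches only $k$ instead of $n$ observations.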

Cited by 28 publications (28 citation statements)
References 59 publications (75 reference statements)
“…More advanced methods comprise functional regression for functional data [38], quantile regression [25], and regression based on loss functions other than squared error loss like, e.g., Lasso regression [11,21]. In the context of Big Data, the challenges are similar to those for classification methods given large numbers of observations n (e.g., in data streams) and / or large numbers of features p. For the reduction of n, data reduction techniques like compressed sensing, random projection methods [20] or sampling-based procedures [28] enable faster computations. For decreasing the number p to the most influential features, variable selection or shrinkage approaches like the Lasso [21] can be employed, keeping the interpretability of the features.…”
Section: Statistical Data Analysis (mentioning)
confidence: 99%
“…By using the training model, the RUL of lithium-ion batteries could be predicted. To evaluate the performance of the deep neural network model, the prediction accuracy needs to be compared with other approaches such as Bayesian regression [16], the support vector machine (SVM) [17], linear regression [18], etc. To represent the prediction accuracy, statistics-based evaluation methods, e.g., standard deviation, mean squared error, root mean square error (RMSE), could be adopted to evaluate and compare the performance of different prediction models.…”
Section: Process of Deep Learning Conceptual Framework for RUL Prediction (mentioning)
confidence: 99%
“…In particular, not only maximum likelihood estimators are approximated under random projections. Geppert et al [54] showed that in important classes of Bayesian regression models, the whole structure of the posterior distribution is preserved. This yields much faster algorithms for the widely applicable and flexible, but at the same time computationally demanding Bayesian machinery.…”
Section: Lemma 11 (Distributional Johnson-Lindenstrauss Lemma) (mentioning)
confidence: 99%
“…A parallel least squares regression solver LSRN was developed in [78,97]. An implementation of some of the presented sketching techniques named RaProR was made available for the statistics programming language R [53,54,87].…”
Section: Lemma 11 (Distributional Johnson-Lindenstrauss Lemma) (mentioning)
confidence: 99%
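The sketch-and-solve pattern behind solvers such as LSRN and packages such as RaProR, mentioned in the statement above, can be illustrated in a few lines of NumPy. This toy example is not the API of either tool; it simply compares ordinary least squares on the full data with least squares on a Rademacher sketch of the data.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 50_000, 8, 400

X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(scale=0.3, size=n)

# Rademacher (+/-1) sketch with k rows; the 1/sqrt(k) scaling keeps E[S.T @ S] = I_n.
S = rng.choice([-1.0, 1.0], size=(k, n)) / np.sqrt(k)

beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)            # exact OLS on all n rows
beta_sketch, *_ = np.linalg.lstsq(S @ X, S @ y, rcond=None)  # OLS on the k sketched rows

print("relative coefficient error:",
      np.linalg.norm(beta_full - beta_sketch) / np.linalg.norm(beta_full))
```

The sketched problem has only $k$ rows, so the downstream solve is much cheaper, while the coefficient estimates remain within a small relative error of the full-data solution.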