2021
DOI: 10.1111/rssc.12494

Sparse Reduced-Rank Regression for Exploratory Visualisation of Paired Multivariate Data

Abstract: This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

Cited by 19 publications (22 citation statements). References 64 publications (137 reference statements).
“…Having established that registered 2-d ADRs are as successful as a standardized, rich set of morphometric features in predicting transcriptomic identity, and that ADRs can be generated in a fully automated manner without significant performance loss, we study layer-specific axon and dendrite innervation. We treat the search for genes that are predictive of laminar innervation strength as a sparse regression problem 30, 31 (Table 1, Supplementary Tables S1–S3). We find that the sets of laminar-innervation-predicting genes within molecularly defined subclasses (Sst, Pvalb, Vip, Lamp5) and terminal t-types (3 Sst, 2 Pvalb, 2 Lamp5 types) are highly reproducible (Supplementary Table S3) and almost mutually exclusive (Figure 3f).…”
Section: Results
Confidence: 99%
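The sparse-regression step described in this excerpt (finding genes whose expression predicts laminar innervation strength) can be illustrated with a minimal lasso sketch. The data shapes, variable names, and the LassoCV estimator below are illustrative assumptions, not the cited study's data or pipeline.

```python
# Minimal sketch of sparse regression for gene selection: an L1 penalty keeps
# only a few genes with non-zero coefficients. All shapes and names are
# hypothetical placeholders, not the cited study's pipeline.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
gene_expression = rng.normal(size=(200, 1000))   # cells x genes (synthetic)
innervation = gene_expression[:, :5] @ rng.normal(size=5) + rng.normal(size=200)

X = StandardScaler().fit_transform(gene_expression)
lasso = LassoCV(cv=5).fit(X, innervation)        # sparsity level chosen by cross-validation
selected_genes = np.flatnonzero(lasso.coef_)     # indices of genes with non-zero coefficients
print(f"{selected_genes.size} genes selected")
```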
“…To determine how the transcriptomic profile of shSLK neurons relates to the changes in electrophysiological parameters, we applied a sparse reduced-rank regression (RRR) approach to predict the three electrophysiological properties we found to be altered in shSLK neurons. This approach uses a low-dimensional representation of the expression of selected genes 26 and allows visualization of the relationships between electrophysiological features and gene expression. We used cross-validation to tune the hyperparameters (Figures S10A–S10C) and obtained a model that selected 113 candidate genes best explaining the variability of the electrophysiological data (Table S3).…”
Section: Results
Confidence: 99%
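As a rough illustration of tuning sparse-RRR hyperparameters by cross-validation, the sketch below approximates sparse reduced-rank regression with a row-sparse multi-task lasso followed by a truncated-SVD rank constraint, then scans a small grid by held-out R². This is a simplified stand-in under stated assumptions, not the cited authors' implementation; the helper names, grid values, and data shapes are invented for the example.

```python
# Hedged sketch of cross-validated hyperparameter tuning for a sparse RRR-style
# model: row-sparse coefficients via MultiTaskLasso, then a rank constraint via
# truncated SVD. Not the cited implementation; shapes and grids are synthetic.
import numpy as np
from sklearn.linear_model import MultiTaskLasso
from sklearn.model_selection import KFold

def sparse_rrr_fit(X, Y, alpha, rank):
    """Row-sparse coefficient matrix, projected onto a rank-`rank` response subspace."""
    C = MultiTaskLasso(alpha=alpha, max_iter=5000).fit(X, Y).coef_.T   # (m, n)
    _, _, Vt = np.linalg.svd(X @ C, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]            # projector onto top-`rank` response directions
    return C @ P

def cv_score(X, Y, alpha, rank, n_splits=5):
    scores = []
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        C = sparse_rrr_fit(X[train], Y[train], alpha, rank)
        resid = Y[test] - X[test] @ C
        scores.append(1 - resid.var() / Y[test].var())   # crude held-out R^2
    return np.mean(scores)

# Example grid search (synthetic: 300 cells, 500 genes, 3 electrophysiological features)
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 500))
Y = X[:, :10] @ rng.normal(size=(10, 3)) + rng.normal(size=(300, 3))
best = max(((a, r) for a in (0.05, 0.1, 0.2) for r in (1, 2, 3)),
           key=lambda p: cv_score(X, Y, *p))
print("selected hyperparameters (alpha, rank):", best)
```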
“…RRR is a computationally efficient method which increases statistical power in settings where the number of dimensions is large compared to the number of examples. In such m ≫ j settings, RRR is nowadays a state-of-the-art method in fields with high-dimensional data such as genetics and imaging 21,22. Given j observations of m predictors and n outcome measures, standard multivariate regression Y = XC + E requires fitting m · n coefficients, with Y being the response matrix of size j × n, X the j × m predictor matrix, C the m × n coefficient matrix, and E the error term matrix of size j × n.…”
Section: Methods
Confidence: 99%
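A short worked example of the dimensions quoted above: with j observations, m predictors, and n responses, the unconstrained model Y = XC + E has m · n free coefficients. The shapes below are arbitrary choices for illustration only.

```python
# Illustration of the coefficient count in unconstrained multivariate regression
# Y = XC + E: the least-squares solution C has shape (m, n), i.e. m * n parameters.
import numpy as np

j, m, n = 50, 1000, 10                         # j observations, m predictors, n responses
rng = np.random.default_rng(2)
X = rng.normal(size=(j, m))
Y = rng.normal(size=(j, n))

C_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)  # minimum-norm solution, shape (m, n)
print(C_ols.shape, "->", C_ols.size, "coefficients")   # 1000 x 10 = 10000 parameters
```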
“…RRR is a computationally efficient method which increases statistical power in settings where the number of dimensions is large compared to the number of examples. In such m ≫ j settings, RRR is nowadays a state-of-the-art method in fields with high-dimensional data such as genetics and imaging 21,22. [In RRR the coefficient matrix is factored as C = AB, with A of size m × k and B of size k × n.] This decomposition allows for interpretations of A and B: A is a mapping from the predictor matrix X to a latent representation of dimension k, and B is a mapping from the latent scores to the responses Y.…”
Section: Reduced Rank Regression Model (RRR)
Confidence: 99%
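The rank-k factorisation C = AB described in this excerpt can be sketched with classical (non-sparse) reduced-rank regression, obtained from the SVD of the ordinary least-squares fit. The function, variable names, and shapes below are illustrative assumptions, not the citing paper's code.

```python
# Sketch of the rank-k decomposition C = A B in classical reduced-rank regression:
# A (m x k) maps predictors to a k-dimensional latent representation, and
# B (k x n) maps latent scores to the responses. Shapes are synthetic examples.
import numpy as np

def reduced_rank_regression(X, Y, k):
    C_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)        # full-rank solution, (m x n)
    _, _, Vt = np.linalg.svd(X @ C_ols, full_matrices=False)
    B = Vt[:k]                                           # (k x n): latent scores -> responses
    A = C_ols @ B.T                                      # (m x k): predictors -> latent scores
    return A, B                                          # rank-k coefficients: C = A @ B

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 40))
Y = X @ rng.normal(size=(40, 6)) + 0.1 * rng.normal(size=(200, 6))
A, B = reduced_rank_regression(X, Y, k=2)
Z = X @ A                                                # latent representation, shape (200, 2)
print(A.shape, B.shape, Z.shape)
```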