Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error
2018
DOI: 10.1080/10618600.2018.1425626

Abstract: In many problems involving generalized linear models, the covariates are subject to measurement error. When the number of covariates p exceeds the sample size n, regularized methods like the lasso or Dantzig selector are required. Several recent papers have studied methods which correct for measurement error in the lasso or Dantzig selector for linear models in the p > n setting. We study a correction for generalized linear models based on Rosenbaum and Tsybakov's matrix uncertainty selector. By not requiring …
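The abstract's starting point, Rosenbaum and Tsybakov's matrix uncertainty (MU) selector for linear models, solves min ‖β‖₁ subject to ‖(1/n)Wᵀ(y − Wβ)‖∞ ≤ λ + δ‖β‖₁, where W is the error-contaminated design and δ bounds the measurement error magnitude. A minimal sketch of that linear-model selector as a linear program follows; the function name and the tuning values λ, δ in the usage example are illustrative, and this is the plain linear MU selector, not the paper's GLM generalization (GMUS):

```python
import numpy as np
from scipy.optimize import linprog

def mu_selector(W, y, lam, delta):
    """Matrix uncertainty selector (Rosenbaum & Tsybakov, linear model):
    minimize ||beta||_1  subject to
    ||(1/n) W^T (y - W beta)||_inf <= lam + delta * ||beta||_1,
    solved as a linear program in (b_plus, b_minus) >= 0 with
    beta = b_plus - b_minus."""
    n, p = W.shape
    A = W.T @ W / n        # p x p Gram matrix of the noisy design
    c = W.T @ y / n        # p-vector of score terms
    J = np.ones((p, p))    # spreads the delta * ||beta||_1 slack into every row
    # Row j, constraint 1:  c_j - [A beta]_j - delta*sum(b+ + b-) <= lam
    # Row j, constraint 2: -c_j + [A beta]_j - delta*sum(b+ + b-) <= lam
    A_ub = np.block([[-A - delta * J,  A - delta * J],
                     [ A - delta * J, -A - delta * J]])
    b_ub = np.concatenate([lam - c, lam + c])
    cost = np.ones(2 * p)  # minimize sum(b+) + sum(b-) = ||beta||_1
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:], res.success

# Illustrative use: one active covariate, additive measurement error.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))              # true (unobserved) covariates
beta_true = np.zeros(p)
beta_true[0] = 1.5
W = X + 0.1 * rng.standard_normal((n, p))    # observed, error-contaminated design
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat, ok = mu_selector(W, y, lam=0.2, delta=0.1)
```

The estimate concentrates on the first coordinate, somewhat shrunk toward zero by the λ + δ‖β‖₁ slack, which is the selector's price for not needing the error covariance.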

Cited by 28 publications (30 citation statements). References 36 publications.
“…Sørensen et al. (2018) proposed a GMUS for sparse high-dimensional GLMs with measurement error. The GMUS estimator does not make use of Σu.…”
Section: Model Illustration and Simulation Results
confidence: 99%
“…Additionally, Sørensen et al. (2018) developed the generalized matrix uncertainty selector (GMUS) for generalized linear models. Both the conditional score approach and GMUS require subjective choices of tuning parameters.…”
Section: Introduction
confidence: 99%
“…In future work, we will also extend the estimation methods to the settings where the covariates are measured with multiplicative errors which are shown to be reducible to the additive error problem as studied in the present work [36,30]. Moreover, we are interested in applying the analysis and concentration of measure results developed in the current paper and in our ongoing work to the more general contexts and settings where measurement error models are introduced and investigated; see for example [16,8,44,24,20,45,9,7,14,46,25,28,47,53,23,29,32,2,43,41,42] and references therein.…”
Section: Discussion
confidence: 98%
“…We present an architecture that recovers missing expression data for multiple tissue types under the missing completely at random assumption (MCAR; Little and Rubin, 2019), that is, the missingness of the data is independent of both the observed and the missing variables. In contrast to existing linear methods for deconfounding gene expression (Sørensen et al., 2018), our model integrates covariates (global determinants of gene expression; Stegle et al, 2012) to account for their non-linear effects. In particular, a characteristic feature of our architecture is the use of word embeddings (Mikolov et al, 2013) to learn rich and distributed representations for the tissue types and other covariates.…”
Section: Introduction
confidence: 99%