2021
DOI: 10.21203/rs.3.rs-384204/v1
Preprint

New adaptive lasso approaches for variable selection in automated pharmacovigilance signal detection

Abstract: Background: Adverse effects of drugs are often identified after market introduction. Post-marketing pharmacovigilance aims to detect them as early as possible and relies on spontaneous reporting systems collecting suspicious cases. Signal detection tools have been developed to mine these large databases and counts of reports are analysed with disproportionality methods. To address disproportionality method biases, recent methods apply to individual observations taking into account all exposures for the same pa…

Cited by 1 publication (2 citation statements)
References 41 publications
“…The cross‐validation scheme described in Algorithm 1 is standard for the calibration of the lasso and the weighted lasso (i.e., when weights are given a priori). But it is also commonly used for the calibration of the adaptive lasso (Chang et al., 2020; Courtois et al., 2021; Dessie et al., 2021; Pollard et al., 2021). Indeed, for the calibration of the one‐step lasso, for example, many statistical analysts implement Algorithm 2: they first compute initial estimates on the total sample, from which they derive the weight vector $\mathbf{w} \in \mathbb{R}^p_{\ge 0}$ (Step 1).…”
Section: The Adaptive Lasso Under the Linear Regression Model
confidence: 99%
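The scheme the excerpt attributes to Algorithm 2 can be sketched in a few lines. This is a minimal illustration under stated assumptions (a linear model, OLS initial estimates, and weight exponent gamma = 1); Python/scikit-learn is used here as a stand-in for the R glmnet workflow the cited papers describe, and all variable names and the simulated data are illustrative:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3))
y = X @ beta_true + rng.standard_normal(n)

# Step 1: initial estimates computed on the TOTAL sample. This is the
# crux of the naive scheme: the weights derived below are then reused
# inside every cross-validation fold, leaking held-out information.
beta_init = LinearRegression().fit(X, y).coef_

# Adaptive-lasso weights w_j = 1 / |beta_init_j| (gamma = 1).
w = 1.0 / np.abs(beta_init)

# Treat the weights as given a priori: absorb them by rescaling the
# columns, then cross-validate the single penalty as for a plain lasso.
X_scaled = X / w
cv = LassoCV(cv=5).fit(X_scaled, y)

# Map the fitted coefficients back to the original scale.
beta_adaptive = cv.coef_ / w
```

The rescaling trick (dividing each column by its weight) is what makes a weighted-lasso problem solvable with an unweighted lasso routine; in glmnet the same effect is obtained via the `penalty.factor` argument.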
“…In particular, when using the popular glmnet R package (Friedman et al., 2010), it is tempting to implement the naive scheme by simply applying the cv.glmnet function to select the hyperparameter of the adaptive lasso after computing the initial estimates on the total sample and considering the resulting weight vector as given a priori; see Algorithms 2, A1, and A3 for the complete description of this strategy in the case of the one‐step lasso, ols‐adaptive lasso, and ridge‐adaptive lasso, respectively. This improper cross‐validation scheme was used in the illustrative example of section 2.8.1 of Bühlmann and van De Geer (2011), and can also be found in the original version of the adapt4pv R package (Courtois et al., 2021). Consequently, many statistical analysts still seem to apply this naive and improper cross‐validation scheme for calibrating the adaptive lasso (Chang et al., 2020; Dessie et al., 2021; Pollard et al., 2021).…”
Section: Introduction
confidence: 99%
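The fix implied by this criticism is to recompute the initial estimates, and hence the weights, inside each training fold, so that no information from the held-out fold enters the weight vector. A per-fold sketch under the same illustrative assumptions (linear model, OLS initial estimates, gamma = 1, Python/scikit-learn standing in for glmnet; the penalty grid and all names are hypothetical):

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = X @ np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3)) + rng.standard_normal(n)

lambdas = np.logspace(-3, 1, 30)
cv_err = np.zeros_like(lambdas)

# Proper scheme: weights are re-derived on each training fold, so the
# held-out observations never influence the penalty structure.
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    beta_init = LinearRegression().fit(X[train], y[train]).coef_
    w = 1.0 / np.abs(beta_init)
    for i, lam in enumerate(lambdas):
        m = Lasso(alpha=lam, max_iter=10_000).fit(X[train] / w, y[train])
        cv_err[i] += np.mean((y[test] - m.predict(X[test] / w)) ** 2)

best_lam = lambdas[np.argmin(cv_err)]

# Refit on the full sample at the selected penalty, with weights from
# full-sample initial estimates, and map back to the original scale.
beta_init = LinearRegression().fit(X, y).coef_
w = 1.0 / np.abs(beta_init)
final = Lasso(alpha=best_lam, max_iter=10_000).fit(X / w, y)
beta_adaptive = final.coef_ / w
```

The extra cost over the naive scheme is one initial fit per fold, which is usually negligible next to the lasso path itself.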