2019
DOI: 10.1214/19-ejs1562

Inference under Fine-Gray competing risks model with high-dimensional covariates

Abstract: The purpose of this paper is to construct confidence intervals for the regression coefficients in the Fine-Gray model for competing risks data with random censoring, where the number of covariates can be larger than the sample size. Despite strong motivation from biomedical applications, the high-dimensional Fine-Gray model has attracted relatively little attention in the methodological or theoretical literature. We fill in this gap by developing confidence intervals based on a one-step bias-correction for a …
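The one-step bias-correction mentioned in the abstract is the de-sparsified (debiased) lasso idea: take an initial lasso fit, then add a correction term built from an estimated precision matrix applied to the score. A minimal sketch of this idea on a *linear* model follows; this is a simplification for illustration, not the paper's estimator (which applies the correction to the Fine-Gray weighted partial-likelihood score), and the identity surrogate for the precision matrix is an assumption made here for brevity.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Simulated high-dimensional-style linear regression (illustration only).
rng = np.random.default_rng(0)
n, p = 200, 50
beta_true = np.zeros(p)
beta_true[:3] = [1.0, -1.0, 0.5]
X = rng.standard_normal((n, p))
y = X @ beta_true + rng.standard_normal(n)

# Step 1: initial lasso estimate.
beta_hat = Lasso(alpha=0.1, fit_intercept=False).fit(X, y).coef_

# Step 2: one-step bias correction.  The identity matrix stands in for the
# precision-matrix estimate Theta (in practice Theta is estimated, e.g. by
# nodewise lasso regressions).
Theta = np.eye(p)
score = X.T @ (y - X @ beta_hat) / n          # gradient of the squared loss
beta_debiased = beta_hat + Theta @ score      # debiased (de-sparsified) estimate

# Step 3: componentwise 95% confidence intervals from a plug-in variance.
sigma2 = np.mean((y - X @ beta_hat) ** 2)     # naive noise-variance estimate
se = np.sqrt(sigma2 * np.diag(Theta @ (X.T @ X / n) @ Theta.T) / n)
ci_lower = beta_debiased - 1.96 * se
ci_upper = beta_debiased + 1.96 * se
```

Unlike the raw lasso coefficients, the corrected estimate is no longer sparse, but each component is approximately normal, which is what makes the interval construction valid.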

Cited by 5 publications (5 citation statements). References 36 publications.
“…Although penalized proportional subdistribution hazards models have been proposed in previous studies, this study considered more scenarios with more penalties. Other works considered only low dimensions with different penalties [32] or high dimensions with a few L1 penalties with no more than 1000 variables [26, 27]. Our findings revealed that the sensitivity of all penalties was comparable, but the MCP and MCP-L2 penalties outperformed the other methods in terms of selecting fewer noninformative variables.…”
Section: Discussion
confidence: 99%
“…(5) The SCAD-L2 (Zeng and Xie 2020 [36]) and MCP-L2 penalties, where an L2 penalty is appended to the SCAD and MCP penalties to induce a grouping effect in variable selection. Asymptotic properties of penalized estimators in different contexts have been investigated by different studies, and all the above penalties have been shown to enjoy the oracle property [26, 27, 32], i.e., these penalties are consistent in variable selection, and their estimators are asymptotically normal and unbiased. More explicitly, they perform as well as if the true model were known in advance.…”
Section: Penalized Weighted Nonparametric Maximum
confidence: 99%
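For reference, the MCP and SCAD penalties discussed in the quoted statement have the following standard forms (a sketch of the usual definitions, not quoted from the cited works):

```latex
% MCP with regularization parameter \lambda and concavity parameter \gamma > 1:
p_{\lambda,\gamma}(t) =
\begin{cases}
  \lambda\lvert t\rvert - \dfrac{t^2}{2\gamma}, & \lvert t\rvert \le \gamma\lambda,\\[4pt]
  \dfrac{\gamma\lambda^2}{2},                   & \lvert t\rvert > \gamma\lambda.
\end{cases}
% SCAD is defined through its derivative, with a > 2 (commonly a = 3.7):
p'_{\lambda}(t) = \lambda\left\{ \mathbb{1}(\lvert t\rvert \le \lambda)
  + \frac{(a\lambda - \lvert t\rvert)_{+}}{(a-1)\lambda}\,
    \mathbb{1}(\lvert t\rvert > \lambda) \right\}.
% The "-L2" variants append a ridge term (\alpha/2)\,\lVert \beta \rVert_2^2
% to the penalty to induce a grouping effect among correlated covariates.
```

Both penalties taper off for large coefficients, which is what removes the bias of the L1 penalty on strong signals and yields the oracle property the quote refers to.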
“…With high-dimensional predictors, several authors (Kawaguchi et al. 2019, Ha et al. 2014, Ahn et al. 2018) proposed regularized subdistribution hazard models for variable selection, and Hou et al. (2019) further performed inference using a one-step debiased lasso estimator. For prediction, several deep learning methods for competing risks have been proposed based on CIFs.…”
Section: Competing Risks
confidence: 99%
“…In fact, model (7) implies that $1 - F_c(t \mid X_i) = \{1 - F_{0c}(t)\}^{\exp(X_i^\top \beta)}$, where $F_c(t \mid X_i)$ and $F_{0c}(t)$ are the CIF given $X_i$ and the baseline CIF, respectively. With high-dimensional predictors, several authors (Kawaguchi et al. 2019, Ha et al. 2014, Ahn et al. 2018) proposed regularized subdistribution hazard models for variable selection, and Hou et al. (2019) further performed inference using a one-step debiased LASSO estimator. For prediction, several deep learning methods for competing risks have been proposed based on CIFs.…”
Section: Time
confidence: 99%
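The identity quoted above follows directly from the proportional subdistribution hazards assumption; a short derivation in standard Fine-Gray notation (not quoted from the cited works):

```latex
% Fine-Gray assumes a proportional subdistribution hazard,
%   \lambda_c(t \mid X_i) = \lambda_{0c}(t)\exp(X_i^\top \beta),
% and the CIF satisfies F_c(t \mid X_i) = 1 - \exp\{-\Lambda_c(t \mid X_i)\},
% where \Lambda_c(t \mid X_i) = \int_0^t \lambda_c(s \mid X_i)\,ds.  Hence
1 - F_c(t \mid X_i)
  = \exp\{-\Lambda_{0c}(t)\exp(X_i^\top \beta)\}
  = \bigl[\exp\{-\Lambda_{0c}(t)\}\bigr]^{\exp(X_i^\top \beta)}
  = \{1 - F_{0c}(t)\}^{\exp(X_i^\top \beta)}.
```

Setting $X_i = 0$ recovers the baseline relation $1 - F_{0c}(t) = \exp\{-\Lambda_{0c}(t)\}$ used in the last step.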