2013
DOI: 10.5705/ss.2011.058

Model selection for correlated data with diverging number of parameters

Abstract: High-dimensional longitudinal data arise frequently in biomedical and genomic research. It is important to select relevant covariates when the dimension of the parameters diverges as the sample size increases. We propose the penalized quadratic inference function to perform model selection and estimation simultaneously in the framework of a diverging number of regression parameters. The penalized quadratic inference function can easily take correlation information from clustered data into account, yet it does …
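For readers unfamiliar with the method, a brief sketch of the quadratic inference function (QIF) to which the penalty is added may help; the construction below follows the standard formulation of Qu, Lindsay and Li (2000), and the notation (basis matrices $M_1, \dots, M_K$, marginal mean $\mu_i$, variance matrix $A_i$) is a conventional choice rather than a quotation from this paper:

$$
g_i(\beta) =
\begin{pmatrix}
\dot\mu_i^{T} A_i^{-1/2} M_1 A_i^{-1/2} (y_i - \mu_i) \\
\vdots \\
\dot\mu_i^{T} A_i^{-1/2} M_K A_i^{-1/2} (y_i - \mu_i)
\end{pmatrix},
\qquad
\bar g_N(\beta) = \frac{1}{N} \sum_{i=1}^{N} g_i(\beta),
$$

$$
Q_N(\beta) = N \, \bar g_N(\beta)^{T} C^{-1} \bar g_N(\beta),
\qquad
C = \frac{1}{N} \sum_{i=1}^{N} g_i(\beta)\, g_i(\beta)^{T},
$$

where the basis matrices $M_1, \dots, M_K$ represent the inverse of the working correlation matrix $R$ as a linear combination, so no nuisance correlation parameters need to be estimated.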

Cited by 27 publications (67 citation statements: 5 supporting, 62 mentioning, 0 contrasting)
References 33 publications
“…As n → ∞, the size of nonzero parameters detectable by the procedure can approach zero, but at a slower rate than the tuning parameter. This condition is required for the derivation of the asymptotic properties of the proposed procedure, and has been assumed by many authors (e.g., Fan & Peng, 2004; Wang et al., 2009; Cho & Qu, 2013; Fan & Tang, 2013). In real-world biomedical research, there usually exists a fixed minimum clinically important effect size.…”
Section: Variable Selection With a Penalized Pseudo-partial Likelihood
confidence: 99%
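The condition quoted above can be stated compactly. A hedged formalization (the symbols $\mathcal{A}$, $\beta_{0j}$, and $\lambda_n$ are my own shorthand, not notation taken from the cited papers):

$$
\min_{j \in \mathcal{A}} |\beta_{0j}| \to 0 \ \text{ is permitted, provided }\ \frac{\min_{j \in \mathcal{A}} |\beta_{0j}|}{\lambda_n} \to \infty \ \text{ as } n \to \infty,
$$

where $\mathcal{A}$ indexes the truly nonzero coefficients and $\lambda_n \to 0$ is the tuning parameter: the smallest detectable signal may shrink, but more slowly than $\lambda_n$.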
“…In such cases, a transformation matrix $H_i$ can be applied for each subject to fit the pQIF model [26]. For each fully observed individual without missing data, $H_i$ is the $m \times m$ identity matrix for the $i$th subject, where $m$ is the total number of repeated time points.…”
Section: Methods
confidence: 99%
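A small sketch may clarify how such a transformation matrix behaves. Everything here is hypothetical illustration (the function name, the boolean observed mask, and the convention of zeroing unobserved rows are my assumptions; the cited paper defines the exact construction):

```python
import numpy as np

def transformation_matrix(observed, m):
    """Hypothetical sketch of a per-subject transformation matrix H_i.

    `observed` flags which of the m scheduled time points were measured
    for subject i. For a fully observed subject this returns the m x m
    identity matrix, matching the description above; rows for
    unobserved visits are simply zeroed (an assumed convention).
    """
    H = np.eye(m)
    H[~np.asarray(observed, dtype=bool), :] = 0.0
    return H

# A subject with 5 scheduled visits who missed visit 3:
print(transformation_matrix([True, True, False, True, True], m=5))
# A fully observed subject recovers the identity matrix:
print(transformation_matrix([True] * 5, m=5))
```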
“…For variable selection, a penalty can be added to the QIF, defined as:
$$\mathrm{PQIF}(\hat\theta, R;\, y, P_\lambda) = N\, \bar g_N^{T} C^{-1} \bar g_N + N \sum_{j=1}^{q} P_\lambda(\theta_j),$$
where $C = (1/N)\sum_{i=1}^{N} g_i g_i^{T}$ is the sample covariance of $g_i$ and $P_\lambda$ is the penalty function. We follow Cho and Qu (2013) by choosing the nonconvex SCAD penalty for $P_\lambda$ and use a local quadratic approximation algorithm to carry out the estimation and variable selection. The PQIF leads to some very nice variable selection properties provided that $q$ increases as $N$ increases.…”
Section: Methods
confidence: 99%
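As a concrete illustration of the SCAD penalty and the local quadratic approximation (LQA) update mentioned above, here is a schematic sketch. Only the SCAD derivative and the general LQA recipe of Fan and Li (2001) are standard; the function names, the caller-supplied `grad_q`/`hess_q` (gradient and Hessian of the unpenalized QIF), and the small constant `eps` are my assumptions, not the authors' code:

```python
import numpy as np

A_SCAD = 3.7  # value suggested by Fan and Li (2001)

def scad_deriv(t, lam, a=A_SCAD):
    """Derivative p'_lambda(t) of the SCAD penalty, for t >= 0."""
    t = np.abs(t)
    return lam * ((t <= lam)
                  + np.maximum(a * lam - t, 0.0) / ((a - 1.0) * lam) * (t > lam))

def lqa_step(theta, grad_q, hess_q, lam, n_subjects, eps=1e-6):
    """One LQA Newton-type update for the penalized QIF.

    The nonconvex penalty is replaced by a quadratic centered at the
    current iterate, yielding a ridge-like linear system; coefficients
    driven near zero can then be truncated to exactly zero between
    iterations.
    """
    # Sigma_lambda = diag{ p'_lambda(|theta_j|) / |theta_j| }
    d = scad_deriv(theta, lam) / (np.abs(theta) + eps)
    sigma = n_subjects * np.diag(d)
    step = np.linalg.solve(hess_q + sigma, grad_q + sigma @ theta)
    return theta - step
```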
“…Several alternative criteria have since been proposed; these include the quasi-likelihood information criterion (QIC; Pan, 2001) and its variants (Hin and Wang, 2009), the generalized Mallows' Cp (GCp; Cantoni, Flemming, and Ronchetti, 2005), and a BIC using the quadratic inference function (QIF) approach (Wang and Qu, 2009). More recently, model selection criteria have been proposed using penalized GEEs (Wang, Zhou, and Qu, 2012) and penalized QIFs (Cho and Qu, 2013), where both methods use the smoothly clipped absolute deviation (SCAD) penalty.…”
Section: Introduction
confidence: 99%