2016
DOI: 10.1007/s11336-016-9529-6
Latent Variable Selection for Multidimensional Item Response Theory Models via $$L_{1}$$ Regularization

Abstract: We develop a latent variable selection method for multidimensional item response theory models. The proposed method identifies the latent traits probed by the items of a multidimensional test. Its basic strategy is to impose an $L_{1}$ penalty term on the log-likelihood. The computation is carried out by the expectation-maximization algorithm combined with the coordinate descent algorithm. Simulation studies show that the resulting estimator provides an effective way of correctly identifying the latent structure.
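To illustrate the coordinate-descent machinery the abstract refers to, here is a minimal sketch of the soft-thresholding update on a plain lasso problem, which is the core building block of coordinate descent for $L_1$-penalized objectives. The function names, the quadratic loss, and all parameter choices are illustrative simplifications, not the paper's actual penalized M-step:

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form solution of min_x 0.5*(x - z)^2 + lam*|x|,
    the elementary update inside coordinate descent for L1 penalties."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate descent for the lasso:
    min_b 0.5/n * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n        # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ b + X[:, j] * b[j]
            z = X[:, j] @ r / n
            b[j] = soft_threshold(z, lam) / col_sq[j]
    return b
```

Because the threshold sets small coordinates to exactly zero, the solution is automatically sparse; this is the same mechanism that, in the paper's setting, zeroes out item-trait slopes.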

Cited by 48 publications (100 citation statements). References 24 publications.
“…Second, even after applying rotational methods, the obtained factor loading matrix may not be simple (i.e., sparse) enough for a good interpretation. To better pursue a simple loading structure, it may be helpful to further add $L_1$ regularization of the factor loading parameters (Sun et al., 2016) into the current optimization program for CJMLE, under which the estimated factor loading matrix is automatically sparse and thus no post-hoc rotation is needed.…”
Section: Discussion
confidence: 99%
“…Thanks to the simple procedure of the StEM algorithm, the algorithm can be generalized to solve many other problems. In particular, an StEM algorithm can be used to solve the optimization for the $L_1$-regularized estimator for exploratory IFA (Sun, Chen, Liu, Ying, & Xin, 2016). Specifically, in the exploratory IFA setting, no Q-matrix is pre-specified and thus no constraint is imposed on the slope parameters $a_{jk}$.…”
Section: Discussion
confidence: 99%
“…To impose a simple structure on the slopes, Sun et al. (2016) propose an $L_1$-regularized maximum likelihood estimator, under which many of the $a_{jk}$ are estimated to be zero. In other words, the $L_1$-regularized estimator automatically rotates the factors to achieve a sparse slope structure.…”
Section: Discussion
confidence: 99%
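The excerpt above notes that the $L_1$-regularized estimator drives many slopes $a_{jk}$ to exactly zero. A hedged way to see this mechanism is to treat a single item's penalized fit as an $L_1$-penalized logistic regression of binary responses on latent-trait values, solved here by proximal gradient. Treating the traits as known draws, the solver choice, and the function name are all simplifying assumptions for illustration, not the estimator described by Sun et al.:

```python
import numpy as np

def l1_logistic_prox(theta, y, lam, step=0.1, n_iter=500):
    """Sketch of one item's penalized fit in a 2PL-style model:
    P(y_i = 1) = sigmoid(a @ theta_i + d), with an L1 penalty on the
    slope vector a (the intercept d is left unpenalized).
    theta: (n, K) latent-trait values, treated as known draws here."""
    n, K = theta.shape
    a = np.zeros(K)
    d = 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta @ a + d)))   # predicted probabilities
        grad_a = theta.T @ (p - y) / n               # logistic-loss gradient
        grad_d = np.mean(p - y)
        z = a - step * grad_a
        # proximal step = soft-thresholding: zeroes weak slopes exactly
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        d -= step * grad_d
    return a, d
```

With data in which only the first trait drives responses, the penalized fit leaves the irrelevant slope at exactly zero rather than merely small, which is why no post-hoc rotation is needed to read off the loading structure.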
“…The Λ-matrix is usually provided by the item designers and is often assumed to be known. When information about the Λ-matrix is vague, data-driven approaches for learning the Λ-matrix have been proposed (Liu et al., 2012, 2013; Chen et al., 2015a, 2015b; Sun et al., 2016; Liu, 2017).…”
Section: Introduction
confidence: 99%