2018
DOI: 10.21449/ijate.377138
Effects of Various Simulation Conditions on Latent-Trait Estimates: A Simulation Study

Abstract: The aim of this simulation study was to determine the relationship between true latent scores and estimated latent scores by including various control variables and different statistical models. The study also aimed to compare the statistical models and to determine the effects of different distribution types, response formats, and sample sizes on latent score estimations. 108 different data sets, comprising three distribution types (positively skewed, normal, negatively skewed), three response formats (thr…
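The simulation cells described in the abstract (trait distribution × response format × sample size) can be illustrated with a minimal sketch of graded response model (GRM) data generation. This is not the authors' code; the item parameters, the 10-item test length, and the 5-category format are illustrative assumptions, and only the boundary-curve logic follows the standard GRM formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_grm(theta, a, b):
    """Simulate graded response model item responses.

    theta : (n_persons,) latent traits
    a     : (n_items,) discrimination parameters
    b     : (n_items, n_cats - 1) ordered category thresholds
    Returns an (n_persons, n_items) array of responses in 0..n_cats-1.
    """
    n_persons, n_items = theta.shape[0], a.shape[0]
    # boundary curves: P(X >= k | theta) for k = 1..n_cats-1
    z = a[None, :, None] * (theta[:, None, None] - b[None, :, :])
    p_ge = 1.0 / (1.0 + np.exp(-z))          # (persons, items, cats-1)
    # category probabilities: P(X = k) = P(X >= k) - P(X >= k+1)
    ones = np.ones((n_persons, n_items, 1))
    zeros = np.zeros((n_persons, n_items, 1))
    cum = np.concatenate([ones, p_ge, zeros], axis=2)
    probs = cum[:, :, :-1] - cum[:, :, 1:]   # (persons, items, cats)
    # sample one category per person-item cell via the inverse CDF
    u = rng.random((n_persons, n_items, 1))
    return (u > np.cumsum(probs, axis=2)).sum(axis=2)

# one hypothetical cell: normal trait distribution, 5 categories, n = 500
theta = rng.normal(size=500)
a = rng.uniform(1.0, 2.0, size=10)
b = np.sort(rng.normal(size=(10, 4)), axis=1)  # thresholds must be ordered
responses = simulate_grm(theta, a, b)
```

A skewed-distribution cell would replace the normal draw for `theta` with, for example, a standardized chi-square draw; the other factors of the design vary `b.shape[1] + 1` (number of response categories) and the sample size.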

Cited by 2 publications (1 citation statement)
References 14 publications
“…Fourth, the number of response categories (5) used in this study did not seem to affect the accuracy of ability or item information estimates, given the good fit of the model, the acceptable item discrimination values, and the large sample size. This is supported by the findings of several studies (Hauck Filho et al., 2014; Koğar, 2018; Maydeu-Olivares et al., 2009) that investigated the effects of the number of response categories (3, 5, and 7) in rating scales, response distribution and difficulty level (moderate, too easy, or too difficult for the sample assessed), and sample size on the accuracy of latent score estimations within the IRT-GRM framework. They reported that ability estimates were good and stable under different conditions, and that the estimates became more accurate as the sample size increased (500 and more).…”
Section: Discussion (supporting)
confidence: 59%