2017
DOI: 10.1177/0013164417728358
Item-Score Reliability in Empirical-Data Sets and Its Relationship With Other Item Indices

Abstract: Reliability is usually estimated for a total score, but it can also be estimated for item scores. Item-score reliability can be useful to assess the repeatability of an individual item score in a group. Three methods to estimate item-score reliability are discussed, known as method MS, method λ6, and method CA. The item-score reliability methods are compared with four well-known and widely accepted item indices, which are the item-rest correlation, the item-factor loading, the item scalability, and the item di…


Cited by 71 publications (54 citation statements) | References 57 publications
“…In line with previous findings, scale length, number of response categories, and selection rates also had an effect on the outcome variables (e.g., Crişan et al, 2017 ; Zijlmans et al, 2018 ). The item scalability coefficient is equivalent to a normed item-rest correlation, which, in turn, is used as an index of item-score reliability (e.g., Zijlmans et al, 2018 ). Therefore, it is not surprising that overall scale reliability decreased as the item scalability coefficients decreased.…”
Section: Discussion (supporting)
confidence: 90%
“…The item-rest correlations of 11 raters who provided masculinity ratings and 10 raters who provided femininity ratings were below the cut-off of r = .20 (Zijlmans et al., 2018); hence, their ratings were excluded from the analyses. Intraclass correlation coefficient analysis revealed good inter-rater agreement (r = .84, p < .001) and high internal consistency (Cronbach's α = .90) of the femininity ratings of the female composite faces.…”
Section: Results (mentioning)
confidence: 99%
“…Item-rest correlation analyses were used to identify any raters whose ratings correlated poorly with the rest of the raters. All correlations were above the recommended cut-off of r = .20 (Zijlmans, Tijmstra, van der Ark, & Sijtsma, 2018), and none was excluded from subsequent analyses. Intraclass correlation coefficient analysis revealed excellent inter-rater agreement in the femininity ratings of the female faces (r = .97, p < .001) and high internal consistency (Cronbach's α = .98).…”
Section: Statistical Analyses (mentioning)
confidence: 99%
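The screening procedure the citation statements describe (computing each rater's item-rest correlation, dropping raters below the r = .20 cutoff, and reporting Cronbach's alpha for internal consistency) can be sketched as follows. This is a minimal illustration on simulated data; the variable names, helper functions, and simulated ratings are assumptions, not the cited studies' actual code.

```python
# Illustrative sketch (assumed names and simulated data): item-rest
# correlation screening with the r = .20 cutoff from Zijlmans et al. (2018),
# plus Cronbach's alpha for internal consistency.
import numpy as np

def item_rest_correlations(ratings):
    """ratings: (n_faces, n_raters) matrix. Returns each rater's Pearson
    correlation with the 'rest' score (sum of all other raters' ratings)."""
    total = ratings.sum(axis=1)
    corrs = []
    for j in range(ratings.shape[1]):
        rest = total - ratings[:, j]
        corrs.append(np.corrcoef(ratings[:, j], rest)[0, 1])
    return np.array(corrs)

def cronbach_alpha(ratings):
    """Cronbach's alpha, treating raters as 'items' (columns)."""
    k = ratings.shape[1]
    item_var_sum = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Simulated ratings: 8 raters scoring 100 faces, each rating = shared
# signal + individual noise, so raters agree reasonably well.
rng = np.random.default_rng(0)
signal = rng.normal(size=100)
ratings = signal[:, None] + rng.normal(scale=0.5, size=(100, 8))

irc = item_rest_correlations(ratings)
keep = irc >= 0.20          # raters below the cutoff would be excluded
alpha = cronbach_alpha(ratings)
```

With raters sharing a strong common signal, all item-rest correlations clear the .20 cutoff and alpha is high; a rater whose column were pure noise would fall below the cutoff and be flagged for exclusion, mirroring the screening step in the cited studies.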