2014
DOI: 10.3389/fpsyg.2014.00509
How to assess and compare inter-rater reliability, agreement and correlation of ratings: an exemplary analysis of mother-father and parent-teacher expressive vocabulary rating pairs

Abstract: This report has two main purposes. First, we combine well-known analytical approaches to conduct a comprehensive assessment of agreement and correlation of rating pairs and to disentangle these often confused concepts, providing a best-practice example on concrete data and a tutorial for future reference. Second, we explore whether a screening questionnaire developed for use with parents can be reliably employed with daycare teachers when assessing early expressive vocabulary. A total of 53 vocabulary rating …

Cited by 69 publications (63 citation statements)
References 43 publications
“…This is variously called multi-source feedback, 360° feedback or inter-rater reliability (Stolarova et al, 2014). A wide variety of issues have been considered such as the duration of acquaintance, observer type (peers, boss, subordinate), personality of the raters etc.…”
Section: Introduction
confidence: 99%
“…Two coders separately studied the degree of agreement in the items dimension assigned and intercoder reliability (Nimon et al, 2012; Stolarova et al, 2014) was studied by calculating Cohen's κ (Cohen, 1960). Any disagreements were resolved by consensus.…”
Section: Methods
confidence: 99%
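The excerpt above describes two coders rating items independently, with intercoder reliability quantified by Cohen's κ (Cohen, 1960): the proportion of observed agreement corrected for the agreement expected by chance from each rater's marginal category frequencies. A minimal sketch of that calculation follows; the rating data is purely illustrative and not taken from the cited study.

```python
# Minimal sketch of Cohen's kappa for two raters over nominal categories.
# kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected is
# chance agreement derived from each rater's marginal category counts.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of items both raters labeled identically.
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement: product of marginal proportions, summed per category.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    p_expected = sum(counts_a[c] * counts_b[c] for c in categories) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Illustrative ratings for 8 items by two hypothetical coders.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(rater_a, rater_b))  # -> 0.5
```

Here the two coders agree on 6 of 8 items (p_observed = 0.75) while chance agreement is 0.5, giving κ = 0.5; disagreements would then be resolved by consensus, as the excerpt notes.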
“…This was confirmed by the two-way random effects model of intraclass correlation coefficient (ICC) which is used to estimate the agreement on continuous scores between observers where perfect agreement is denoted by 1 and 0 establishes no significant agreement under the null hypothesis (Supplementary Table 2). [32][33][34][35] The ICC values obtained are above the 0.7 threshold for acceptable reliability as defined for ICC for group comparisons by the ISPOR (International Society for Pharmacoeconomics and Outcomes Research) task force. 36 Statistics were computed using the SPSS (IBM SPSS, Chicago, IL).…”
Section: Discussion
confidence: 99%
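The two-way random effects ICC mentioned in the excerpt can be computed from an ANOVA decomposition of the subjects-by-raters score matrix. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater) from the mean squares; the formula and threshold interpretation follow standard ICC references, and the example data is illustrative rather than drawn from the cited study.

```python
# Sketch of ICC(2,1): two-way random effects, absolute agreement, single rater.
# ICC = (MSR - MSE) / (MSR + (k-1)*MSE + k*(MSC - MSE)/n)
# where MSR/MSC/MSE are the mean squares for rows (subjects), columns
# (raters), and residual error from a two-way ANOVA without replication.
import numpy as np

def icc2_1(scores):
    """scores: (n_subjects, k_raters) array of continuous ratings."""
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Illustrative scores: 4 subjects rated by 2 raters with close agreement.
scores = np.array([[1.0, 1.2], [2.0, 2.1], [3.0, 2.9], [4.0, 4.2]])
print(icc2_1(scores))
```

With identical ratings the function returns 1 (perfect agreement); values above the 0.7 threshold cited from the ISPOR task force would be read as acceptable reliability for group comparisons.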