2006
DOI: 10.1016/j.jclinepi.2005.10.015

When to use agreement versus reliability measures

Abstract: If the research question concerns the distinction of persons, reliability parameters are the most appropriate. But if the aim is to measure change in health status, which is often the case in clinical practice, parameters of agreement are preferred.
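A minimal sketch of the two parameter families, in standard variance-components notation for a test-retest design (the symbols $\sigma^2_{persons}$ and $\sigma^2_{error}$ are illustrative assumptions, not taken from this page):

$$\text{reliability (ICC)} = \frac{\sigma^2_{persons}}{\sigma^2_{persons} + \sigma^2_{error}}, \qquad \text{agreement (SEM)} = \sqrt{\sigma^2_{error}}$$

Reliability is relative to the heterogeneity of the sample: the same measurement error gives a high ICC in a diverse group and a low ICC in a homogeneous one. The SEM, by contrast, expresses the error in the units of the instrument itself, which is what matters when measuring change within a person.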


Cited by 1,390 publications (1,333 citation statements). References: 9 publications.
“…The function “lmer” was used for dichotomous, ordinal, and continuous variables. Odds ratios (OR) were calculated and significance was set using Fisher's exact test and the package “epicalc” [27]. The intraclass correlation coefficient (ICC) was calculated as defined in formulas 1 to 4 (Data S1), all redefined [4, 28, 29]…”
Section: Methods (confidence: 99%)
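The quoted methods use R ("lmer", "epicalc"). As a language-neutral illustration of how an ICC and SEM are typically computed from repeated measurements, here is a minimal self-contained Python sketch; the helper name icc_agreement and the toy data are hypothetical, and this is one common variance-components formulation, not the cited paper's exact formulas 1 to 4:

import numpy as np

def icc_agreement(x):
    """Two-way ICC (absolute agreement) and SEM from an
    n_subjects x k_occasions matrix of repeated measurements."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)             # between-subject mean square
    ms_cols = ss_cols / (k - 1)             # between-occasion mean square
    ms_err = ss_err / ((n - 1) * (k - 1))   # residual mean square
    var_p = max((ms_rows - ms_err) / k, 0.0)  # subject variance
    var_o = max((ms_cols - ms_err) / n, 0.0)  # systematic occasion variance
    var_e = ms_err                            # random error variance
    icc = var_p / (var_p + var_o + var_e)     # reliability parameter
    sem = np.sqrt(var_o + var_e)              # agreement parameter
    return icc, sem

# Toy test-retest data: 5 subjects measured on 2 occasions.
scores = np.array([[10., 11.], [12., 12.], [15., 14.], [20., 19.], [8., 9.]])
icc, sem = icc_agreement(scores)
print(f"ICC = {icc:.2f}, SEM = {sem:.2f}")  # ICC = 0.97, SEM = 0.71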
“…Examinations should therefore have good reliability (how well patients can be distinguished from each other), high agreement (low measurement error for repeated assessments) [4], and results of the examination should be valid (accurate) [5]. Inaccurate or unreliable assessment of an underlying problem might lead to misdiagnosis or inappropriate further testing and treatments.…”
(confidence: 99%)
“…The ideal scenario would be that the SDC is smaller than MIC [8], but unfortunately these values were not available in the literature.…”
Section: Reproducibility Thickness (confidence: 99%)
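For context on the SDC/MIC comparison in this quote: the smallest detectable change is conventionally derived from the SEM (a standard formula, not quoted from this page),

$$SDC = 1.96 \cdot \sqrt{2} \cdot SEM,$$

so a measured change can only be distinguished from measurement error when it exceeds the SDC; ideally an instrument is precise enough that SDC < MIC (minimal important change).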
“…Agreement assesses how close the results of repeated measurements are, by estimating the measurement error in repeated measurements. Reliability assesses whether study subjects could be distinguished from each other, despite measurement errors [8,31].…”
Section: Introduction (confidence: 99%)
“…Through this relationship, kappa coefficients depend on the prevalence of the trait under study, which limits the possibility to compare them among studies with different prevalence. Several authors (Thompson and Walter, 1988; Feinstein and Cicchetti, 1990; Cicchetti and Feinstein, 1990; Byrt et al ., 1993; de Vet et al ., 2006) proposed the use of absolute agreement measures (e.g. the proportion of items classified in the same category by the two observers) to avoid that dependency.…”
Section: Introduction (confidence: 99%)
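To make the prevalence dependence concrete, here is a short self-contained Python sketch (the kappa helper and the two example tables are hypothetical illustrations, not from any cited paper). Both 2x2 rater tables below have the same observed proportion agreement, 0.85, yet kappa drops from about 0.70 to about 0.04 when the trait prevalence is heavily skewed, because the chance agreement p_e rises toward p_o:

import numpy as np

def kappa(table):
    """Cohen's kappa and observed agreement for a square contingency table."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_o = np.trace(t) / n                                  # observed agreement
    p_e = (t.sum(axis=1) * t.sum(axis=0)).sum() / n ** 2   # chance agreement
    return (p_o - p_e) / (1.0 - p_e), p_o

balanced = [[45, 10], [5, 40]]  # ~50% prevalence, p_o = 0.85
skewed = [[84, 8], [7, 1]]      # ~90% prevalence, p_o = 0.85
for name, tab in [("balanced", balanced), ("skewed", skewed)]:
    k, p_o = kappa(tab)
    print(f"{name}: observed agreement = {p_o:.2f}, kappa = {k:.2f}")

This is exactly the dependency the quote describes: absolute agreement measures such as p_o stay comparable across studies, while kappa does not.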