2017
DOI: 10.4103/picr.picr_123_17

Common pitfalls in statistical analysis: Measures of agreement

Abstract: Agreement between measurements refers to the degree of concordance between two (or more) sets of measurements. Statistical methods to test agreement are used to assess inter-rater variability or to decide whether one technique for measuring a variable can substitute for another. In this article, we look at statistical measures of agreement for different types of data and discuss the differences between these and those for assessing correlation.
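The distinction the abstract draws between agreement and correlation can be seen with a minimal sketch (not taken from the article; the data and variable names below are hypothetical): two measurement methods that differ by a roughly constant offset are almost perfectly correlated yet do not agree, which a Bland-Altman style bias and limits of agreement make visible.

```python
# Minimal sketch (hypothetical data): correlation does not imply agreement.
# Method B reads about 10 units higher than method A on every subject.
import numpy as np

a = np.array([10.0, 12.0, 15.0, 20.0, 25.0])                 # method A
b = a + 10.0 + np.array([0.5, -0.3, 0.2, -0.4, 0.1])          # method B: offset plus small noise

r = np.corrcoef(a, b)[0, 1]          # Pearson correlation, close to 1 despite the bias
diff = b - a
bias = diff.mean()                   # Bland-Altman mean difference (systematic bias)
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement

print(f"Pearson r = {r:.3f}, bias = {bias:.2f}, limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f})")
```

Here the correlation is near 1 even though the two methods never give the same value, which is the pitfall the article warns against when correlation is reported as evidence of agreement.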

Cited by 321 publications (224 citation statements) | References 5 publications
“…The inter-rater agreement 49 was moderate for CD90 (Cohen's kappa = 0.506) and PDGFRb (kappa = 0.599), substantial for ASMA (kappa = 0.663) and PDGFRa (kappa = 0.620) and excellent for FAP (kappa = 0.890) and CD8a (kappa = 0.809) (Supplementary Figs. 1 and 2).…”
Section: Results (mentioning; confidence: 99%)
“…To assess the degree that raters provided consistency in their scorings the inter-rater agreement was calculated with Cohen's kappa (squared weightage) 49 using the irr package for R. For subsequent analyses, the average score between the two raters was calculated resulting in a data range from 0 to 9 with 0.5 intervals (see also Supplementary Figs. 1 and 2).…”
Mentioning; confidence: 99%
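The excerpt above describes a quadratic- ("squared") weighted Cohen's kappa for two raters scoring an ordinal scale. A minimal sketch of the same kind of calculation (assumed, not the citing study's actual code or data; analogous to irr::kappa2(..., weight = "squared") in R) is:

```python
# Minimal sketch (hypothetical ordinal scores from two raters):
# quadratic-weighted Cohen's kappa penalizes large disagreements more
# heavily than adjacent-category disagreements.
from sklearn.metrics import cohen_kappa_score

rater1 = [0, 1, 2, 2, 3, 4, 4, 5]
rater2 = [0, 1, 1, 2, 3, 3, 4, 5]

kappa_w = cohen_kappa_score(rater1, rater2, weights="quadratic")
print(f"Quadratic-weighted kappa = {kappa_w:.3f}")
```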
“…We used Fleiss's kappa statistics (MAGREE.SAS) to examine the agreement amongst all observers (interoperator agreement). 13,14 We defined the level of agreement according to kappa value as follows: poor agreement, k ≤ 0; slight agreement, k 0 to 0.2; fair agreement, k 0.2 to 0.4; moderate agreement, k 0.4 to 0.6; substantial agreement, k 0.6 to 0.8; almost perfect agreement, k 0.8 to 1. SEs and 95% confidence intervals (CI) for the stomach position grades are also presented.…”
Section: Discussion (mentioning; confidence: 99%)
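This excerpt uses Fleiss' kappa for more than two raters and the commonly used interpretation bands it lists. A minimal sketch (assumed; not the citing study's SAS macro MAGREE.SAS or its data) of the same computation and labelling in Python is:

```python
# Minimal sketch (hypothetical data): Fleiss' kappa for multiple raters,
# with the agreement bands quoted in the excerpt above.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = subjects, columns = raters, values = assigned category
ratings = np.array([
    [1, 1, 1],
    [1, 2, 1],
    [2, 2, 2],
    [3, 3, 2],
    [1, 1, 2],
])

table, _ = aggregate_raters(ratings)        # subjects x categories count table
kappa = fleiss_kappa(table, method="fleiss")

def interpret(k: float) -> str:
    """Map a kappa value to the agreement labels used in the excerpt."""
    if k <= 0:
        return "poor"
    if k <= 0.2:
        return "slight"
    if k <= 0.4:
        return "fair"
    if k <= 0.6:
        return "moderate"
    if k <= 0.8:
        return "substantial"
    return "almost perfect"

print(f"Fleiss' kappa = {kappa:.3f} ({interpret(kappa)} agreement)")
```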
“…Meanwhile, when agreement was analyzed according to the K index, the modified Friedewald equation showed the highest agreement in the overall analysis and the subgroup of patients with TG below 150 mg/dL, aged over 65 years, female subjects, and with diabetes (Table 3). However, there could be differences in findings between two statistical methods based on scale measurements, as reported previously. 23 The high K index of the Friedewald equation can be attributed to the definition of the study population.…”
Section: Discussion (mentioning; confidence: 72%)