2019
DOI: 10.1002/sim.8398

Assessing method agreement for paired repeated binary measurements administered by multiple raters

Abstract: Method comparison studies are essential for development in medical and clinical fields. These studies often compare a cheaper, faster, or less invasive measuring method with a widely used one to see if they have sufficient agreement for interchangeable use. Moreover, unlike simply reading measurements from devices, e.g., reading body temperature from a thermometer, the response measurement in many clinical and medical assessments is impacted not only by the measuring device but also by the rater. For example, …

Cited by 5 publications (8 citation statements). References 38 publications.
“…An individual-level summary measure for each method was given based upon the latent variable formulation of the GLMM used for testing method agreement. 17 That is, a pair of model-estimated continuous delirium outcomes for the CAM and the 3D-CAM was determined for each of the 299 patients and used to plot the Bland-Altman diagram. A pair of model-estimated binary delirium outcomes was then generated based on the latent variable for the evaluation of Cohen κ.…”
Section: Results (mentioning, confidence: 99%)
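The workflow described in this excerpt, continuous model-based estimates summarized with a Bland-Altman diagram and dichotomized estimates summarized with Cohen's kappa, can be illustrated with a short sketch. The code below is not the cited study's analysis; it uses simulated per-patient scores and hypothetical names (cam, cam3d, threshold) only to show how the two agreement summaries are computed from paired estimates.

```python
# Illustrative sketch only (simulated data, not the cited study's code).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n = 299                                   # number of patients mentioned in the excerpt
cam = rng.normal(0.0, 1.0, n)             # simulated model-estimated continuous outcome, method 1
cam3d = cam + rng.normal(0.05, 0.3, n)    # method 2, correlated with method 1

# Bland-Altman summary: mean difference (bias) and 95% limits of agreement.
diff = cam3d - cam
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.3f}, limits of agreement = ({loa_low:.3f}, {loa_high:.3f})")

# Dichotomize each continuous estimate at an arbitrary threshold to obtain
# paired binary outcomes, then summarize their agreement with Cohen's kappa.
threshold = 0.5
kappa = cohen_kappa_score(cam > threshold, cam3d > threshold)
print(f"Cohen kappa for the paired binary outcomes: {kappa:.3f}")
```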
“…An individual-level summary measure for each method was given based upon the latent variable formulation of the GLMM used for testing method agreement. 17 Additional Bland-Altman diagrams were generated for each of the 4 features (Figure 2). One feature, altered level of consciousness, was plotted on a log scale since the data must be normally distributed for Bland-Altman analysis.…”
Section: Results (mentioning, confidence: 99%)
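When a feature is analyzed on a log scale, as in the excerpt above, the Bland-Altman computation is unchanged but is applied to log-transformed values, and the exponentiated bias and limits are read as ratios between methods. A minimal sketch under that assumption, with simulated positive-valued data and illustrative names only:

```python
# Illustrative sketch only: Bland-Altman analysis on the log scale.
import numpy as np

rng = np.random.default_rng(1)
m1 = rng.lognormal(mean=1.0, sigma=0.4, size=299)         # simulated positive-valued feature, method 1
m2 = m1 * rng.lognormal(mean=0.02, sigma=0.10, size=299)  # method 2 with multiplicative error

log_diff = np.log(m2) - np.log(m1)
bias = log_diff.mean()
loa = bias + np.array([-1.96, 1.96]) * log_diff.std(ddof=1)

# Back-transforming gives the geometric mean ratio m2/m1 and its 95% limits.
print("mean ratio:", np.exp(bias).round(3))
print("ratio limits of agreement:", np.exp(loa).round(3))
```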
“…Despite a high percentage (97%) of overall agreement between raters for classifying breast type, weighted kappa revealed fair agreement between raters for classifying breast type; however, 0 was in the 95% CI, indicating no agreement. Kappa is affected by the prevalence of the attribute being observed and for rare findings low values of kappa may not necessarily reflect low rates of overall agreement (Wang et al, 2020). This was the situation with our sample because 90% of participants were observed to have Type 1 breasts and 10% Type 2 (no Types 3 or 4 breasts were identified).…”
Section: Discussion (mentioning, confidence: 99%)
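The prevalence effect noted in this excerpt is easy to reproduce with a toy two-rater example: when nearly all participants fall into one category, chance agreement is already very high, so kappa can sit near zero even though raw agreement is about 97%. The ratings below are hypothetical and are not the cited study's data.

```python
# Illustrative sketch only: high raw agreement, near-zero kappa under extreme prevalence.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical 100 participants: both raters assign Type 1 to 97 of them,
# and the raters disagree on the remaining 3 (no joint Type 2 assignments).
rater_a = np.array([1] * 97 + [1, 1, 2])
rater_b = np.array([1] * 97 + [2, 2, 1])

percent_agreement = np.mean(rater_a == rater_b)   # 0.97
kappa = cohen_kappa_score(rater_a, rater_b)       # close to 0 (slightly negative here)
print(f"overall agreement = {percent_agreement:.2f}, kappa = {kappa:.3f}")
```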
“…For categorizing breast type, fair interrater reliability was considered if the weighted kappa was 0.21–0.40, moderate if it was 0.41– 0.6, substantial if it fell between 0.61–0.8 and almost perfect if it was greater than 0.8 (Denham, 2016). Kappa is affected by the prevalence of the observed attribute, meaning that for rare findings, a very low value of kappa may not necessarily reflect low rates of overall agreement (Wang et al, 2020).…”
Section: Methods (mentioning, confidence: 99%)
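A short sketch of how those interpretation bands might be applied to a linear-weighted kappa for an ordinal rating such as breast type (1-4). The ratings and the helper function are hypothetical; only the cut points come from the excerpt above (Denham, 2016).

```python
# Illustrative sketch only: weighted kappa for ordinal ratings plus the
# interpretation bands quoted above.
from sklearn.metrics import cohen_kappa_score

def interpret_kappa(k: float) -> str:
    """Map a kappa value to the bands cited in the excerpt (hypothetical helper)."""
    if k > 0.80:
        return "almost perfect"
    if k > 0.60:
        return "substantial"
    if k > 0.40:
        return "moderate"
    if k >= 0.21:
        return "fair"
    return "below fair per the cited bands"

# Hypothetical ordinal ratings (breast types 1-4) from two raters.
rater_a = [1, 1, 2, 2, 3, 1, 4, 2, 1, 3]
rater_b = [1, 2, 2, 2, 3, 1, 3, 2, 1, 4]

wk = cohen_kappa_score(rater_a, rater_b, weights="linear")  # linear-weighted kappa
print(f"weighted kappa = {wk:.2f} -> {interpret_kappa(wk)}")
```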