2008
DOI: 10.1186/1471-2288-8-29

Examining intra-rater and inter-rater response agreement: A medical chart abstraction study of a community-based asthma care program

Abstract: Background: To assess the intra- and inter-rater agreement of chart abstractors from multiple sites involved in the evaluation of an Asthma Care Program (ACP).

Cited by 33 publications (26 citation statements). References 19 publications.
“…We tested the performance of the data-collection instrument used for medical-record abstraction on 10 patients excluded for these analyses prior to use in the present study. Then, to confirm the accuracy of data taken from medical records, we randomly selected 20 women included in this sample for repeat medical-record review, which represented 5% of the reviews we performed in this study, as recommended in protocols for chart-abstraction reliability testing (To, Estrabillo, Wang, & Cicutto, 2008). Record reviews showed 95.14% of data were accurate, meeting accuracy standards of > 95% (Mi, Collins, Lerner, Losina, & Katz, 2013).…”
Section: Methods (mentioning; confidence: 99%)
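The check described in the excerpt above reduces to a field-by-field comparison of the original and repeat abstractions. A minimal sketch of that percent-agreement calculation, with invented field names and values (nothing here comes from the cited study):

```python
# Minimal sketch: percent agreement between an original and a repeat chart
# abstraction. Field names and values are hypothetical, for illustration only.

original = {"age": 34, "smoker": "no", "controller_rx": "yes", "ed_visits": 1}
repeat_review = {"age": 34, "smoker": "no", "controller_rx": "yes", "ed_visits": 2}

def percent_agreement(first: dict, second: dict) -> float:
    """Share of abstracted fields with identical values in both reviews."""
    shared = set(first) & set(second)
    matches = sum(first[k] == second[k] for k in shared)
    return 100.0 * matches / len(shared)

print(f"{percent_agreement(original, repeat_review):.2f}% agreement")  # 75.00% here
# A study-level figure such as the 95.14% quoted above would pool matches across
# all re-abstracted charts before dividing.
```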
“…Kappa adjusts for chance agreement among two ratings. While kappa has been most commonly used to assess agreement between raters (inter-rater reliability), it can also be used to assess similarity or concordance between ratings within the same raters (Fleiss, 1981; Grootendorst, Feeny, & Furlong, 1997; To, Estrabillo, Wang, & Cicutto, 2008). Kappa values greater than .75 may be taken to represent excellent agreement, values in the .40 to .75 range, fair to good agreement, and values less than .40, poor agreement (Fleiss, 1981).…”
Section: Methods (mentioning; confidence: 99%)
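Because the excerpt turns on how kappa adjusts for chance agreement, a minimal sketch of Cohen's kappa for two ratings of the same charts may help. The binary item and the ratings are invented for illustration; the interpretation bands are the Fleiss (1981) cut-offs quoted above.

```python
from collections import Counter

# Hypothetical example: two abstractions of the same 10 charts for one binary item.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]

def cohens_kappa(a, b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                        # observed
    freq_a, freq_b = Counter(a), Counter(b)
    p_e = sum(freq_a[c] * freq_b[c] for c in set(a) | set(b)) / n**2   # by chance
    return (p_o - p_e) / (1 - p_e)

k = cohens_kappa(rater_a, rater_b)
# Fleiss (1981) bands: > 0.75 excellent, 0.40-0.75 fair to good, < 0.40 poor.
band = "excellent" if k > 0.75 else "fair to good" if k >= 0.40 else "poor"
print(f"kappa = {k:.2f} ({band})")  # kappa = 0.58 (fair to good) for these data
```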
“…Similarly, To et al 17 found an overall percent agreement of 93% and an overall κ of 0.81 when they examined IRR between the study chart abstractor and an experienced nonstudy chart abstractor, using 8 fictitious medical charts.…”
Section: Discussion (mentioning; confidence: 99%)
“…Finally, there have not been many IRR studies in primary care research, 5,6,17 to assist in determining the lowest threshold of data quality before repeated collection is required. A common interpretation of κ states that a value between 0.61 and 0.80 constitutes substantial agreement between raters, while in emergency medicine research, the benchmark is 95% agreement.…”
Section: Interrater Reliability in Data Collection (mentioning; confidence: 99%)