2002
DOI: 10.1046/j.1528-1157.2002.20902.x
Interrater Reliability among Epilepsy Centers: Multicenter Study of Epilepsy Surgery

Abstract (truncated): Purpose: To measure the interrater reliability of presurgical testing and surgical decisions among epilepsy centers. Methods: Seven centers participating in an ongoing, prospective multicenter study of resective epilepsy surgery agreed to conform to a detailed protocol regarding presurgical evaluation and surgery. To assess quality assurance, each center independently reviewed 21 randomly selected surgical cases for preoperative study lateralization and localization, and surgical decisions. Interrater r…

Cited by 37 publications (33 citation statements) | References 23 publications
“…[9] The coefficients for IRR can vary dramatically across different fields. As a reference point, one study revealed an IRR of 0.83 between epilepsy centers on whether to perform epilepsy surgery,[10] and the IRR between sleep centers for scoring 5 different sleep stages was 0.68.[11] It is well known that the range of values is constrained by the margins.…”
Section: Results (mentioning; confidence: 99%)
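The closing remark about margins deserves unpacking: with Cohen's kappa, the two raters' marginal rates cap the agreement that is attainable. Below is a minimal sketch in Python with an invented 2×2 table; none of the counts come from the cited studies, and `kappa_max` is the standard margin-constrained ceiling, not something computed in the excerpt.

```python
# Minimal sketch: Cohen's kappa for two raters making a binary decision
# (e.g., "operate" vs. "do not operate"). All counts are hypothetical.

def cohen_kappa(table):
    """Cohen's kappa for a square contingency table (list of row lists)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n                  # observed agreement
    row = [sum(table[i][j] for j in range(k)) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_e = sum(row[i] * col[i] for i in range(k))                  # chance agreement
    return (p_o - p_e) / (1 - p_e)

def kappa_max(table):
    """Largest kappa attainable with the observed margins: the observed
    agreement can be at most sum_i min(row_i, col_i)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    row = [sum(table[i][j] for j in range(k)) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_e = sum(row[i] * col[i] for i in range(k))
    p_max = sum(min(row[i], col[i]) for i in range(k))
    return (p_max - p_e) / (1 - p_e)

# Hypothetical counts: rows = center A's decision, columns = center B's.
table = [[15, 2],   # A: operate     -> B: operate / B: not operate
         [1,  3]]   # A: not operate -> B: operate / B: not operate
print(f"kappa     = {cohen_kappa(table):.2f}")   # ~0.58
print(f"kappa_max = {kappa_max(table):.2f}")     # ~0.86, the margin ceiling
```

With these counts kappa is about 0.58 while the ceiling is about 0.86; the more the two raters' base rates diverge, the lower that ceiling falls, which is the sense in which the range of kappa is "constrained by the margins".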
“…However, the demonstrated ability of the scale to relatively accurately differentiate between the LTL and RTL subgroups, as reported above, may suggest that factors other than the psychometric properties of the scale are at work in influencing the performance of patients in this sample. On the other hand, the degree of agreement between the neuropsychological data and other evaluation modalities was at best 'fair' (Haut et al., 2002) and limited to temporal lobe areas. Studies that compare and contrast the efficacy of different evaluation modalities in documenting dysfunction in epilepsy patients at the presurgical workup are quite rare (e.g., Akanuma et al., 2003; Haut et al., 2002). Such comparison studies can provide a gold standard against which neuropsychological evaluation tools can be calibrated.…”
Section: Discussion (mentioning; confidence: 91%)
“…The latter part can lead to a high degree of variability and inconsistency between reviewers (Haut et al., 2002; Benbadis et al., 2009; Gerber et al., 2008; Azuma et al., 2003), and given that findings are not recorded using a fixed category of outcomes, and that no commonly accepted guidelines exist for describing some properties, reports become difficult to query and compare with the findings of other clinicians. In recent work, Beniczky et al. (2013) describe a set of guidelines and definitions (including the reporting of common background properties) being constructed as part of a pan-European project, with the goal of providing more consistency and structure for the reports in clinical EEG reviews (Beniczky et al., 2013; Aurlien et al., 2004, 2007).…”
Section: Background Activity (mentioning; confidence: 99%)
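To make the "fixed category of outcomes" point concrete, here is a toy sketch of a structured background-activity record. This is not the SCORE schema or any published standard; the class, field names, and categories are invented purely to illustrate why enumerated fields make reports queryable where free text is not.

```python
# Toy illustration (invented schema, not SCORE): enumerated fields make
# EEG background findings queryable and comparable across clinicians.

from dataclasses import dataclass
from enum import Enum

class PosteriorRhythm(Enum):
    NORMAL = "normal"
    SLOWED = "slowed"
    ABSENT = "absent"

@dataclass
class BackgroundActivity:
    posterior_rhythm: PosteriorRhythm
    frequency_hz: float               # dominant posterior frequency
    reactive_to_eye_opening: bool

report = BackgroundActivity(PosteriorRhythm.SLOWED, 7.5, True)

# A categorical record supports precise queries that free text does not:
if report.posterior_rhythm is PosteriorRhythm.SLOWED and report.frequency_hz < 8.0:
    print("flag: slowed posterior dominant rhythm")
```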
“…Unfortunately, various studies have shown that large inter- and intra-observer variability still exists between reviewers. Depending on the reported feature or decision outcome, inter-rater agreement (kappa coefficients) ranges from slight (0.09) to substantial (0.94) (Haut et al., 2002; Benbadis et al., 2009; Gerber et al., 2008; Azuma et al., 2003). One of the main reasons for this is a lack of consistency in describing the properties accurately.…”
Section: Introduction (mentioning; confidence: 99%)
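The qualifiers "slight" and "substantial" in this excerpt match the widely used Landis & Koch (1977) benchmark bands for kappa. As a small illustration, the helper below (my own sketch, not code from any cited study) maps the kappa figures quoted in the excerpts above onto those bands.

```python
# Sketch: map a kappa value to the Landis & Koch (1977) agreement bands.

def landis_koch(kappa):
    """Return the Landis & Koch label for a kappa coefficient."""
    if kappa < 0:
        return "poor"
    for upper, label in [(0.20, "slight"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial")]:
        if kappa <= upper:
            return label
    return "almost perfect"  # 0.81-1.00

for k in (0.09, 0.68, 0.83):          # figures quoted in the excerpts above
    print(f"kappa = {k:.2f} -> {landis_koch(k)}")
```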