2017
DOI: 10.1016/j.diii.2016.05.014
Agreement studies in radiology research

Abstract: Agreement studies are often treated as preliminary work and are not adequately reported. Studies dedicated specifically to agreement are infrequent; they represent research opportunities that should be promoted.

Cited by 29 publications (18 citation statements)
References 16 publications
“…In addition, the poor interobserver agreement was not influenced by readers’ experience. These findings add to other limitations of all DSA-based collateral grading, even those which demonstrated a higher interobserver agreement [29]. The main limitation is that, for patients with AIS-LVO, therapeutic decisions have to be made in advance of DSA.…”
Section: Discussion
confidence: 82%
“…The electronic survey used imaging data from the THRACE [23] randomized trial database, sent to a large number of readers with different levels of experience and from various backgrounds and institutions, as methodologically requested [29]. In our opinion, an electronic survey accounts for non-response bias and maximizes the response rate, offering the possibility of having a large panel of readers.…”
Section: Discussion
confidence: 99%
“…These may be very informative, but they are rarely performed [31,40,41]. Better agreement can be expected when the same clinician responds twice to the same series of cases (typically weeks apart, with patients presented in a different order to assure independence between judgments), but the risk here is that the clinician may reveal their own inconsistencies in decision-making. In the case of diagnostic tests, poor intra-rater agreement (across multiple raters) is evidence of the lack of reliability of the score/measurement/diagnostic categories, and a strong indication that the scale or categories should be modified.…”
Section: Spectrum Of Clinicians
confidence: 99%