2021
DOI: 10.1186/s43055-021-00485-2

Validation of imaging reporting and data system of coronavirus disease 2019 lexicons CO-RADS and COVID-RADS with radiologists’ preference: a multicentric study

Abstract: Background A retrospective multicentric study gathered 1439 chest CT studies with suspected coronavirus disease 2019 (COVID-19) involvement. Three radiologists, blinded to other results, interpreted all studies using both lexicons and documented the applicability of, and preferred score for, every case. The purpose of the study is to assess the applicability and diagnostic efficacy of the COVID-19 standardized assessment schemes (the CO-RADS and COVID-RADS lexicons). Results…

Cited by 4 publications (3 citation statements) · References: 29 publications
“…This suggests that pulmonologists and house officers can also reliably apply the CO-RADS, although there is more variability compared to specialized radiologists. These results are consistent with those of previous studies [ 3 , 5 , 13 , 15 , 16 , 21 , 22 , 23 , 24 , 25 , 26 ]. Fonseca et al [ 3 ] emphasized the substantial inter-observer agreement among the three readers for CO-RADS classifications, even three months after the initial case analysis, and without any additional training (κ = 0.642).…”
Section: Discussion (supporting)
confidence: 94%
“…Nair et al [ 24 ] reported an overall moderate inter-observer agreement for CO-RADS categories among the six readers (κ = 0.548). Atta et al [ 25 ] reported an overall substantial agreement among three readers (κ = 0.78). Sushentsev et al [ 26 ] demonstrated moderate inter-observer agreement among the three readers for the CO-RADS, with a κ value of 0.51.…”
Section: Discussion (mentioning)
confidence: 99%
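
The κ values quoted above are kappa statistics for inter-observer agreement on CO-RADS categories. As a rough illustration only (the reader labels below are invented and not taken from any of the cited studies), pairwise agreement between two readers could be computed with scikit-learn as in the following sketch; multi-reader studies such as those cited typically average pairwise kappas or use Fleiss' kappa instead.

```python
# Illustrative sketch (not from the cited paper): Cohen's kappa for
# inter-observer agreement between two readers assigning CO-RADS
# categories (1-5) to the same set of CT studies.
from sklearn.metrics import cohen_kappa_score

# Hypothetical CO-RADS categories assigned by two readers to ten CT studies
reader_a = [1, 3, 5, 4, 2, 5, 3, 1, 4, 5]
reader_b = [1, 3, 5, 5, 2, 4, 3, 2, 4, 5]

# Unweighted kappa: chance-corrected agreement, kappa = (p_o - p_e) / (1 - p_e)
kappa = cohen_kappa_score(reader_a, reader_b)

# Linearly weighted kappa is often preferred for ordinal scales such as
# CO-RADS, since near-miss disagreements (4 vs 5) are penalized less than 1 vs 5.
weighted_kappa = cohen_kappa_score(reader_a, reader_b, weights="linear")

print(f"Unweighted kappa: {kappa:.3f}")
print(f"Linearly weighted kappa: {weighted_kappa:.3f}")
```

Whether unweighted or weighted kappa is reported affects comparability between studies, which is one reason the agreement figures quoted here range from moderate to substantial.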
“…For the overall CT categories, the interobserver agreement was reported in 18 studies: 12 studies reported diagnostic accuracy and the interobserver agreement together [ 9 , 20 , 22 , 25 , 28 , 36 – 39 , 45 , 51 , 54 ] and six studies solely reported the interobserver agreement [ 59 , 61 – 65 ]. They reported κ values ranged from 0.43 to 0.90.…”
Section: Results (mentioning)
confidence: 99%