Background
The respiratory illness caused by SARS‐CoV‐2 infection continues to present diagnostic challenges. Our 2020 edition of this review showed thoracic (chest) imaging to be sensitive and moderately specific in the diagnosis of coronavirus disease 2019 (COVID‐19). In this update, we include new relevant studies and have removed studies with case‐control designs and those not intended to be diagnostic test accuracy studies.

Objectives
To evaluate the diagnostic accuracy of thoracic imaging (computed tomography (CT), X‐ray and ultrasound) in people with suspected COVID‐19.

Search methods
We searched the COVID‐19 Living Evidence Database from the University of Bern, the Cochrane COVID‐19 Study Register, the Stephen B. Thacker CDC Library, and repositories of COVID‐19 publications through to 30 September 2020. We did not apply any language restrictions.

Selection criteria
We included studies of all designs, except for case‐control, that recruited participants of any age group suspected to have COVID‐19 and that reported estimates of test accuracy or provided data from which we could compute estimates.

Data collection and analysis
The review authors independently and in duplicate screened articles, extracted data, and assessed risk of bias and applicability concerns using the QUADAS‐2 domain list. We presented the results of estimated sensitivity and specificity using paired forest plots, and we summarised pooled estimates in tables. We used a bivariate meta‐analysis model where appropriate. We presented the uncertainty of accuracy estimates using 95% confidence intervals (CIs).

Main results
We included 51 studies with 19,775 participants suspected of having COVID‐19, of whom 10,155 (51%) had a final diagnosis of COVID‐19. Forty‐seven studies evaluated one imaging modality each, and four studies evaluated two imaging modalities each. All studies used RT‐PCR as the reference standard for the diagnosis of COVID‐19: 47 studies used RT‐PCR alone, and four used a combination of RT‐PCR and other criteria (such as clinical signs, imaging tests, positive contacts, and follow‐up phone calls). Studies were conducted in Europe (33), Asia (13), North America (3) and South America (2); included only adults (26), all ages (21), children only (1), adults over 70 years (1), or an unclear age group (2); and were set in inpatient care (2), outpatient care (32), or an unclear setting (17). Risk of bias was high or unclear in 32 (63%) studies with respect to participant selection, 40 (78%) with respect to the reference standard, 30 (59%) with respect to the index test, and 24 (47%) with respect to participant flow. For chest CT (41 studies, 16,133 participants, 8110 (50%) cases), sensitivity ranged from 56.3% to 100% and specificity ranged from 25.4% to 97.4%. The pooled sensitivit...
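Where primary studies report only a 2x2 table of results against the reference standard, per-study accuracy estimates of the kind summarised above can be computed directly. A minimal sketch (not the review's analysis code; the counts and the logit-scale interval method are illustrative assumptions):

```python
# Illustrative sketch: sensitivity, specificity, and approximate 95% CIs
# from a single hypothetical study's 2x2 table against RT-PCR.
from math import exp, log, sqrt

def proportion_with_ci(successes: int, total: int, z: float = 1.96):
    """Point estimate and 95% CI computed on the logit scale.

    Assumes 0 < successes < total (no continuity correction applied).
    """
    p = successes / total
    logit = log(p / (1 - p))
    se = sqrt(1 / successes + 1 / (total - successes))
    inv = lambda x: exp(x) / (1 + exp(x))  # inverse logit
    return p, inv(logit - z * se), inv(logit + z * se)

# Hypothetical counts: true positives, false negatives, false positives, true negatives
tp, fn, fp, tn = 90, 10, 20, 80
sens = proportion_with_ci(tp, tp + fn)  # sensitivity = TP / (TP + FN)
spec = proportion_with_ci(tn, tn + fp)  # specificity = TN / (TN + FP)
print(f"sensitivity {sens[0]:.1%} (95% CI {sens[1]:.1%}-{sens[2]:.1%})")
print(f"specificity {spec[0]:.1%} (95% CI {spec[1]:.1%}-{spec[2]:.1%})")
```

Pooling such estimates across studies, as the review does, requires a bivariate meta-analysis model that accounts for the correlation between sensitivity and specificity; the sketch above covers only the per-study step.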
Background: Preferential publication of studies with positive findings can lead to overestimation of diagnostic test accuracy (i.e. publication bias). Understanding the contribution of the editorial process to publication bias could inform interventions to optimize the evidence guiding clinical decisions. Purpose/Hypothesis: To evaluate whether accuracy estimates, abstract conclusion positivity, and completeness of abstract reporting are associated with acceptance to radiology conferences and journals. Study Type: Meta-research.
Background
P-hacking, the tendency to run selective analyses until they become significant, is prevalent in many scientific disciplines.

Purpose
This study aims to assess whether p-hacking exists in imaging research.

Methods
Protocol, data, and code are available at https://osf.io/xz9ku/?view_only=a9f7c2d841684cb7a3616f567db273fa . We searched imaging journals in Ovid MEDLINE from 1972 to 2021. Text mining with a Python script was used to collect metadata: journal, publication year, title, abstract, and P-values from abstracts. One P-value was randomly sampled per abstract. We assessed for evidence of p-hacking using a p-curve, evaluating whether P-values concentrate just below .05. We conducted a one-tailed binomial test (α = .05 level of significance) to assess whether more P-values fell in the upper range (e.g., .045 < P < .05) than in the lower range (e.g., .04 < P < .045). To assess the variation introduced by randomly sampling a single P-value per abstract, we repeated the sampling process 1000 times and pooled results across the samples. Analyses were also run within 10-year periods to determine whether p-hacking practices evolved over time.

Results
Our search of 136 journals identified 967,981 abstracts. Text mining identified 293,687 P-values, and a total of 4105 randomly sampled P-values were included in the p-hacking analysis; 108/136 journals (80%) and 4105/967,981 abstracts (0.4%) contributed to the analysis. P-values did not concentrate just under .05; in fact, more P-values fell in the lower range (.04 < P < .045) than just below .05 (.045 < P < .05), indicating a lack of evidence for p-hacking. Time-trend analysis did not identify p-hacking in any of the five 10-year periods.

Conclusion
We did not identify evidence of p-hacking in abstracts published in over 100 imaging journals since 1972.
These analyses cannot detect all forms of p-hacking, and other forms of bias may exist in imaging research such as publication bias and selective outcome reporting.
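The binomial check described in the Methods can be sketched as follows. This is a minimal illustration, not the authors' published code; the bin edges and the example P-values are assumptions:

```python
# Illustrative p-curve check: count significant P-values just below .05
# versus slightly lower, then apply a one-tailed binomial test. An excess
# just under .05 would suggest p-hacking.
from math import comb

def binomial_upper_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def p_hacking_test(p_values, lo=(0.04, 0.045), hi=(0.045, 0.05)) -> float:
    """One-tailed test: are more P-values in the upper bin than the lower?

    Under H0 (no p-hacking), a significant P-value is equally likely to
    land in either bin, so the upper-bin count is Binomial(n, 0.5).
    """
    lower = sum(lo[0] < p < lo[1] for p in p_values)
    upper = sum(hi[0] < p < hi[1] for p in p_values)
    n = lower + upper
    return binomial_upper_tail(upper, n) if n else 1.0

# Hypothetical sample: more values in the lower bin -> no evidence of p-hacking
sample = [0.041, 0.042, 0.043, 0.044, 0.046, 0.048]
print(round(p_hacking_test(sample), 3))  # prints 0.891
```

As the limitation above notes, a null result from a test like this does not rule out other selective-reporting behaviours; it only checks for a pile-up of P-values just under the significance threshold.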
The ongoing coronavirus disease 2019 (COVID-19) pandemic continues to present diagnostic challenges. The use of thoracic radiography has been studied as a method to improve the diagnostic accuracy of COVID-19. The ‘Living’ Cochrane Systematic Review on the diagnostic accuracy of imaging tests for COVID-19 is continuously updated as new information becomes available. In the most recent version, published in March 2021, a meta-analysis was performed to determine the pooled sensitivity and specificity of chest X-ray (CXR) and lung ultrasound (LUS) for the diagnosis of COVID-19. CXR gave a sensitivity of 80.6% (95% CI: 69.1-88.6) and a specificity of 71.5% (95% CI: 59.8-80.8). LUS gave a sensitivity of 86.4% (95% CI: 72.7-93.9) and a specificity of 54.6% (95% CI: 35.3-72.6). These results differ from the findings reported in the recent article in this journal, which cited previous versions of the review, in which a meta-analysis for CXR and LUS could not be performed. Additionally, that article states that COVID-19 could not be distinguished from other respiratory diseases using chest computed tomography (CT). However, the latest review version reports a specificity for chest CT of 80.0% (95% CI: 74.9-84.3), much higher than the 61.1% (95% CI: 42.3-77.1) indicated in the previous version. Therefore, CXR, chest CT and LUS have the potential to be used in conjunction with other methods in the diagnosis of COVID-19.